Scientists Made AI "See" What Isn’t There: Machine Vision Proves Vulnerable to Adversarial Attacks

Researchers at North Carolina State University have developed a new method called RisingAttacK, which manipulates the data fed into artificial intelligence systems, causing them to misinterpret images.

This technique exploits vulnerabilities in machine vision by making imperceptible changes to images: alterations invisible to the human eye yet sufficient to confuse the AI.
The implications of this discovery are profound, as it highlights potential risks in areas where AI plays a critical role, such as autonomous vehicles, healthcare diagnostics, and security systems.

How RisingAttacK Works

RisingAttacK operates through a series of precise steps designed to identify and exploit weaknesses in AI image recognition systems:

Feature Identification:
The program scans an image to detect all visual features and determines which ones are most crucial for the AI’s decision-making process. For example, in a photo of a car, specific pixels or patterns might heavily influence whether the AI recognizes it as a vehicle.

Sensitivity Analysis:
Once key features are identified, RisingAttacK calculates how sensitive the AI system is to alterations in these elements. By understanding the threshold at which minor modifications affect the AI’s judgment, attackers can introduce subtle distortions that go unnoticed by humans.

Minimal Manipulation:
The final step involves applying minimal, almost undetectable changes to the image. These tweaks are carefully crafted to deceive the AI while leaving the picture visually unchanged to the human observer.

The result?
Two seemingly identical images—one that the AI correctly interprets and another that baffles it entirely. For instance, an autonomous car’s AI might fail to recognize a stoplight or even pedestrians due to these manipulations.
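The article does not spell out the exact optimization behind RisingAttacK, but the general idea of gradient-guided, imperceptible perturbations can be illustrated with the classic fast gradient sign method (FGSM), a well-known generic attack. The sketch below uses a pretrained ResNet-50 from torchvision; the input file name and the epsilon value are illustrative assumptions, and this is not the RisingAttacK algorithm itself.

```python
# Illustrative sketch only: classic FGSM, not RisingAttacK, which targets the
# most influential visual features rather than nudging every pixel uniformly.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

# "stoplight.jpg" is a hypothetical input file.
image = preprocess(Image.open("stoplight.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Sensitivity analysis: gradients of the loss w.r.t. input pixels reveal
# which pixels most strongly influence the model's prediction.
logits = model(normalize(image))
label = logits.argmax(dim=1)
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

# Minimal manipulation: nudge each pixel by a tiny amount (epsilon) in the
# direction that increases the loss; the change is invisible to a human
# but can flip the model's prediction.
epsilon = 2.0 / 255.0
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(normalize(image)).argmax(dim=1).item())
print("adversarial prediction:", model(normalize(adversarial)).argmax(dim=1).item())
```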

Real-World Implications

The potential consequences of adversarial attacks like RisingAttacK are alarming:

Autonomous Vehicles:
Self-driving cars rely on computer vision to navigate roads safely. If hackers manipulate traffic signals, road signs, or pedestrian detection systems, accidents could occur with catastrophic outcomes.

Healthcare Diagnostics:
Medical imaging tools powered by AI help doctors diagnose conditions like fractures, tumors, or infections. However, adversarial attacks could lead to incorrect diagnoses, jeopardizing patient safety and trust in AI-driven healthcare solutions.

Security Systems:
Facial recognition and surveillance technologies depend on accurate object and face detection. RisingAttacK could render these systems ineffective, enabling unauthorized access or evasion of monitoring.

Broader AI Applications:
Researchers are now exploring whether similar techniques can compromise large language models (LLMs) and other AI systems. Such vulnerabilities could undermine confidence in AI across multiple industries.


Testing Against Leading AI Models

To demonstrate the effectiveness of RisingAttacK, the researchers tested their method against four widely used computer vision models: ResNet-50, DenseNet-121, ViT-B, and DeiT-B. Remarkably, the technique successfully deceived all four systems without exception. According to Tianfu Wu, one of the study’s authors, “We wanted to find an efficient way to hack computer vision systems because they’re often deployed in contexts that impact human health and safety.”
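As a rough illustration of that kind of evaluation, the sketch below loads the four named architectures and compares their predictions on a clean image and its perturbed counterpart. The torchvision and timm model identifiers are assumptions about where pretrained weights might come from; they are not part of the study.

```python
# Sketch: compare predictions of several architectures on a clean image and
# its perturbed counterpart. Model identifiers are illustrative; DeiT-B is
# assumed to be loaded through the timm library rather than torchvision.
import torch
import torchvision.models as tv
import timm

models_under_test = {
    "ResNet-50":    tv.resnet50(weights=tv.ResNet50_Weights.DEFAULT),
    "DenseNet-121": tv.densenet121(weights=tv.DenseNet121_Weights.DEFAULT),
    "ViT-B/16":     tv.vit_b_16(weights=tv.ViT_B_16_Weights.DEFAULT),
    "DeiT-B":       timm.create_model("deit_base_patch16_224", pretrained=True),
}

@torch.no_grad()
def predictions(image: torch.Tensor) -> dict:
    """Return each model's top-1 class index for a normalized 1x3x224x224 tensor."""
    return {name: m.eval()(image).argmax(dim=1).item()
            for name, m in models_under_test.items()}

# clean_input and adversarial_input would be preprocessed tensors produced
# as in the earlier sketch (hypothetical here):
# fooled = [n for n in models_under_test
#           if predictions(clean_input)[n] != predictions(adversarial_input)[n]]
# print("models fooled:", fooled)
```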

This underscores the urgency of addressing these vulnerabilities before malicious actors exploit them in real-world scenarios.

The Importance of Identifying Weaknesses

The research team emphasizes that exposing such flaws is a necessary step toward building more robust AI systems. By understanding how adversarial attacks work, developers can create countermeasures to defend against them.

Currently, the scientists are working on strategies to enhance AI resilience against RisingAttacK and similar threats.
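Those defensive strategies have not yet been published. One common, generic countermeasure, however, is adversarial training, in which a model is trained on perturbed examples alongside clean ones. The sketch below is a minimal illustration of that general idea, not the team’s own method.

```python
# Sketch of adversarial training, a widely used generic defense: each training
# batch is augmented with adversarially perturbed copies so the model learns
# to classify them correctly. Not the NC State team's specific countermeasure.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=2.0 / 255.0):
    """Generate FGSM adversarial examples for one batch (illustrative)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, images, labels):
    """One optimization step on a mix of clean and adversarial images."""
    adv_images = fgsm_perturb(model, images, labels)
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels) +
            F.cross_entropy(model(adv_images), labels)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()
```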

Additionally, the team plans to investigate whether analogous methods can target other types of AI, including natural language processing models. This broader exploration aims to uncover the full scope of potential risks posed by adversarial attacks.

A Call for Vigilance

The discovery of RisingAttacK serves as a wake-up call for both developers and users of AI technologies. While artificial intelligence continues to revolutionize industries, its susceptibility to manipulation highlights the need for ongoing vigilance and innovation in cybersecurity.

As Tianfu Wu aptly puts it, “Only by acknowledging the existence of threats can we develop reliable defenses against them.”
In an era where AI increasingly influences critical aspects of daily life, ensuring its integrity must remain a top priority.
