While computer vision has become one of the most widely used technologies across the globe, computer vision models are not immune to threats. One of the main reasons for this vulnerability is the underlying lack of robustness of the models. Indrajit Kar, Principal Solution Architect at Accenture, gave a talk at CVDC 2020 on how to make AI more resilient to attacks.

As Kar shared, AI has become a new target for attackers, and instances of manipulation and adversarial attacks have increased dramatically over the last few years. Everyone from large companies such as Google and Tesla to startups is affected by adversarial attacks.

“While we celebrate advancements in AI, deep neural networks (DNNs)—the algorithms intrinsic to much of AI—have recently been proven to be at risk from attack through seemingly benign inputs. It is possible to fool DNNs by making subtle alterations to input data that often either remain undetected or are overlooked if presented to a human,” he said.

Types Of Adversarial Attacks

Alterations to images that are so small as to remain unnoticed by humans can cause DNNs to misinterpret the image content. As many AI systems take their input from external sources—voice recognition devices or social media uploads, for example—this susceptibility to adversarial input opens a new and serious security threat. This has prompted the cybersecurity and machine learning communities to come together to address these gaps in computer vision systems.
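To make the idea concrete, here is a minimal sketch of one common attack of this kind, the Fast Gradient Sign Method (FGSM). Note that FGSM is used here purely as an illustration and was not necessarily the technique discussed in the talk; the pretrained model and the input tensor names are placeholders.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative target: any image classifier would do; a pretrained
# ResNet-18 stands in for the victim model here.
model = models.resnet18(pretrained=True)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Fast Gradient Sign Method.

    Perturbs `image` (a preprocessed tensor of shape [1, 3, H, W]) one
    small step in the direction that increases the classification loss.
    A small `epsilon` keeps the change imperceptible to humans while
    often being enough to flip the model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    output = model(image)
    loss = F.cross_entropy(output, true_label)
    model.zero_grad()
    loss.backward()
    # Step along the sign of the gradient with respect to the input pixels.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.detach()

# Hypothetical usage: `x` is a preprocessed image tensor and `y` its
# correct class index; the perturbed copy is frequently misclassified
# even though it looks identical to the original.
# x_adv = fgsm_attack(x, torch.tensor([y]), epsilon=0.01)
```

The key point the example illustrates is that the attacker only needs access to (or a good approximation of) the model's gradients; no visible change to the image is required to degrade its predictions.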

