How can simply wearing a specific type of t-shirt make you invisible to person-detection and surveillance systems? Well, researchers have found and exploited the Achilles' heel of deep neural networks, the framework behind some of the best object detectors out there (YOLOv2, Faster R-CNN, and HRNetv2, to name a few).


Earlier approach:

In [1], the authors achieve a benchmark deception success rate of **57%** in real-world use cases. However, this is not the first attempt to deceive an object detector. In [2], the authors designed a way for their model to learn and generate patches that could deceive the detector. This patch, when worn on a piece of cardboard (or any flat surface), could evade the person detector, albeit with a success rate of only 18%.

From [2]. Left: The person without a patch is successfully detected. Right: The person holding the patch is ignored.

“Confusing” or “fooling” the neural network like this is called a *physical adversarial attack*, or a *real-world adversarial attack*. These attacks, originally built on intricately altered pixel values, confuse the network (relative to its training data) into labeling the object as “unknown” or simply ignoring it.
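To make the pixel-level idea concrete, here is a minimal sketch (not taken from [1] or [2]) of a digital adversarial attack in the FGSM style, where each pixel is nudged along the gradient of the loss so the network’s prediction degrades. PyTorch is assumed, and `model`, `image`, and `label` are placeholders for a classifier, an input tensor, and its true class.

```python
# Minimal FGSM-style sketch: alter pixel values along the loss gradient
# so the network misreads the image. Assumes PyTorch; `model`, `image`,
# and `label` are placeholders, not any specific detector from the article.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    image: (1, 3, H, W) tensor with values in [0, 1]; label: (1,) class index.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The same principle (perturbing the input to push a chosen loss in the attacker’s favour) is what the patch-based attacks below apply in the physical world.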

The authors in [2] transform the images in their training data, apply an initial patch, and feed the resulting images into the detector. The object loss obtained is then used to update the pixel values of the patch, with the aim of minimising the *objectness* score.
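A hedged sketch of that optimisation loop, under the assumption of a PyTorch setup, might look like the following. Here `detector` (returning one objectness score per image), `apply_patch`, and `random_transform` are hypothetical stand-ins for the authors’ pipeline, not real library calls.

```python
# Sketch of the patch-optimisation loop described above (PyTorch assumed).
# `detector`, `apply_patch`, and `random_transform` are hypothetical helpers,
# not the authors' actual code.
import torch

patch = torch.rand(3, 300, 300, requires_grad=True)  # learnable patch pixels
optimizer = torch.optim.Adam([patch], lr=0.03)

def train_step(person_images):
    """One gradient step that pushes the patch towards a lower objectness score."""
    optimizer.zero_grad()
    # Random scaling/rotation/brightness so the patch keeps working
    # under varied real-world viewing conditions.
    transformed_patch = random_transform(patch)
    # Overlay the patch on the person region of each training image.
    patched = apply_patch(person_images, transformed_patch)
    # The detector's objectness score is exactly what the attack minimises.
    objectness = detector(patched)
    loss = objectness.mean()
    loss.backward()               # gradients flow back into the patch pixels
    optimizer.step()
    patch.data.clamp_(0.0, 1.0)   # keep the patch a displayable image
    return loss.item()
```

The key design choice is that only the patch pixels are trainable: the detector stays frozen, and the random transforms are what make the learned pattern robust enough to print and carry into the real world.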

#neural-networks #object-detection #deep-learning #computer-vision
