This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

We usually don’t expect the image of a teacup to turn into a cat when we zoom out. But in the world of artificial intelligence research, strange things can happen. Researchers at Germany’s Technische Universität Braunschweig have shown that carefully modifying the pixel values of a digital photo can turn it into a completely different image when it is downscaled.

What’s concerning are the implications these modifications can have for AI algorithms.

Adversarial image-scaling attacks exploit image-resizing algorithms to change the appearance of an image when it is downscaled.
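
The mechanism is easy to demonstrate. Most scaling algorithms compute each output pixel from only a small subset of source pixels; a naive nearest-neighbor scaler, for instance, reads exactly one source pixel per output pixel and ignores everything else. The sketch below is a simplified illustration of this idea, not the attack described in the paper (which targets real scaling libraries and constrains the changes to stay imperceptible): it plants a hypothetical hidden “cat” image at precisely the positions a nearest-neighbor scaler samples from a full-size “teacup” image.

```python
import numpy as np

def nearest_downscale(img, out_h, out_w):
    # Naive nearest-neighbor downscaling: each output pixel is copied
    # from a single source pixel; every other source pixel is ignored.
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows[:, None], cols[None, :]]

def craft_attack(source, target):
    # Plant the target's pixels at exactly the positions the scaler
    # will sample, leaving every other source pixel untouched.
    out_h, out_w = target.shape[:2]
    in_h, in_w = source.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    attack = source.copy()
    attack[rows[:, None], cols[None, :]] = target
    return attack

# Random stand-ins for the full-size "teacup" and the hidden "cat".
rng = np.random.default_rng(0)
teacup = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
cat = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

attack_img = craft_attack(teacup, cat)

# Only 64x64 of the 512x512 pixels were touched (about 1.5 percent)...
modified = (attack_img != teacup).any(axis=-1).mean()
print(f"fraction of pixels modified: {modified:.3%}")

# ...yet the downscaled result is exactly the hidden target image.
assert (nearest_downscale(attack_img, 64, 64) == cat).all()
```

The attacks the researchers study work the same way in spirit, but against the bilinear and bicubic kernels used by standard libraries, where the pixel selection is framed as an optimization problem so that the modified image still looks normal to a human at full resolution.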

Malicious actors can use this image-scaling technique as a launchpad for adversarial attacks against machine learning models, the artificial intelligence algorithms used in computer vision tasks such as facial recognition and object detection. Adversarial machine learning is a class of data manipulation techniques that change the behavior of AI algorithms while going unnoticed by humans.
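
To make that definition concrete, here is a toy example of the classic fast gradient sign method (a standard adversarial-ML technique, not something specific to this paper): a tiny, uniformly bounded perturbation of the input flips a simple logistic-regression model’s prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy logistic-regression "model": p(label=1 | x) = sigmoid(w @ x).
rng = np.random.default_rng(1)
w = rng.normal(size=64)
x = 0.05 * w                      # a clean input the model confidently labels 1

# Gradient of the loss (for true label 1) w.r.t. the input is -(1 - p) * w.
# The fast gradient sign method nudges every feature by +/- epsilon in the
# direction that increases the loss the most.
p_clean = sigmoid(w @ x)
grad_x = -(1.0 - p_clean) * w
epsilon = 0.1                     # small enough to be hard to notice
x_adv = x + epsilon * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv)
print(f"confidence on clean input:     {p_clean:.3f}")   # high, label 1
print(f"confidence on perturbed input: {p_adv:.3f}")     # low, label flips
print(f"max per-feature change:        {np.abs(x_adv - x).max():.3f}")
```

No single feature moves by more than epsilon, yet the prediction changes; image-scaling attacks achieve a similar effect one step earlier in the pipeline, before the model ever sees the input.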

In a paper presented at this year’s USENIX Security Symposium, the TU Braunschweig researchers provide an in-depth analysis of how adversarial image-scaling attacks against machine learning systems are staged and how they can be prevented. Their findings are a reminder that we have yet to discover many of the hidden facets and threats of the AI algorithms that are becoming increasingly prominent in our daily lives.
