This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

It’s not hard to tell that the image below shows three different things: a bird, a dog, and a horse. But to a machine learning algorithm, all three might be the same thing: a small white box with a black contour.

This example illustrates one of the dangerous characteristics of machine learning models: they can be exploited to force them into misclassifying data. (In reality, the box could be much smaller; I’ve enlarged it here for visibility.)

Machine learning algorithms might look for the wrong things in images

This is an example of data poisoning, a special type of adversarial attack. Adversarial attacks are a family of techniques that target the behavior of machine learning and deep learning models.

If applied successfully, data poisoning can give malicious actors backdoor access to machine learning models and enable them to bypass systems controlled by artificial intelligence algorithms.
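As a concrete sketch of how such a trigger-based poisoning attack could be mounted, the snippet below stamps a small white box with a black contour onto a fraction of training images and relabels them with the attacker's target class. A model trained on the tampered data tends to learn the trigger as a shortcut, so any image carrying the box is later pushed toward that class. The function names, the patch placement, and the 5% poison rate are illustrative assumptions, not details from this article.

```python
import numpy as np

def stamp_trigger(image, size=4):
    """Stamp a small white box with a black contour into the bottom-right corner.

    `image` is a 2-D float array with pixel values in [0, 1].
    """
    img = image.copy()
    # Black contour: a (size+2) x (size+2) dark region...
    img[-(size + 2):, -(size + 2):] = 0.0
    # ...with a size x size white interior, leaving a 1-pixel black border.
    img[-(size + 1):-1, -(size + 1):-1] = 1.0
    return img

def poison_dataset(images, labels, target_class, rate=0.05, seed=0):
    """Poison a random fraction of samples: add the trigger, flip the label.

    Returns the tampered copies plus the poisoned indices (for inspection).
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

For example, poisoning a batch of 28x28 grayscale images at `rate=0.05` tampers with only 5% of the samples, which is typically enough for the trigger to act as a backdoor while leaving clean-data accuracy almost unchanged, making the attack hard to spot.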


What is machine learning data poisoning?