Removing the human element from the data collection process for machine learning model training can introduce security vulnerabilities.

Machine learning is the darling of business operations, but breaking it has become a promising and innovative research field in its own right. If the idea of deliberately sabotaging ML models sounds puzzling, a collaborative research team from MIT, the University of Maryland, the University of Illinois Urbana-Champaign, and the University of California, Berkeley wants you to imagine the possibilities.

In a paper published last month, titled “Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses,” the researchers categorize and examine a wide range of dataset vulnerabilities and exploits and suggest approaches for defending against these threats.


As part of their work, the researchers summarized the techniques used to attack datasets. The research lays a foundation for predicting and preventing the security loopholes that businesses and other organizations face as artificial intelligence models evolve rapidly.

These purposeful attacks include poisoning training data so that the resulting model misrepresents or misclassifies items, as well as backdoor attacks that embed hidden triggers, letting perturbed inputs produce aberrant results from an otherwise well-behaved model. The researchers explored these attacks to provide a better understanding of how to prevent them.
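To make the first of these concrete, the sketch below simulates a simple label-flipping poisoning attack on a toy classification task. It is an illustrative example, not a method from the paper: the dataset, the logistic regression model, and the 15% poison rate are assumptions chosen only to show how a small amount of corrupted training data can degrade a model.

```python
# Minimal sketch (illustrative only): a label-flipping data poisoning attack
# on a toy binary classification task. Dataset, model, and poison rate are
# assumptions for demonstration, not drawn from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy dataset standing in for a training corpus collected without human review.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def train_and_score(labels):
    """Train on the given training labels and report clean test accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on clean labels.
clean_acc = train_and_score(y_train)

# Attacker flips the labels of a small fraction of training points,
# mimicking poisoned examples slipped into an unvetted data pipeline.
poison_rate = 0.15
n_poison = int(poison_rate * len(y_train))
poison_idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned_acc = train_and_score(y_poisoned)

print(f"accuracy with clean labels:    {clean_acc:.3f}")
print(f"accuracy with poisoned labels: {poisoned_acc:.3f}")
```

Running the script shows the poisoned model losing accuracy relative to the clean baseline, which is the basic effect that the defenses surveyed in the paper aim to detect or neutralize.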

