Noah Rowe

June 30, 2020

Experts Denounce Racial Bias of Crime-Predictive Facial-Recognition AI

An open letter signed by experts in the field from MIT, Microsoft and Google aims to stop the ‘tech to prison’ pipeline.

More than 1,000 technology experts and academics from organizations such as MIT, Microsoft, Harvard and Google have signed an open letter denouncing a forthcoming paper describing artificial intelligence (AI) algorithms that purport to predict crime based only on a person’s face, calling the work out for promoting racial bias and propagating a #TechtoPrisonPipeline.

The move shows growing concern over the use of facial-recognition technology by law enforcement, as well as support for the Black Lives Matter movement, which has spurred mass protests and demonstrations across the globe after a video went viral depicting the May 25 murder of George Floyd by a former Minneapolis police officer.

The algorithms are outlined in a paper by researchers at Harrisburg University in Pennsylvania, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” which is soon to be published by Springer Publishing, based in Berlin, Germany. The paper describes “automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal,” according to a press release about the research.

The letter, published on the Medium content-sharing platform of the Coalition for Critical Technology, demands that Springer rescind the offer of publication and publicly condemn “the use of criminal justice statistics to predict criminality.” Experts are also asking Springer to acknowledge its own role in “incentivizing such harmful scholarship in the past,” and request that “all publishers” refrain from publishing similar studies in the future.

While crime-prediction technology based on computational research is not, in and of itself, explicitly racially biased, it “reproduces, naturalizes and amplifies discriminatory outcomes,” and is also “based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years,” the experts wrote in their letter.

They outline three key ways in which these types of algorithms are problematic and produce discriminatory outcomes in the justice system. The first refutes a claim in the press release about the paper that the algorithms can “predict if someone is a criminal based solely on a picture of their face,” with “80 percent accuracy and with no racial bias.” This is impossible, the experts argue, because the very notion of “criminality” is itself racially biased.

“Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data,” researchers wrote.
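That distortion is straightforward to demonstrate. The sketch below is a hypothetical illustration, not code from the Harrisburg paper or the letter: it simulates two groups with identical underlying offense rates but unequal policing, then trains a simple classifier on the resulting arrest records. All numbers and variable names are invented for the example.

```python
# Toy simulation (illustrative only): a model trained on arrest records
# inherits enforcement bias even when the two groups behave identically.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Two groups with the SAME true offense rate (5% each).
group = rng.integers(0, 2, size=n)          # 0 = group A, 1 = group B
offended = rng.random(n) < 0.05

# Biased labels: group B is policed more heavily, so its offenses are
# three times as likely to end in an arrest record ("criminal" label).
detection = np.where(group == 1, 0.9, 0.3)
y = (offended & (rng.random(n) < detection)).astype(float)

# Features: intercept, group membership, and noise standing in for
# image-derived features that correlate with group.
X = np.column_stack([np.ones(n), group, rng.normal(size=n)])

# Plain logistic regression fit by gradient descent (numpy only).
w = np.zeros(X.shape[1])
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 1.0 * X.T @ (p - y) / n

p = 1.0 / (1.0 + np.exp(-X @ w))
print(f"mean score, group A: {p[group == 0].mean():.3f}")   # ~0.015
print(f"mean score, group B: {p[group == 1].mean():.3f}")   # ~0.045
# Both groups offend at 5%, yet the model scores group B roughly three
# times higher: it has learned the policing pattern, not the behavior.
```

In this toy setup the model’s scores track detection rates rather than offending, which is the letter’s core objection: any “criminality” predictor trained on such labels measures policing patterns, not behavior.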

