Recently I’ve been interested in various questions relating to anti-racism, algorithmic bias, and policing.

What does anti-racist policing look like?

What do we mean by algorithmic bias and algorithmic fairness?

How can data science and machine learning practitioners ensure they are being anti-racist in their work?

Traditionally the purpose of policing has been to ensure the everyday safety of the general public. Often this has involved police forces responding to reports of suspected criminal activity. However, we may be entering a new age of policing. New technologies, including traditional data analysis as well as what might be called machine learning or AI, allow police forces to make predictions about suspected criminal activity that have not been possible until now.

We may be living through a period in which new technologies have advanced faster than the regulation needed to ensure they are used safely. I think of this as the ‘safety gap’ or the ‘accountability gap’.

Using a few recent examples, I hope to answer these questions about anti-racism, algorithmic bias, and policing, and to introduce some ways of thinking about safety and accountability.

In July, MIT Technology Review published an article titled “Predictive policing algorithms are racist. They need to be dismantled.”


This article tells the story of Yeshimabeit Milner, an activist who co-founded Data for Black Lives in 2017 to fight back against bias in the criminal justice system and to dismantle the so-called school-to-prison pipeline.

Milner’s focus is on predictive policing tools and abuse of data by police forces.

According to the article, there are two broad types of predictive policing algorithm.

Location-based algorithms, which use places, events, historical crime rates, and even weather conditions to create a crime ‘weather forecast’. PredPol, used by dozens of city police forces in the US, is one example.

Individual-based algorithms, which use data about people, such as their age, gender, and criminal record, to predict who has a high chance of being involved in future criminal activity.
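To see why training a location-based predictor on recorded crime data is risky, consider a toy simulation (this is not any real system; the district names, numbers, and update rule are invented for illustration). Incidents are only recorded where patrols are sent, so the algorithm's own predictions feed back into its training data:

```python
from collections import Counter

def predict_hotspot(recorded: Counter) -> str:
    """Pick the district with the most recorded incidents."""
    return recorded.most_common(1)[0][0]

# Two hypothetical districts with IDENTICAL underlying crime rates.
true_rate = {"A": 0.5, "B": 0.5}

# One extra historical record in district A (e.g. past over-policing).
recorded = Counter({"A": 2, "B": 1})

for _ in range(5):
    hotspot = predict_hotspot(recorded)
    # New incidents are recorded only in the patrolled district, so the
    # prediction shapes the very data the next prediction is based on.
    recorded[hotspot] += round(10 * true_rate[hotspot])

print(recorded)  # Counter({'A': 27, 'B': 1})
```

Even though both districts have the same true rate, a single extra historical record locks every subsequent patrol onto district A, and the gap in the data only widens. This feedback loop is one concrete mechanism by which historical bias in police records can be amplified by a 'neutral' algorithm.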


Anti-racism, algorithmic bias, and policing: a brief introduction