Most people have more trust in a computer algorithm than in other humans. Unfortunately, this is often true even when an expert makes well-founded statements that contradict the algorithm. Most people don’t know what the basis for a computer’s decision is. Many believe these decisions are impartial, but that is only partially true. Algorithms are developed by humans. Some may argue that with Machine Learning the bias of a human developer is eliminated. Let’s take a brief look at Machine Learning and Deep Neural Networks.

A good Machine Learning model needs good data. In supervised learning, you need a well-labeled set of training data. If your training set has a bias, the model will have a bias, too. What does that mean? Let’s look at some cases of biased training data and the consequences of their use.
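Before turning to real-world cases, here is a minimal sketch of the idea, using synthetic data and scikit-learn (the dataset, features, and numbers are made up purely for illustration): if the historical labels are biased against one group, a model trained on them reproduces that bias even for candidates with identical skills.

```python
# Minimal illustrative sketch: a model trained on biased labels reproduces that bias.
# Assumes numpy and scikit-learn are installed; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

skill = rng.normal(0, 1, n)        # the feature we actually care about
group = rng.integers(0, 2, n)      # a protected attribute, e.g. gender (0 or 1)

# Biased historical labels: group 1 was hired less often, regardless of skill.
hired = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different group membership:
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # the group-1 candidate gets a lower score
```

The point is not the specific numbers but the mechanism: the model never “decides” to discriminate, it simply learns the pattern that is already present in the labels.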

Would you get the job?

Wouldn’t it be great if human bias were eliminated from the hiring process? It wouldn’t matter whether someone comes from a foreign country; ethnic background and gender wouldn’t matter. Only skills would count. The perfect job for an algorithm, isn’t it? That’s what Amazon thought back in 2014, when it started using an algorithm to pre-filter candidates for open positions. In 2015, they realized that the algorithm preferred male candidates. Even when female candidates had better qualifications, they were not recommended by the system. The reason: the training set was biased! Trained on data from a male-dominated field, the model concluded that women were less suitable. In 2015, Amazon stopped using the algorithm.

I know what you’ll do and you’ll do it again!

An AI named COMPAS promises to predict the likelihood that an offender will reoffend. Many US states use COMPAS. Based on a set of about 137 questions, the algorithm calculates whether someone will commit a crime again, and people may face higher sentences because of this score. One consequence is that judges no longer look closely at the criminal record; they rely on the score COMPAS returns. This leads to a dangerous feedback loop, because COMPAS gets no feedback about wrong predictions. And the training set may be biased, since it is based on past cases, so the accuracy is questionable. ProPublica provides deeper insights, including a GitHub repository with a nice Jupyter Notebook.
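To make the feedback-loop argument concrete, here is a small hypothetical simulation (synthetic numbers only, not COMPAS’s data or methodology): once the score decides who is treated as high risk, the real outcomes of exactly those people are never observed, so the system’s false positives stay invisible and can never correct the model.

```python
# Hypothetical sketch of the feedback loop: when the score drives the decision,
# the true outcome of the people it flags is never observed, so wrong predictions
# generate no feedback. Synthetic data only; not COMPAS's actual model or inputs.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

true_risk = rng.random(n)                   # unknown ground-truth reoffense probability
score = true_risk + rng.normal(0, 0.3, n)   # an imperfect risk score

flagged = score > 0.7                       # treated as "high risk"
would_reoffend = rng.random(n) < true_risk  # what would actually happen

# Feedback exists only for the people who were NOT flagged:
observed_rate = would_reoffend[~flagged].mean()
# These errors are real but never enter the system's statistics:
hidden_false_positives = (~would_reoffend[flagged]).mean()

print(f"reoffense rate the system can observe: {observed_rate:.2f}")
print(f"share of flagged people who would not have reoffended: {hidden_false_positives:.2f}")
```

ProPublica’s analysis (see the notebook referenced above) works with the real data; this sketch only illustrates why a score that influences its own outcomes is so hard to validate.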

#responsibility #ai #ethics #xai
