Back in 2016, Microsoft came up with the exciting idea of introducing the general public to Artificial Intelligence.

Tay, the teenage chatbot, was launched into the Twittersphere to interact with the platform’s audience. Designed to come across as young and hip, she was programmed to use modern slang instead of formal English and to mimic those who interacted with her so she could learn the human ways.

But the experiment didn’t turn out the way Microsoft expected. Trolls on the social media site took advantage of Tay’s “repeat after me” function and turned her into one of the most bigoted profiles on the platform.
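Microsoft never published Tay’s internals, but the failure mode of an unfiltered “repeat after me” feature can be sketched in a few lines. The class and phrase-store below are hypothetical illustrations, not Tay’s actual implementation: the point is that a bot which learns verbatim from users, with no moderation layer, will echo whatever a coordinated group feeds it.

```python
import random

class NaiveMimicBot:
    """Toy chatbot that learns by parroting user input.

    Hypothetical sketch of the vulnerability: every phrase sent to
    the bot joins its vocabulary unfiltered, so malicious users can
    steer all of its future replies.
    """

    def __init__(self):
        self.phrases = []

    def hear(self, phrase: str) -> None:
        # No moderation or filtering: everything said to the bot is learned.
        self.phrases.append(phrase)

    def reply(self) -> str:
        # Replies are drawn verbatim from whatever the bot has heard.
        return random.choice(self.phrases) if self.phrases else "hi!"


bot = NaiveMimicBot()
bot.hear("hello friend")      # benign user
bot.hear("offensive slogan")  # troll input is learned just the same
print(bot.reply())            # may be either phrase, including the troll's
```

Any real deployment would need an input filter and a human-in-the-loop review step between `hear` and `reply`; Tay’s design evidently lacked an effective one.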

Within 16 hours of launch, Microsoft shelved the entire project. Tay was a massive PR disaster for Microsoft and a terrible ambassador for Artificial Intelligence.

But there was a silver lining in this absolute catastrophe: the chatbot exposed the dangers of human-induced AI bias.

How AI Inherited Human Bias

Information technology has reduced the cost of doing business, improved healthcare, and made communication more efficient. Artificial Intelligence promises to do that on a whole new level by automating repetitive tasks and freeing up humans to pursue more creative ventures.

“Unlike previous technologies, AI can make vital decisions,” explains Alex Reynolds of Namobot, a website that uses Big Data to create catchy business names. “It’s extremely important for algorithms to be completely free of bias and partiality.”

While sci-fi franchises such as The Terminator and Westworld depict murderous robots as ‘AI gone wrong’, the real threat is far more mundane than cyborg assassins in leather jackets.

There’s ample evidence that algorithms discriminate against women and ethnic minorities. In fact, they’re playing a pernicious role in perpetuating racial discrimination.

In January 2020, Robert Julian-Borchak Williams was handcuffed on his front lawn and taken into custody on a felony larceny charge. The Wayne County police accused him of stealing five timepieces worth $3,800, based on a match produced by a facial recognition algorithm.

But there was a slight problem. Robert did not commit the crime.

