The behavior of any AI developed today depends entirely on the data on which it trains. If that data is skewed, intentionally or not, toward one category over another, the AI will display the same bias. What is a better way forward for handling this risk of bias when the datasets involve human beings?
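To make the point concrete, here is a minimal, hypothetical sketch (the dataset and labels are invented for illustration): a "model" that simply optimizes for overall accuracy on a skewed dataset ends up misclassifying every member of the under-represented group.

```python
from collections import Counter

# Hypothetical toy dataset: each record is (group, outcome).
# Group "A" dominates the training data, so a model that chases
# overall accuracy can afford to ignore group "B" entirely.
train = [("A", "approve")] * 90 + [("B", "deny")] * 10

# A deliberately naive "model": always predict the single most
# common outcome seen in training, regardless of group.
majority_outcome = Counter(outcome for _, outcome in train).most_common(1)[0][0]

def predict(group):
    return majority_outcome

# The skew in the data becomes skew in the predictions: the model
# is 90% accurate overall, yet wrong for 100% of group "B".
errors_b = sum(1 for group, outcome in train
               if group == "B" and predict(group) != outcome)
print(majority_outcome)  # "approve"
print(errors_b)          # 10 errors, all borne by the minority group
```

The mechanism is the same one that plays out, far less visibly, inside large models: nothing in the training objective penalizes being wrong about the group the data under-represents.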
Back in 2016, Microsoft came up with the exciting idea of introducing the general public to Artificial Intelligence.
Tay, the teenage chatbot, was launched into the Twittersphere to interact with the platform’s audience. Being a young and hip chatbot, she was programmed to use modern slang instead of formal English. Tay was to mimic those who interacted with her so she could learn the human ways.
But this experiment didn’t turn out the way Microsoft expected it would. Trolls on the social media site took advantage of Tay’s “repeat after me” function and turned her into one of the most bigoted profiles on the forum.
Sixteen hours after launch, Microsoft shelved the entire project. Tay was a massive PR disaster for Microsoft and a terrible ambassador for Artificial Intelligence.
But if one is to find a silver lining in this absolute catastrophe, it is that the chatbot exposed the dangers of human-induced AI bias.
Information technology has reduced the cost of doing business, improved healthcare, and made communication more efficient. Artificial Intelligence promises to do that on a whole new level by automating repetitive tasks and freeing up humans to pursue more creative ventures.
“Unlike previous technologies, AI can make vital decisions,” explains Alex Reynolds of Namobot, a website that uses Big Data to create catchy business names. “It’s extremely important for algorithms to be completely free of bias and partiality.”
While sci-fi franchises such as The Terminator and Westworld depict murderous robots as ‘AI gone wrong’, the threat is much more real than cyborg assassins in leather jackets.
There’s enough evidence to suggest that algorithms are discriminating against women and ethnic minorities. In fact, they’re playing a vile role in promoting racial discrimination.
In January, Robert Julian-Borchak was handcuffed and taken into custody on his front lawn on a felony larceny charge. The Wayne County police accused him of stealing five timepieces worth $3,800, based on evidence provided by a facial recognition algorithm.
But there was a slight problem. Robert did not commit the crime.
In a perfect world, AI would be developed to avoid unethical outcomes, but that may be unrealistic, since those outcomes cannot always be predicted. In an automated society, human beings will bear more responsibility for supporting and protecting one another than they do today.
If we don’t treat ethics as part of the AI creation and development process, then even with Narrow AI we could unleash tremendous harm on society, sometimes without realising it.