The often underestimated piece to successful Artificial Intelligence

The first generation of AI has picked up on human biases. Among the many disturbing cases of biased AI systems producing discriminatory outcomes, the most heart-breaking involved the unfair lengthening of prison sentences, unfair credit card decisions, and unfair home appraisals. So, how does bias get into AI systems?

There are two broad reasons:

  1. Minimal consideration of human-centric design: Up until 2010, AI systems were notoriously difficult to build. Most of the focus of first-generation AI systems was on the engineering aspects - getting an AI proof of concept working and scaling it in production. (This is still a tough challenge to this day. Stay tuned for more information on scaling AI.) Solving engineering and data problems was a humongous task. AI developers and designers were happy to see an AI system predicting the next unhappy customer or the next customer likely to leave the brand. This added business value, and all was fine in AI land until we started using AI in selection scenarios such as "Who is most likely to pay back a loan?" or "Who is most likely to be a better homeowner?" Disaster ensued when AI started to pick up on all the biases that existed in the non-digital world and began discriminating based on gender, ethnicity, marital status, age, and a wide plethora of other characteristics.
  2. Non-existent structures to account for bias: There were no quantifiable measures to validate that an AI system was unbiased, and no such measures were mandated. The teams building AI rarely included ethics specialists, anthropologists, or social scientists. One of the largest computer vision projects, ImageNet, was originally labeled using Amazon Mechanical Turk, where the data labelers could be anyone on the planet. This introduced tons of disturbing biases, prejudices, and stereotypes into the dataset, which AI models eventually picked up and made obvious.
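To make "quantifiable measures" concrete, here is a minimal sketch in plain Python of one widely used fairness metric: the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The loan-approval numbers and group labels below are made-up illustrative values, not data from any real system.

```python
def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups.
    0.0 means parity; larger values indicate more disparate outcomes."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical loan-approval predictions (1 = approve, 0 = deny)
group_a = [1, 1, 1, 0, 1]   # 80% approved
group_b = [1, 0, 0, 0, 1]   # 40% approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")
```

A team could compute a metric like this on every model release and refuse to ship when the gap exceeds an agreed threshold; richer variants (equalized odds, calibration by group) are available in fairness toolkits.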

While this is by no means an excuse, it does point to the key problem: almost no focus was given to the moral, social, and responsible aspects of AI, often termed Ethical AI.

A 2019 Gartner study predicted that by 2022, 30% of companies would invest in explainable, ethical AI, up from almost none in 2019. Even 30% is a dangerously low number given the societal, cultural, psychological, and organizational ramifications biased AI can cause.

In summary, unregulated human actors and the absence of structures founded on ethics, morality, and fairness have been the sources of bias in AI. Addressing bias at the human level is a slow, continual process, so are we trapped with biased systems until we free human beings of their biases? The answer is no. I believe it is easier to address bias in AI than in human beings. Let me explain why...

The Silver Lining:

AI systems, through their bias-related mishaps, have indirectly served as a platform to bring to the forefront the systemic biases that have slipped through for generations. What was once perceived as a "theoretical accusation" with no evidence is now provable because of a faulty AI model. The data used to train such a model adds concrete evidence of a kind that was previously gathered only through years of anthropological research and vetting.

If anything, AI has created increased, evidence-backed awareness of areas of bias. A single AI mishap, such as Amazon's recruiting model favoring male résumés, created widespread awareness of an issue that has plagued women for generations. Companies are now addressing that problem faster than ever.

Awareness is the first step to solving a problem and the unfortunate AI accidents have provided the impetus and data points that were lacking before.
