The adoption of Artificial Intelligence is gaining momentum, but the fairness of the underlying algorithms is under heavy scrutiny from federal authorities. Despite organizations' many efforts to keep their AI services and solutions fair, pervasive and pre-existing biases in AI have become a growing challenge in recent years. Big tech companies such as Facebook, Google, Amazon and Twitter, among others, have drawn the ire of federal agencies in recent months.

In the wake of the death of George Floyd and the #blacklivesmatter movement, organizations have become more vigilant about how their AI operates. With federal, national and international agencies repeatedly calling out discriminatory algorithms, tech start-ups and established organizations alike are struggling to make their AI solutions fair.

But how can organizations avoid deploying discriminatory algorithms? What solutions will thwart such biases? The legal and statistical standards articulated by federal agencies go a long way toward curbing algorithmic bias. For example, the legal standards established in laws such as the Equal Credit Opportunity Act, the Civil Rights Act and the Fair Housing Act reduce the possibility of such biases.

Moreover, the effectiveness of these standards depends on the nature of the algorithmic discrimination involved. Organizations currently face two types of discrimination, intentional and unintentional, known respectively as Disparate Treatment and Disparate Impact.

Disparate Treatment is intentional employment discrimination and carries the highest legal penalties. Organizations must avoid engaging in such discrimination when adopting AI. Analyzing the record of employment decisions and behavior can help organizations detect and avoid disparate treatment.
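
The article does not prescribe a specific technical safeguard here, but one minimal check in a machine-learning pipeline is to ensure that protected attributes are never passed to a model as input features, since using them directly is the clearest route to disparate treatment. The sketch below is an illustrative assumption, not an established compliance method; the attribute and column names are hypothetical.

```python
# Minimal sketch (assumption): guard against the direct use of protected
# attributes as model features, one proxy for disparate treatment in ML.
# The attribute and column names below are hypothetical.

PROTECTED_ATTRIBUTES = {"race", "gender", "age", "national_origin"}

def check_features(feature_columns):
    """Raise an error if any protected attribute appears among model features."""
    used = PROTECTED_ATTRIBUTES.intersection(feature_columns)
    if used:
        raise ValueError(
            f"Protected attributes used directly as features: {sorted(used)}"
        )
    return feature_columns

# Example usage with hypothetical feature columns:
features = ["credit_score", "income", "years_employed"]
check_features(features)                      # passes
# check_features(features + ["gender"])       # would raise ValueError
```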

Disparate Impact, the unintentional form of discrimination, occurs when policies, practices, rules or other systems that appear neutral result in a disproportionate impact on a protected group. For example, a screening test that unintentionally and disproportionately eliminates minority applicants constitutes Disparate Impact.
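
One widely cited statistical precedent for detecting disparate impact is the four-fifths (80%) rule from EEOC guidelines: a selection rate for a protected group that falls below 80% of the rate for the most favored group is treated as evidence of adverse impact. The sketch below is a minimal illustration with made-up numbers, not a compliance tool.

```python
# Minimal sketch: the four-fifths (80%) rule, a statistical screen for
# disparate impact drawn from EEOC guidelines. The numbers are made up.

def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the protected group's selection rate (group a)
    to the reference group's selection rate (group b)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical screening outcomes: group A (protected), group B (reference).
ratio = disparate_impact_ratio(selected_a=30, total_a=100,
                               selected_b=60, total_b=100)

if ratio < 0.8:
    print(f"Potential disparate impact: ratio = {ratio:.2f} (< 0.80)")
else:
    print(f"No disparate impact flagged: ratio = {ratio:.2f}")
```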

