Despite being a mathematician’s dream word, algorithms — or sets of instructions that humans or, most commonly, computers execute — have cemented themselves as an integral part of our daily lives.

They are working behind the scenes when we search the web, read the news, discover new music or books, apply for health insurance, and search for a date. To put it simply, algorithms are a way to automate routine or information-heavy tasks.

However, some “routine” tasks have serious implications, such as determining credit scores, cultural or technical “fit” for a job, or the perceived level of criminal risk. While these algorithms are largely designed with society’s benefit in mind, they are mathematical or logical models meant to reflect reality, and reality is often more nuanced than any model can capture.

For instance, some students aren’t eligible for loans because a lending model deems them too risky by virtue of their zip codes, which can trap them in a self-perpetuating cycle of limited education and poverty.
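Here is a minimal, entirely synthetic sketch of the mechanism at work, often called proxy discrimination: even though the model below never sees the protected attribute, a correlated feature such as zip code reproduces the same disparate outcome. Every name and number is invented for illustration.

```python
# Hypothetical sketch of proxy discrimination; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)                        # protected attribute (withheld from the model)
zip_code = np.where(rng.random(n) < 0.9,             # residential segregation: zip code matches
                    group, 1 - group).astype(float)  # group membership 90% of the time
income = rng.normal(50, 10, n) + 10 * group          # historical income inequality between groups
repaid = (income + rng.normal(0, 5, n) > 55).astype(int)  # repayment tracks income, not group

# The lender's model uses only zip code, never group or individual income.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), repaid)
pred = model.predict(zip_code.reshape(-1, 1))

# Creditworthy applicants in the disadvantaged group are still mostly denied.
for g in (0, 1):
    creditworthy = (repaid == 1) & (group == g)
    print(f"group {g}: creditworthy applicants denied = {1 - pred[creditworthy].mean():.1%}")
```

In this toy setup the model learns to read zip code as a stand-in for group membership, so most creditworthy applicants from the disadvantaged group are denied anyway.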

Algorithms can be incredibly helpful for society by improving human services, reducing errors, and identifying potential threats. However, algorithms are built by humans and thus reflect their creators’ imperfections and biases.

To ensure algorithms help society and do not discriminate, disparage, or perpetuate hate, we as a society need to be more transparent and accountable in how our algorithms are designed and developed. Considering the importance of algorithms in our daily lives, here are a few examples of biased algorithms and how we can improve algorithmic accountability.

How computers learn biases

Much has been written on how humans’ cognitive biases influence everyday decisions. We use biases to reduce mental burden, often without conscious awareness. For instance, we tend to think that the likelihood of an event is proportional to the ease with which we can recall an example of it happening. So if someone decides to continue smoking because they know a smoker who lived to be 100, despite significant evidence demonstrating the harms of smoking, that person is relying on what is called the availability bias.

Humans have trained computers to take over routine tasks for decades. Initially, these were very simple tasks, such as performing calculations over large sets of numbers. As the computer and data science fields have expanded exponentially, computers are being asked to take on more nuanced problems through new tools (e.g., machine learning). Over time, researchers have found that algorithms often replicate and even amplify the prejudices of those who create them.

Since algorithms require humans to define exhaustive, step-by-step instructions, the developers’ inherent perspectives and assumptions can unintentionally build in bias. In addition to bias in development, algorithms can be biased if they are trained on incomplete or unrepresentative data. Common facial recognition training datasets, for example, are 75% male and 80% white, which leads the resulting models to exhibit both skin-type and gender biases, with higher error rates and misclassification, particularly for darker-skinned women.
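To make the training-data point concrete, here is a rough sketch on synthetic data (not the facial recognition datasets cited above): a classifier is trained on a sample in which one group supplies only 20% of the examples, and its error rate on that group ends up noticeably higher.

```python
# Synthetic illustration of unrepresentative training data: group B supplies
# only 20% of the training examples, so the model fits group A's pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, boundary):
    """Two features; the true decision boundary differs by group."""
    X = rng.normal(0.0, 1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, n) > boundary).astype(int)
    return X, y

# Unbalanced training set: 80% group A, 20% group B.
Xa, ya = make_group(8000, boundary=0.0)
Xb, yb = make_group(2000, boundary=1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate each group separately: the under-represented group fares worse.
for name, boundary in [("A", 0.0), ("B", 1.0)]:
    Xt, yt = make_group(5000, boundary)
    print(f"group {name}: error rate = {1 - model.score(Xt, yt):.1%}")
```

Rebalancing the training mix shrinks the gap, which is the intuition behind auditing training data for representativeness before deployment.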

At the individual level, a biased algorithm can significantly harm a person’s life (e.g., increasing prison time based on race). Applied across an entire population, these inequalities are magnified and have lasting effects on particular groups. Here are a few examples.

