# Death By Bias: How Algorithms Systemize Discriminatory Practices

Google advertisements and Netflix recommendations have brought widespread awareness of algorithms and how they learn from our behaviours. You may have experienced eerily relevant advertisements popping up in your feed after you googled an item (or even just talked about it). As users of data-collecting applications, our identifying information and behaviours (i.e., our data) serve as the input that algorithms, which are simply sets of instructions, use to perform tasks like generating personalized recommendations or targeted advertisements.

Algorithms are silently automating previously manual decision-making processes, allowing tasks to be scaled and optimized. However, when algorithms are built upon biased data, the byproduct of discriminatory decision-making, they inherit that bias. In benign applications like content recommendations, the impact of a biased algorithm is trivial*: perhaps _Fuller House_ would be recommended when you actually wanted to watch _Beyoncé: The Formation World Tour_ for the tenth time. In more pervasive applications, however, the impact can be fatal. This blog post underscores the importance of data literacy in the Black community and highlights how unregulated, biased algorithms can amplify existing discrimination and disparity.

*Triviality is subjective; it is used here to distinguish applications that can cause bodily harm from those that cannot.
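
To make that mechanism concrete, here is a deliberately simplified Python sketch of how a model inherits bias from its training data. Everything in it (the loan scenario, the dataset, and the decision rule) is invented for illustration; real systems are far more complex, but the principle is the same: a model fits the patterns in its data, discriminatory or not.

```python
# A minimal, hypothetical sketch of how a model inherits bias from its
# training data. The dataset, features, and "model" below are invented
# for illustration only.

from collections import defaultdict

# Historical loan decisions. Suppose past (human) reviewers approved
# equally qualified applicants at different rates by neighbourhood --
# a stand-in for any feature that correlates with race.
historical_data = [
    # (neighbourhood, qualified, approved)
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

# "Training": learn the empirical approval rate for qualified applicants
# in each group. This mimics what any statistical model does -- it fits
# whatever patterns exist in the data, discriminatory or not.
approvals = defaultdict(list)
for neighbourhood, qualified, approved in historical_data:
    if qualified:
        approvals[neighbourhood].append(approved)

def predict(neighbourhood):
    """Approve if the group's historical approval rate is at least 50%."""
    rates = approvals[neighbourhood]
    return sum(rates) / len(rates) >= 0.5

# Two equally qualified applicants get different outcomes, because the
# model faithfully reproduces the bias baked into its training labels.
print(predict("A"))  # True  -- approved
print(predict("B"))  # False -- denied, despite identical qualifications
```

No one wrote a discriminatory rule here; the disparity came entirely from the historical labels the model was fit to.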

#data #machine-learning #artificial-intelligence #bias #data-science
