As artificial intelligence (AI) is increasingly transforming our world, a new paper suggests a way to re-examine the society we’re already living in now to chart a better way forward. “Computer systems embody values,” explained paper co-author Shakir Mohamed, “And to build a deeper understanding of values and power is why we turn to the critical theory and especially decolonial theories.”

The paper defines “decolonisation” as “the intellectual, political, economic and societal work concerned with the restoration of land and life following the end of historical colonial periods.” It seeks to root out the vestiges of this thinking that are still with us today, noting that “territorial appropriation, exploitation of the natural environment and of human labor, and direct control of social structures are the characteristics of historical colonialism.”

Mohamed is a research scientist in statistical machine learning and AI at DeepMind, an AI research company. He teamed up with DeepMind senior research scientist William Isaac, and with Marie-Therese Png, a Ph.D. candidate studying algorithmic coloniality at the Oxford Internet Institute. Together they’ve produced a 28-page paper exploring a role for two kinds of theories — both post-colonial and decolonial — “in understanding and shaping the ongoing advances in artificial intelligence.”

The paper includes a warning that AI systems “pose significant risks, especially to already vulnerable peoples.” But in the end, it also attempts to provide some workable solutions.

Critical Perspectives

“Weapons of Math Destruction” book cover (via Wikipedia)

The researchers’ paper cites Cathy O’Neil’s 2016 book “Weapons of Math Destruction,” which argues that “big data increases inequality and threatens democracy” in high-stakes areas including policing, lending, and insurance.

For algorithmic (or automated) oppression in action, the paper points to “predictive” surveillance systems that “risk entrenching historical injustice and amplify[ing] social biases in the data used to develop them,” as well as algorithmic “decision systems” used in the U.S. criminal justice system “despite significant evidence of shortcomings, such as the linking of criminal datasets to patterns of discriminatory policing.”
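That warning about data feedback is easier to see with a toy model. The following Python sketch is purely illustrative, using made-up numbers rather than any real policing system: it shows how a predictive tool trained on arrest records that were themselves shaped by past patrol decisions can carry a historical skew forward indefinitely.

```python
# Illustrative sketch only, with synthetic numbers (not any real deployment):
# a toy model of how allocating patrols in proportion to past recorded
# arrests can lock in a historical skew, even when two districts have the
# same true crime rate.

true_crime_rate = [0.5, 0.5]   # identical underlying rates in both districts
recorded = [60.0, 40.0]        # biased starting data: district 0 was patrolled more

for year in range(10):
    total = sum(recorded)
    # Send 100 patrols per year, split in proportion to recorded arrests so far.
    patrols = [100 * r / total for r in recorded]
    # New arrests get recorded in proportion to where patrols were sent,
    # not in proportion to where crime actually happens.
    for d in range(2):
        recorded[d] += patrols[d] * true_crime_rate[d]

share = recorded[0] / sum(recorded)
print(f"District 0's share of recorded arrests after 10 years: {share:.0%}")
# Prints 60%: the original skew never washes out, because the data the
# system learns from is itself a product of where it chose to look.
```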

Commenting on the work, VentureBeat suggests the authors “incorporate a sentiment expressed in an open letter Black members of the AI and computing community released last month during Black Lives Matter protests, which asks AI practitioners to recognize the ways their creations may support racism and systemic oppression in areas like housing, education, health care, and employment.”

But though it’s a very timely paper, that’s mostly a coincidence, says Isaac. He told me the paper had its roots in a blog post Shakir Mohamed wrote almost two years ago outlining some of the initial ideas, influenced by work in related areas like data colonialism. Then last year, co-author Marie-Therese Png helped organize a panel at Oxford’s Intercultural Digital Ethics Symposium, which led to the paper.

In the paper, the researchers provide a stunning example of a widely used algorithmic screening tool for a “high-risk care management” healthcare program which, a 2019 study found, “relied on the predictive utility of an individual’s health expenses.” The end result? Black patients were rejected for the healthcare program more often than white patients, “exacerbating structural inequities in the US healthcare system.”
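The mechanism behind that result is worth spelling out: when spending stands in for health need, any group that incurs lower costs for the same level of need gets scored as lower risk. Here is a minimal Python sketch of that dynamic, using entirely synthetic data and a hypothetical 30% cost gap rather than anything from the actual system.

```python
# Illustrative sketch only, with synthetic data (not the actual screening tool):
# if spending is used as a proxy for health need, a group that incurs lower
# costs for the same level of need is systematically under-selected.
import random

random.seed(0)

def make_patient(group):
    need = random.uniform(0, 10)   # true underlying health need
    # Hypothetical assumption: for equal need, group B generates ~30% lower
    # costs (e.g. because of unequal access to care), so cost is a biased proxy.
    cost_factor = 1.0 if group == "A" else 0.7
    cost = need * cost_factor + random.uniform(0, 1)
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient("A") for _ in range(500)] + \
           [make_patient("B") for _ in range(500)]

def share_of_group_b_enrolled(score_key):
    # Enroll the 200 highest-scoring patients into the care program.
    top = sorted(patients, key=lambda p: p[score_key], reverse=True)[:200]
    return sum(p["group"] == "B" for p in top) / len(top)

print("Group B share when ranking by cost:", share_of_group_b_enrolled("cost"))
print("Group B share when ranking by need:", share_of_group_b_enrolled("need"))
# Ranking by the cost proxy enrolls far fewer group B patients than ranking
# by need itself, even though need is identically distributed in both groups.
```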

The paper also looks at how algorithm-using industries and institutional actors “take advantage of (often already marginalized) people by unfair or unethical means,” including the “ghost workers” who label training data, a phenomenon which involves populations along what one researcher called “the old fault lines of colonialism.” And the paper provides examples of what it calls “clearly exploitative situations, where organizations use countries outside of their own as testing grounds — specifically because they lack pre-existing safeguards and regulations around data and its use, or because the mode of testing would violate laws in their home countries.”

They cite the example of Cambridge Analytica, which according to Nanjala Nyabola’s “Digital Democracy, Analogue Politics” beta-tested algorithms for influencing voters during elections in Kenya and Nigeria in part because those countries had weak data protection laws.

