Last year at the Conference on Neural Information Processing Systems (NeurIPS), one of the most respected computer science conferences in the world, the opening panel discussion on A.I. for social good didn’t go quite as attendees might have expected.

“I’m not usually in spaces like this, and I’m not entirely convinced that I haven’t surreptitiously walked into a terrorist den,” began Sarah T. Hamid, a community organizer based in Los Angeles and one of the core members of the Carceral Tech Resistance Network. As Hamid explained, “like terrorists, technologists in spaces like this have a concept of what social good is. … Everybody thinks that they’re doing good things in the world … that’s the scary thing. I don’t know how it is that we have a disconnect between kids in cages and the work that’s happening in spaces like this.”

There was a smattering of applause as others in the room shifted uncomfortably in their seats. Most of the morning had been devoted to presentations on how computer science might help tackle some of the world’s toughest problems, from online content curation to support for the UN’s Sustainable Development Goals. Throughout those talks, the line between problem and solution had been clear and uncomplicated. Now Hamid was pushing the audience to think more deeply about the ways their efforts might contribute to the very problems they claimed to be solving with technology.

The moderator asked Hamid how she thought computer scientists should handle competing definitions of social good, citing the various mathematical formalisms that have been developed to grapple with ideas like “fairness” in A.I. Again, Hamid gave an answer few in the room would have anticipated.
