When AI systems are launched and when they break, especially when they fail in loud and embarrassing ways, experts in AI ethics appear in the press. Computer science professors share their updated curricula and favorite books critical of relying on algorithms. We hear about the societal effects of AI, brought about by the willful ignorance of ‘techies’ or ‘tech bros’. So I started wondering: what keeps AI coders so distant from the ethics field?

‘State of the Art’ at all costs

There’s a Hacker News comment that I’ve kept bookmarked since January, which I consider the peak of this pushback against AI ethics:

I am worried about the recent trend of “ethical AI”, “interpretable models” etc. IMO it attracts people that can’t come with SOTA [State of the Art] advances in real problems and its their “easier, vague target” to hit and finish their PhDs while getting published in top journals.

Those same people will likely at some point call for a strict regulation of AI using their underwhelming models to keep their advantage,

faking results of their interpretable models, then acting as arbiters and judges of the work of others, preventing future advancements of the field.

https://news.ycombinator.com/item?id=21959105

Let’s not stray too far into an ‘ethicists as true villains’ theory. I want to unpack the underlying thinking here: that the ethics field doesn’t follow the rules and currency of the AI field. Commercial AI projects generate so much hype that researchers’ conversations revolve around metrics. If someone promotes a new approach but can’t point to a metric proving a ‘State of the Art’ achievement, their results are worthless to the commenter. The ethicist is cast in a familiar role: someone riding the hype, or not technical enough.


Why Don’t AI Coders Study AI Ethics?