This past week, IBM announced it would no longer sell or research facial recognition technology. Soon after, Amazon announced it would stop selling facial recognition technology for a year, and Microsoft announced it would stop selling the tech to law enforcement. This is huge. For a while now, facial recognition has been growing more widespread; many of us even use Face ID to unlock our phones. For a widespread, rapidly growing piece of technology to be stopped dead in its tracks is remarkable.

It’s happening because the technology is incredibly biased, both against women and against people with darker skin. However, that doesn’t explain why it’s happening now. As you might notice, the articles I linked are about a year old. It hasn’t been a secret that facial recognition is biased. What has changed, though, is public sentiment and understanding.

As someone who spends a lot of time on social media and has a lot of social-justice-minded friends, I’ve seen plenty of posts about what to do and what not to do at a protest. One of the big things being circulated is to not show your face (which is easier now that we’re all wearing face masks) and to be extremely careful about what you post on social media. This all stems from the understanding that authorities will use any images from protests to try to identify protesters well after the fact.

Along with the increased use of facial recognition has come an increased public understanding of what it does and what it can do. That understanding has led a much wider group of people to question its use and its ethics. When I presented my project about the #MeToo movement in STEM, I touched on biased technology (as a reason why we need more diverse teams behind the technology). In my own experience, most people were only vaguely aware that these kinds of biases were baked into the technology itself. Now, I think almost everyone knows that facial recognition technology is biased and, in the wrong hands, dangerous.

Another reason it’s clear that increased public scrutiny led to these reforms is that the reforms are not universal.

On June 11th, OpenAI announced it would be releasing an API through which people could access AI models it has developed. Which is great, except research has already shown that its natural language generation is sexist and racist. Of course, what else would you expect from a model trained on Reddit data?

So why is the accelerator being pressed on one piece of biased tech while the brakes are pumped on another? Simply put, everyone knows about facial recognition now. With a wider, more knowledgeable public, that technology came under far more scrutiny for its bias, and companies responded accordingly. Far fewer people know how natural language generation can be biased, or what those biases might look like. So hardly anyone sees anything wrong with OpenAI releasing the model for public use.
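To make that second kind of bias concrete, here is a minimal, purely illustrative sketch. It trains a tiny "language model" that just estimates P(profession | pronoun) from a toy, deliberately skewed corpus; the sentences and numbers are made up for illustration and are not real Reddit data or OpenAI's actual model. Because the training text pairs "he" with "doctor" more often than "she", the model faithfully reproduces that skew.

```python
from collections import Counter

# Hypothetical toy corpus standing in for biased web text.
sentences = [
    "he is a doctor", "he is a doctor", "he is an engineer",
    "she is a nurse", "she is a nurse", "she is a teacher",
    "he is a nurse", "she is a doctor",
]

# Count which profession (last word) co-occurs with which pronoun (first word).
counts = {"he": Counter(), "she": Counter()}
for s in sentences:
    words = s.split()
    pronoun, profession = words[0], words[-1]
    counts[pronoun][profession] += 1

def p(profession, pronoun):
    """Estimate P(profession | pronoun) from the corpus counts."""
    c = counts[pronoun]
    return c[profession] / sum(c.values())

print(p("doctor", "he"))   # 0.5  (2 of 4 "he" sentences)
print(p("doctor", "she"))  # 0.25 (1 of 4 "she" sentences)
```

The model hasn't been told anything about gender; it has simply memorized the statistics of its training data, which is exactly how large language models end up reflecting the biases of the text they were trained on, just at a much larger scale.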

#ai #technology #facial-recognition #data-science #privacy #data-analysis

What Happened with Facial Recognition?