I recently came across an excellent article from Signal introducing a new feature that lets users automatically blur faces, which is incredibly useful at a time when protesters and demonstrators need to communicate while protecting their identities.

In the article, Signal also hinted at the technologies they're using, which are strictly platform-level libraries. For iOS, my guess is that they used Vision, an Apple API for performing a variety of image- and video-processing tasks.

In this article, I'll use Apple's native library to build an iOS application that pixelates the faces in any given image.
Here's a look at the final result:
I have included code in this article where it’s most instructive. Full code and data can be found on my GitHub page. Let’s get started.
Apple has been active in providing iOS developers with powerful APIs centered on computer vision and other AI disciplines (e.g. NLP). They have continuously improved these APIs to represent a broader spectrum of use cases, **from gender differences to racial diversity**. I remember the first version of the Face Detection API being very bad at detecting darker-skinned faces. It has since improved, but no system is perfect so far, and detection is still not 100% accurate.
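To make the pipeline concrete before we dive in, here's a minimal sketch of the core idea: run a `VNDetectFaceRectanglesRequest` over the image, then composite a `CIPixellate`-filtered crop over each detected face. The helper name `pixelateFaces` and the scale heuristic are my own illustrative choices, not Signal's code or the exact implementation we'll build below (image orientation handling is omitted for brevity):

```swift
import UIKit
import Vision
import CoreImage

// Hypothetical helper: detects faces in a UIImage and returns a copy
// with each detected face region pixelated via Core Image's CIPixellate.
func pixelateFaces(in image: UIImage, completion: @escaping (UIImage?) -> Void) {
    guard let cgImage = image.cgImage else {
        completion(nil)
        return
    }

    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNFaceObservation],
              !observations.isEmpty else {
            completion(image) // no faces found: return the original image
            return
        }

        var ciImage = CIImage(cgImage: cgImage)
        let imageSize = ciImage.extent.size

        for face in observations {
            // Vision returns normalized coordinates with a lower-left origin;
            // convert to pixel coordinates. Core Image shares that origin,
            // so no extra flip is needed here.
            let faceRect = VNImageRectForNormalizedRect(face.boundingBox,
                                                        Int(imageSize.width),
                                                        Int(imageSize.height))

            // Pixelate the whole image, crop to the face region,
            // then composite that crop back over the original.
            let pixellate = CIFilter(name: "CIPixellate")!
            pixellate.setValue(ciImage, forKey: kCIInputImageKey)
            // Illustrative heuristic: block size proportional to face size.
            pixellate.setValue(max(faceRect.width, faceRect.height) / 10,
                               forKey: kCIInputScaleKey)

            if let pixelated = pixellate.outputImage?.cropped(to: faceRect) {
                ciImage = pixelated.composited(over: ciImage)
            }
        }

        let context = CIContext()
        guard let output = context.createCGImage(ciImage, from: ciImage.extent) else {
            completion(nil)
            return
        }
        completion(UIImage(cgImage: output))
    }

    // Vision requests are synchronous, so run them off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }
}
```

Note that the completion handler fires on the background queue, so hop back to the main queue before assigning the result to any UI element.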