In this tutorial, you'll learn how to port a simple face detection utility from Python to Go: a tool that detects faces in a picture.
For the design part, I describe the layered architecture of the tool. On the technical side, I use ONNX as the model format, onnx-go to decode it, and Gorgonia to execute it.
Note: Some of the terms such as domain, application, and infrastructure refer to concepts from Domain-Driven Design (DDD) or the hexagonal architecture. For example, do not think of the infrastructure as boxes and wires; see it as a service layer. The infrastructure represents everything that exists independently of the application.
Disclaimer: I am using those concepts to illustrate what I do; this is not a proper DDD design nor an authentic hexagonal architecture.
Those layers can represent the architecture of the tool:
The basic principle is that every layer is a “closed area”: it is only accessible through its API, and every layer is testable independently. The following paragraphs describe each layer.
The “actor” here is a simple CLI tool. It is the main package of the application (in Go, this is the package main). In the rest of the article, I refer to it as “the actor”.
Implementing the business logic with a neural network
The core functionality of the tool is to detect faces on a picture. I am using a neural network to achieve this. The model I have chosen is Tiny YOLO v2, which can perform real-time object detection.
This model is designed to be small but powerful. It attains the same top-1 and top-5 accuracy as AlexNet but with a tenth of the parameters. It mostly uses convolutional layers, without the large fully connected layers at the end, and is about twice as fast as AlexNet on CPU, making it well suited for some vision applications.
I am using the “tiny” version, which is based on the Darknet reference network and is much faster but less accurate than the regular YOLO model.
The model is just an “envelope.” It needs training to be able to detect objects. The objects it can detect depend on its knowledge, and the weight tensors represent that knowledge. To detect faces, we need to apply the model to the picture with knowledge (weights) able to recognize faces.
The model is the envelope; it can detect many objects. The knowledge that makes it able to detect faces is in the weights.
Luckily, an engineer named Azmath Moosa has trained the model and released a tool called azface. The project is available on GitHub under the LGPLv3, but it does not contain the sources of the tool (only a Windows binary and some DLLs are present). However, I am not interested in the tool itself, as I am building my own. What I am seeking are the weights, and the weights are present in the repository as well.
Disclaimer: the tool we are building is for academic purposes. I am not competing with Azmath’s tool in any way.
First, we clone the repository to have the weights locally:
$ git clone https://github.com/azmathmoosa/azFace
The weights are a heavy 61 MB file: weights/tiny-yolo-azface-fddb_82000.weights.
Now, we need to combine the knowledge and the model. Together, they constitute the core functionality of our domain.
The business logic should be as independent as possible of any framework. The best representation of the neural network is the one closest to its definition. The original implementation of the YOLO model (from “darknet”) is in C; there are reimplementations in TensorFlow, Keras, Java, and more.
I am using ONNX as the format for the business logic; it is an Intermediate Representation and, as a consequence, independent of any framework.
To create the ONNX file, I am using Keras with the following tools:

- [yad2k](https://github.com/allanzelener/yad2k.git) to create a Keras model from YOLO;
- [keras2onnx](https://pypi.org/project/keras2onnx/) to encode it into ONNX.

The workflow is:

                           yad2k                keras2onnx
darknet config + weights --------> keras model ------------> onnx model
This script creates a Keras model from the config and the weights of azface:
./yad2k.py \
../azFace/net_cfg/tiny-yolo-azface-fddb.cfg \
../azFace/weights/tiny-yolo-azface-fddb_82000.weights \
../FACES/keras/yolo.h5
It generates a pre-trained h5 version of the tiny YOLO v2 model, able to find faces.
Then, analyzing the resulting model with this code snippet gives the following result:
from keras.models import load_model
keras_model = load_model('../FACES/keras/yolo.h5')
keras_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 416, 416, 3)       0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 416, 416, 16)      432
_________________________________________________________________
...
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 13, 13, 30)        30750
=================================================================
Total params: 15,770,510
Trainable params: 15,764,398
Non-trainable params: 6,112
_________________________________________________________________
The resulting model looks ok.
To generate the ONNX representation of the model, I use keras2onnx:
import onnx
import keras2onnx
from keras.models import load_model

keras_model = load_model('../FACES/keras/yolo.h5')
onnx_model = keras2onnx.convert_keras(keras_model, name=None, doc_string='', target_opset=None, channel_first_inputs=None)
onnx.save(onnx_model, '../FACES/yolo.onnx')
It is interesting to visualize the result of the conversion. I am using the tool netron, which has a web version.
Here is an extract of the picture it generates:
I made a copy of the full representation here if you want to see how the model looks.
To validate our future infrastructure, I need a simple test. I apply the model to a zero-filled input and save the result; I will do the same once the final infrastructure is up and compare the results.
from keras.models import load_model
import numpy as np

keras_model = load_model('../FACES/keras/yolo.h5')
# Run the model on a zero-filled input and save the reference output
output = keras_model.predict(np.zeros((1, 416, 416, 3)))
np.save("../FACES/keras/output.npy", output)
Now, let’s move to the infrastructure and application part.
Infrastructure: Entering the Go world
No surprises here: the infrastructure I am using is made of [onnx-go](https://github.com/owulveryck/onnx-go) to decode the ONNX file, and Gorgonia to execute the model. This is an efficient solution for a tool: at runtime, it does not need any of the dependencies used to build the network (no more Python, TensorFlow, Conda, etc.). It gives the end-user of the tool a much better experience.
We’ve seen that the model represents the neural network. The SPI must provide an implementation of the model to fulfill the contract and understand the ONNX Intermediate Representation (IR). onnx-go’s [Model](https://godoc.org/github.com/owulveryck/onnx-go#Model) object is a Go structure that acts as a receiver of the neural network.
The other service required is a computation engine that understands and executes the model. Gorgonia assumes this function.
The actor uses those services. A basic implementation in Go is (note the package is main):
package main

import (
	"io/ioutil"

	"github.com/owulveryck/onnx-go"
	"github.com/owulveryck/onnx-go/backend/x/gorgonnx"
)

func main() {
	// Read the ONNX file (error handling omitted for brevity)
	b, _ := ioutil.ReadFile("../FACES/yolo.onnx")
	// Create the Gorgonia backend and decode the model into it
	backend := gorgonnx.NewGraph()
	model := onnx.NewModel(backend)
	model.UnmarshalBinary(b)
}
To use the model, we need to interact with its inputs and outputs. The model takes a tensor as input; to set it, the onnx-go library provides a helper function called [SetInput](https://godoc.org/github.com/owulveryck/onnx-go#Model.SetInput). For the output, a call to [GetOutputTensors()](https://godoc.org/github.com/owulveryck/onnx-go#Model.GetOutputTensors) extracts the resulting tensors.
t := tensor.New(
tensor.WithShape(1, 416, 416, 3),
tensor.Of(tensor.Float32))
model.SetInput(0, t)
The actor could use those methods directly but, as the goal of the application is to analyze pictures, the application encapsulates them. It provides a better user experience for the actor (the actor will probably not want to mess with tensors).
We can now test the infrastructure to see if the implementation is ok. We set an empty tensor, compute it with Gorgonia, and compare the result with the one saved previously:
I wrote a small test file in Go; for clarity, I am not copying it here, but you can find it in this gist.
# go test
PASS
ok tmp/graph 1.054s
Note: The ExprGraph used by Gorgonia can also be represented visually with Graphviz. This code generates the dot representation:

exprGraph, _ := backend.GetExprGraph()
b, _ := dot.Marshal(exprGraph)
fmt.Println(string(b))
(the full graph is here)
The infrastructure works and implements the SPI! Let’s move to the application part.
Writing the application in Go
Let’s start with the interface of the application. I create a package gofaces to hold the application logic. It is a layer that adds some facilities to communicate with the outside world, and it can be instantiated by anything from a simple CLI to a web service.
Its first function takes an image as input. The image is passed to the function as a stream of bytes (io.Reader). This leaves the end-user free to use a regular file, to read the content from stdin, or to build a web service and receive the file via HTTP. The function returns a tensor usable with the model; it also returns an error if it cannot process the file.
Note: the full signature of the GetTensorFromImage function can be found on GoDoc.
If we switch back to the actor implementation, we can now set an input picture with this code (error checking skipped for clarity):
func main() {
	b, _ := ioutil.ReadFile("../FACES/yolo.onnx")
	// Instantiate the infrastructure
	backend := gorgonnx.NewGraph()
	model := onnx.NewModel(backend)
	// Load the business logic (the neural net)
	model.UnmarshalBinary(b)
	// Access the I/O through the API (img is an io.Reader on the input picture)
	inputT, _ := gofaces.GetTensorFromImage(img)
	model.SetInput(0, inputT)
}
To run the model, we call [backend.Run()](https://godoc.org/github.com/owulveryck/onnx-go/backend#ComputationBackend); Gorgonia fulfills the ComputationBackend interface.
The model outputs a tensor. This tensor holds all the information required to extract bounding boxes, and extracting them is the responsibility of the application. Therefore, the gofaces package defines a [Box](https://godoc.org/github.com/owulveryck/gofaces#Box) structure; a box contains a set of [Elements](https://godoc.org/github.com/owulveryck/gofaces#Element).
The application’s goal is to analyze the picture and provide the bounding boxes that contain faces. What the actor needs are the resulting bounding boxes, and the application provides them via a call to the [ProcessOutput](https://godoc.org/github.com/owulveryck/gofaces#ProcessOutput) function.
Note: on top of this function, I include a [Sanitize](https://godoc.org/github.com/owulveryck/gofaces#Sanitize) function to clean the results (it could live in a separate package, as it is part of the post-processing).
Final result
You can find the code of the application in my [gofaces](https://github.com/owulveryck/gofaces) repository. The repository is composed of:

- the gofaces package, which is at the root level (see the godoc here);
- the cmd subdirectory, which holds a sample implementation to analyze a picture from the command line.

I am using a famous meme as input.
cd $GOPATH/src/github.com/owulveryck/gofaces/cmd
go run main.go \
-img /tmp/meme.jpg \
-model ../model/model.onnx
gives the following result:
[At (187,85)-(251,147) (confidence 0.20):
- face - 1
]
It has detected only one face. It is possible to play with the confidence threshold to detect more faces, but I have found that it cannot detect the lover's face, probably because the picture does not show her full face.
It is not the responsibility of the gofaces package to generate a picture; its goal is to detect faces only. Therefore, I have included another package in the repository, [draw](https://godoc.org/github.com/owulveryck/gofaces/draw). This package contains a single exported function that generates a Go image.Image with a transparent background and draws the rectangles of the boxes on it.
I tweaked the primary tool to add an -output flag (in the main package). It writes a PNG file that you can combine with the original picture in post-processing.
Here is an example of post-processing with ImageMagick:
YOLO_CONFIDENCE_THRESHOLD=0.1 go run main.go \
-img /tmp/meme.jpg \
-output /tmp/mask2.png \
-model ../model/model.onnx
convert \
/tmp/meme.jpg \
/tmp/mask2.png \
\( -resize 418x \) \
-compose over -composite /tmp/result2.png
Conclusion
Alongside this article, we made a tool by writing three testable packages (gofaces, draw and, obviously, main).
The self-contained Go binary makes it the right choice for playing with face detection on personal computers. On top of that, it is easy for a developer to adapt the tool by tweaking only the main package. They can use face detection to write the funniest or fanciest tool; the sky is the limit.
Thanks to the ONNX Intermediate Representation (IR), it is now possible to use machine learning to describe part of the business logic of a tool. Third-party implementations of the ONNX format allow writing efficient applications with different frameworks or runtime environments.
What I like most about this idea is that we get a separation of concerns for building a modular and testable tool. Each part can have its own lifecycle as long as it still fulfills the interfaces.
What is face recognition? Or what is recognition? When you look at an apple, your mind immediately tells you that this is an apple. That process, your mind telling you that it is an apple, is recognition in simple words. So what is face recognition then? I am sure you have guessed it right. When you look at your friend walking down the street, or at a picture of him, you recognize that he is your friend Paulo. Interestingly, when you look at your friend or a picture of him, you look at his face first, before looking at anything else. Ever wondered why you do that? This is so that you can recognize him by looking at his face. Well, this is you doing face recognition.
But the real question is: how does face recognition work? It is quite simple and intuitive. Take a real-life example: when you meet someone for the first time, you don't recognize him, right? While he talks or shakes hands with you, you look at his face, eyes, nose, mouth, color and overall look. This is your mind learning, or training, for the face recognition of that person by gathering face data. Then he tells you that his name is Paulo. At this point, your mind knows that the face data it just learned belongs to Paulo. Now your mind is trained and ready to do face recognition on Paulo's face. The next time you see Paulo or his face in a picture, you will immediately recognize him. This is how face recognition works. The more you meet Paulo, the more data your mind will collect about him, especially his face, and the better you will become at recognizing him.
Now the next question is: how do we code face recognition with OpenCV? After all, this is the only reason why you are reading this article, right? OK then. You might say that our mind can do these things easily, but actually coding them into a computer is difficult? Don't worry, it is not. Thanks to OpenCV, coding face recognition is easier than it seems. The coding steps for face recognition are the same as the real-life example we discussed above.
OpenCV comes equipped with built-in face recognizers; all you have to do is feed them the face data. It's that simple, and this is how it will look once we are done coding it.
OpenCV has three built-in face recognizers, and thanks to OpenCV's clean coding, you can use any of them by changing just a single line of code. Below are the names of those face recognizers and their OpenCV calls.
cv2.face.createEigenFaceRecognizer()
cv2.face.createFisherFaceRecognizer()
cv2.face.createLBPHFaceRecognizer()
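Note: in recent OpenCV releases (3.3 and later, with the opencv-contrib-python package) these factory functions were renamed. If the calls above raise an AttributeError, the following equivalents should work:

#equivalent constructors in newer OpenCV versions (requires opencv-contrib-python)
import cv2

eigen_recognizer = cv2.face.EigenFaceRecognizer_create()
fisher_recognizer = cv2.face.FisherFaceRecognizer_create()
lbph_recognizer = cv2.face.LBPHFaceRecognizer_create()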
We have got three face recognizers, but do you know which one to use and when? Or which one is better? I guess not. So why not go through a brief summary of each, what do you say? I am assuming you said yes :) So let's dive into the theory of each.
This algorithm considers the fact that not all parts of a face are equally important or equally useful. When you look at someone, you recognize him or her by distinct features like the eyes, nose, cheeks and forehead, and how they vary with respect to each other. So you are actually focusing on the areas of maximum change (mathematically speaking, this change is variance) of the face. For example, from the eyes to the nose there is a significant change, and the same is the case from the nose to the mouth. When you look at multiple faces, you compare them by looking at these parts, because they are the most useful and important components of a face; important because they catch the maximum change among faces, the change that helps you differentiate one face from another. This is exactly how the EigenFaces face recognizer works.
The EigenFaces face recognizer looks at the training images of all the persons as a whole and tries to extract the components which are important and useful (the components that catch the maximum variance/change), discarding the rest. This way it not only extracts the important components from the training data but also saves memory by discarding the less important ones. These extracted components are called principal components. Below is an image showing the principal components extracted from a list of faces.
Principal Components source
You can see that the principal components actually represent faces; these faces are called eigenfaces, hence the name of the algorithm.
So this is how the EigenFaces recognizer trains itself (by extracting principal components). Remember, it also keeps a record of which principal component belongs to which person. One thing to note in the above image is that the EigenFaces algorithm also considers illumination an important component.
Later, during recognition, when you feed a new image to the algorithm, it repeats the same process on that image: it extracts the principal components from the new image, compares them with the components it stored during training, finds the best match, and returns the person label associated with that best match.
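To make the idea concrete, here is a minimal numpy sketch of the EigenFaces principle (this illustrates the math, not OpenCV's actual implementation; the array shapes and the random stand-in data are assumptions):

import numpy as np

#stand-in training data: N flattened grayscale face images of identical size
faces = np.random.rand(20, 100 * 100).astype(np.float32)

#center the data around the mean face
mean_face = faces.mean(axis=0)
centered = faces - mean_face

#the top right-singular vectors of the centered data are the eigenfaces
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                  #keep the 10 strongest components

#projecting a face onto the eigenfaces gives the weights used for matching
weights = centered @ eigenfaces.T     #shape (20, 10)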
Easy peasy, right? Next one is easier than this one.
This algorithm is an improved version of the EigenFaces face recognizer. EigenFaces looks at the training faces of all persons at once and finds principal components from all of them combined. By capturing principal components from all of them combined, you are not focusing on the features that discriminate one person from another, but on the features that represent all the persons in the training data as a whole.
This approach has drawbacks. For example, images with sharp changes (like light changes, which are not a useful feature at all) may dominate the rest of the images, and you may end up with features that come from an external source like light and are not useful for discrimination at all. In the end, your principal components will represent light changes and not the actual face features.
The FisherFaces algorithm, instead of extracting features that represent all the faces of all persons, extracts features that discriminate one person from the others. This way, the features of one person do not dominate over the others, and you have the features that discriminate one person from the others.
Below is an image of features extracted using Fisherfaces algorithm.
Fisher Faces source
You can see that the extracted features actually represent faces; these faces are called fisherfaces, hence the name of the algorithm.
One thing to note here is that even with the FisherFaces algorithm, if multiple persons have images with sharp changes due to external sources like light, those changes will dominate over other features and affect recognition accuracy.
Getting bored with this theory? Don't worry, only one face recognizer is left and then we will dive deep into the coding part.
I wrote a detailed explanation of Local Binary Patterns Histograms in my previous article on face detection using local binary patterns histograms, so here I will just give a brief overview of how it works.
We know that EigenFaces and FisherFaces are both affected by light, and in real life we can't guarantee perfect light conditions. The LBPH face recognizer is an improvement that overcomes this drawback.
The idea is not to look at the image as a whole but to find its local features. The LBPH algorithm tries to find the local structure of an image by comparing each pixel with its neighboring pixels.
Take a 3x3 window and move it over the image; at each move (each local part of the image), compare the pixel at the center with its neighbor pixels. The neighbors with an intensity value less than or equal to the center pixel are denoted by 1, the others by 0. Then you read these 0/1 values under the 3x3 window in clockwise order, and you get a binary pattern like 11100011 that is local to a particular area of the image. Doing this over the whole image gives you a list of local binary patterns.
LBP Labeling
Now you see why this algorithm has Local Binary Patterns in its name: you get a list of local binary patterns. Now you may be wondering, what about the Histograms part of LBPH? After you get a list of local binary patterns, you convert each binary pattern into a decimal number (as shown in the above image) and then you make a histogram of all of those values. A sample histogram looks like this.
Sample Histogram
I guess this answers the question about the Histograms part. So, in the end, you will have one histogram for each face image in the training data set. That means if there were 100 images in the training data set, then LBPH will extract 100 histograms after training and store them for later recognition. Remember, the algorithm also keeps track of which histogram belongs to which person.
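As an illustration, here is a small numpy sketch of the LBP-plus-histogram computation described above. It is a didactic version following the text's convention of marking neighbors less than or equal to the center with 1; it is not OpenCV's optimized implementation:

import numpy as np

def lbp_code(window):
    #window is a 3x3 grayscale patch; compare neighbors to the center pixel
    center = window[1, 1]
    #neighbors read in clockwise order starting from the top-left corner
    neighbors = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                 window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    bits = ['1' if n <= center else '0' for n in neighbors]
    #the binary pattern converted to a decimal value
    return int(''.join(bits), 2)

def lbp_histogram(img):
    #slide the 3x3 window over the whole image and histogram the codes
    codes = [lbp_code(img[i-1:i+2, j-1:j+2])
             for i in range(1, img.shape[0] - 1)
             for j in range(1, img.shape[1] - 1)]
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist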
Later, during recognition, when you feed a new image to the recognizer, it generates a histogram for that image, compares it with the histograms it already has, finds the best match, and returns the person label associated with that best-match histogram.
Below is a list of faces and their respective local binary pattern images. You can see that the LBP images are not affected by changes in light conditions.
LBP Faces source
The theory part is over and now comes the coding part! Ready to dive into coding? Let's get into it then.
Coding Face Recognition with OpenCV
The face recognition process in this tutorial is divided into three steps:

1. Prepare the training data: read the training images for each person along with their labels, and detect a face in each image.
2. Train the face recognizer by feeding it the prepared data.
3. Prediction: feed it test images of the trained persons and see if it recognizes them.
To detect faces, I will use the code from my previous article on face detection. So if you have not read it, I encourage you to do so to understand how face detection works and its Python coding.
Before starting the actual coding we need to import the required modules for coding. So let's import them first.
#import OpenCV module
import cv2
#import os module for reading training data directories and paths
import os
#import numpy to convert python lists to numpy arrays as
#it is needed by OpenCV face recognizers
import numpy as np
#matplotlib for displaying our images
import matplotlib.pyplot as plt
%matplotlib inline
The more images used in training, the better. Normally many images are used for training a face recognizer so that it can learn different looks of the same person: with glasses, without glasses, laughing, sad, happy, crying, with beard, without beard, etc. To keep our tutorial simple, we are going to use only 12 images for each person.
So our training data consists of 2 persons with 12 images each. All training data is inside the training-data folder, which contains one folder for each person, named with the format sLabel (e.g. s1, s2), where Label is the integer label assigned to that person. For example, the folder named s1 contains the images of person 1. The directory structure tree for the training data is as follows:
training-data
|-------------- s1
| |-- 1.jpg
| |-- ...
| |-- 12.jpg
|-------------- s2
| |-- 1.jpg
| |-- ...
| |-- 12.jpg
The test-data folder contains the images that we will use to test our face recognizer after it has been successfully trained.
OpenCV face recognizers accept labels as integers, so we need to define a mapping between integer labels and persons' actual names. Below, I define this mapping.
Note: as we have not assigned label 0 to any person, the mapping for label 0 is empty.
#there is no label 0 in our training data so subject name for index/label 0 is empty
subjects = ["", "Tom Cruise", "Shahrukh Khan"]
You may be wondering why we need a data preparation step, right? Well, OpenCV face recognizers accept data in a specific format: two vectors, one with the faces of all the persons and a second with the integer label for each face, so that when processing a face, the recognizer knows which person that particular face belongs to.
For example, if we had 2 persons and 2 images for each person.
PERSON-1    PERSON-2

img1        img1
img2        img2

Then the prepare data step will produce the following face and label vectors.

FACES                LABELS

person1_img1_face    1
person1_img2_face    1
person2_img1_face    2
person2_img2_face    2
The data preparation step can be further divided into the following sub-steps:

1. Read all the subject folder names in the training data folder, e.g. s1, s2.
2. For each subject, extract the label number. Folder names follow the format sLabel, where Label is the integer label we have assigned to that subject; so, for example, the folder name s1 means the subject has label 1, s2 means label 2, and so on. The label extracted here is assigned to every face detected in the next step.
3. Read all the images of the subject and detect a face in each image.
4. Add each detected face to the faces vector and its label to the labels vector.
As mentioned above, to detect faces I am going to reuse the code from my previous article on face detection; if you have not read it, I encourage you to do so to understand how face detection works. Below is the same code.
#function to detect face using OpenCV
def detect_face(img):
    #convert the test image to gray image as opencv face detector expects gray images
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    #load OpenCV face detector, I am using LBP which is fast
    #there is also a more accurate but slow Haar classifier
    face_cascade = cv2.CascadeClassifier('opencv-files/lbpcascade_frontalface.xml')

    #let's detect multiscale (some images may be closer to camera than others) images
    #result is a list of face rectangles
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)

    #if no faces are detected then return original img
    if (len(faces) == 0):
        return None, None

    #under the assumption that there will be only one face,
    #extract the face area
    (x, y, w, h) = faces[0]

    #return only the face part of the image
    return gray[y:y+h, x:x+w], faces[0]
I am using OpenCV's LBP face detector. First, I convert the image to grayscale because most operations in OpenCV are performed on grayscale images; then I load the LBP face detector using the cv2.CascadeClassifier class. After that, I use the detectMultiScale method to detect all the faces in the image (multiscale, because some faces may be closer to the camera than others). From the detected faces I pick only the first one, under the assumption that each image contains a single prominent face. As detectMultiScale returns rectangles (x, y, width, height) and not actual face images, I extract the face area from the grayscale image and return both the face region and the face rectangle.
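To sanity-check the detector before wiring it into the data preparation, a quick call like this can help (the test image path is hypothetical):

#quick check of detect_face on a single image (path is an example)
img = cv2.imread("test-data/test1.jpg")
face, rect = detect_face(img)
print(rect)   #(x, y, w, h) of the detected face, or None if nothing was found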
Now you have got a face detector and you know the 4 steps to prepare the data, so are you ready to code the prepare data step? Yes? So let's do it.
#this function will read all persons' training images, detect face from each image
#and will return two lists of exactly same size, one list
#of faces and another list of labels for each face
def prepare_training_data(data_folder_path):

    #------STEP-1--------
    #get the directories (one directory for each subject) in data folder
    dirs = os.listdir(data_folder_path)

    #list to hold all subject faces
    faces = []
    #list to hold labels for all subjects
    labels = []

    #let's go through each directory and read images within it
    for dir_name in dirs:

        #our subject directories start with letter 's' so
        #ignore any non-relevant directories if any
        if not dir_name.startswith("s"):
            continue

        #------STEP-2--------
        #extract label number of subject from dir_name
        #format of dir name = sLabel
        #so removing letter 's' from dir_name will give us label
        label = int(dir_name.replace("s", ""))

        #build path of directory containing images for current subject
        #sample subject_dir_path = "training-data/s1"
        subject_dir_path = data_folder_path + "/" + dir_name

        #get the images names that are inside the given subject directory
        subject_images_names = os.listdir(subject_dir_path)

        #------STEP-3--------
        #go through each image name, read image,
        #detect face and add face to list of faces
        for image_name in subject_images_names:

            #ignore system files like .DS_Store
            if image_name.startswith("."):
                continue

            #build image path
            #sample image path = training-data/s1/1.jpg
            image_path = subject_dir_path + "/" + image_name

            #read image
            image = cv2.imread(image_path)

            #display an image window to show the image
            cv2.imshow("Training on image...", image)
            cv2.waitKey(100)

            #detect face
            face, rect = detect_face(image)

            #------STEP-4--------
            #for the purpose of this tutorial
            #we will ignore faces that are not detected
            if face is not None:
                #add face to list of faces
                faces.append(face)
                #add label for this face
                labels.append(label)

    cv2.destroyAllWindows()
    cv2.waitKey(1)
    cv2.destroyAllWindows()

    return faces, labels
I have defined a function that takes as a parameter the path where the training subjects' folders are stored. It follows the same four data preparation sub-steps described above.
(step 1) It uses the os.listdir method to read the names of all folders stored in the given path, and defines the faces and labels lists.
(step 2) It then traverses the subject folder names and extracts the label from each one. As folder names follow the sLabel naming convention, removing the letter s from a folder name gives us the label assigned to that subject.
(step 3) It reads all the image names of the current subject and traverses them one by one, using OpenCV's imshow(window_title, image) along with waitKey(interval) to display each image as it is processed; waitKey pauses the code flow for the given interval in milliseconds, and I use 100 ms so that we can briefly view each image window. It then detects the face in the current image.
(step 4) Finally, it adds each detected face and its label to the respective lists.
But a function can't do anything unless we call it on some data that it has to prepare, right? Don't worry, I have got data of two beautiful and famous celebrities. I am sure you will recognize them!
Let's call this function on images of these beautiful celebrities to prepare data for training of our Face Recognizer. Below is a simple code to do that.
#let's first prepare our training data
#data will be in two lists of same size
#one list will contain all the faces
#and other list will contain respective labels for each face
print("Preparing data...")
faces, labels = prepare_training_data("training-data")
print("Data prepared")
#print total faces and labels
print("Total faces: ", len(faces))
print("Total labels: ", len(labels))
Preparing data...
Data prepared
Total faces: 23
Total labels: 23
This was probably the boring part, right? Don't worry, the fun stuff is coming up next. It's time to train our own face recognizer so that, once trained, it can recognize new faces of the persons it was trained on. Ready? OK, then let's train our face recognizer.
As we know, OpenCV comes equipped with three face recognizers.
cv2.face.createEigenFaceRecognizer()
cv2.face.createFisherFaceRecognizer()
cv2.face.createLBPHFaceRecognizer()
I am going to use the LBPH face recognizer, but you can use any face recognizer of your choice. No matter which of OpenCV's face recognizers you use, the code will remain the same. You just have to change one line: the face recognizer initialization line given below.
#create our LBPH face recognizer
face_recognizer = cv2.face.createLBPHFaceRecognizer()
#or use EigenFaceRecognizer by replacing above line with
#face_recognizer = cv2.face.createEigenFaceRecognizer()
#or use FisherFaceRecognizer by replacing above line with
#face_recognizer = cv2.face.createFisherFaceRecognizer()
Now that we have initialized our face recognizer and we also have prepared our training data, it's time to train the face recognizer. We will do that by calling the train(faces-vector, labels-vector)
method of face recognizer.
#train our face recognizer of our training faces
face_recognizer.train(faces, np.array(labels))
Did you notice that instead of passing the labels vector directly to the face recognizer, I first converted it to a numpy array? This is because OpenCV expects the labels vector to be a numpy array.
Still not satisfied? Want to see some action? Next step is the real action, I promise!
Now comes my favorite part: prediction. This is where we actually get to see whether our algorithm recognizes our trained subjects' faces. We will take two test images of our celebrities, detect the faces in each of them, and then pass those faces to our trained face recognizer to see if it recognizes them.
Below are some utility functions that we will use for drawing a bounding box (rectangle) around a face and putting the celebrity's name near the face bounding box.
#function to draw rectangle on image
#according to given (x, y) coordinates and
#given width and height
def draw_rectangle(img, rect):
    (x, y, w, h) = rect
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)

#function to draw text on given image starting from
#passed (x, y) coordinates.
def draw_text(img, text, x, y):
    cv2.putText(img, text, (x, y), cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 255, 0), 2)
The first function, draw_rectangle, draws a rectangle on the image based on the passed rectangle coordinates. It uses OpenCV's built-in function cv2.rectangle(img, topLeftPoint, bottomRightPoint, rgbColor, lineWidth). We will use it to draw a rectangle around the face detected in a test image.
The second function, draw_text, uses OpenCV's built-in function cv2.putText(img, text, startPoint, font, fontSize, rgbColor, lineWidth) to draw text on the image.
Now that we have the drawing functions, we just need to call the face recognizer's predict(face)
method to test our face recognizer on test images. Following function does the prediction for us.
#this function recognizes the person in image passed
#and draws a rectangle around detected face with name of the
#subject
def predict(test_img):
    #make a copy of the image as we don't want to change the original image
    img = test_img.copy()
    #detect face from the image
    face, rect = detect_face(img)

    #predict the image using our face recognizer;
    #predict() returns the label and the confidence of the match
    label, confidence = face_recognizer.predict(face)
    #get name of respective label returned by face recognizer
    label_text = subjects[label]

    #draw a rectangle around face detected
    draw_rectangle(img, rect)
    #draw name of predicted person
    draw_text(img, label_text, rect[0], rect[1]-5)

    return img
The predict(face) method returns the predicted label (and, in recent OpenCV versions, a confidence value for the match). Now that we have the prediction function well defined, the next step is to call it on our test images and display those images to see if our face recognizer recognized them correctly. So let's do it. This is what we have been waiting for.
print("Predicting images...")
#load test images
test_img1 = cv2.imread("test-data/test1.jpg")
test_img2 = cv2.imread("test-data/test2.jpg")
#perform a prediction
predicted_img1 = predict(test_img1)
predicted_img2 = predict(test_img2)
print("Prediction complete")
#create a figure of 2 plots (one for each test image)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
#display test image1 result
ax1.imshow(cv2.cvtColor(predicted_img1, cv2.COLOR_BGR2RGB))
#display test image2 result
ax2.imshow(cv2.cvtColor(predicted_img2, cv2.COLOR_BGR2RGB))
#display both images
cv2.imshow("Tom cruise test", predicted_img1)
cv2.imshow("Shahrukh Khan test", predicted_img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.waitKey(1)
cv2.destroyAllWindows()
Predicting images...
Prediction complete
Woohoo! Isn't it beautiful? Indeed, it is!
Face recognition is a fascinating idea to work on, and OpenCV has made it extremely simple and easy for us to code. It takes just a few lines of code to have a fully working face recognition application, and we can switch between all three face recognizers with a single line of code change. It's that simple.
Although the EigenFaces, FisherFaces and LBPH face recognizers are good, there are even better ways to perform face recognition, like using Histograms of Oriented Gradients (HOG) and neural networks. The more advanced face recognition algorithms are nowadays implemented using a combination of OpenCV and machine learning. I have plans to write some articles on those more advanced methods as well, so stay tuned!
Download Details:
Author: informramiz
Source Code: https://github.com/informramiz/opencv-face-recognition-python
License: MIT License
Deepface is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework for Python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace, Dlib and SFace.
Experiments show that human beings have 97.53% accuracy on facial recognition tasks whereas those models already reached and passed that accuracy level.
The easiest way to install deepface is to download it from PyPI. It's going to install the library itself and its prerequisites as well.
$ pip install deepface
DeepFace is also available on Conda. You can alternatively install the package via conda.
$ conda install -c conda-forge deepface
Then you will be able to import the library and use its functionalities.
from deepface import DeepFace
Facial Recognition - Demo
A modern face recognition pipeline consists of 5 common stages: detect, align, normalize, represent and verify. While Deepface handles all these common stages in the background, you don’t need to acquire in-depth knowledge about all the processes behind it. You can just call its verification, find or analysis function with a single line of code.
Face Verification - Demo
This function verifies face pairs as the same person or different persons. It expects exact image paths as inputs; passing numpy or base64 encoded images is also welcome. Then, it returns a dictionary, and you should check just its verified key.
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")
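For instance, a minimal usage sketch (the verified key is part of the documented output; the surrounding prints are just for illustration):

from deepface import DeepFace

result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")
#the "verified" key tells whether the two faces belong to the same person
if result["verified"]:
    print("same person")
else:
    print("different persons")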
Face recognition - Demo
Face recognition requires applying face verification many times. Herein, deepface has an out-of-the-box find function to handle this action. It looks for the identity of the input image in the database path and returns a pandas data frame as output.
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db")
Embeddings
Face recognition models basically represent facial images as multi-dimensional vectors. Sometimes, you need those embedding vectors directly. DeepFace comes with a dedicated representation function.
embedding = DeepFace.represent(img_path = "img.jpg")
This function returns an array as output. The size of the output array differs based on the model name. For instance, VGG-Face is the default model for deepface, and it represents facial images as 2622-dimensional vectors.
assert isinstance(embedding, list)
#for the default VGG-Face model, the embedding has 2622 dimensions
assert len(embedding) == 2622
Here, the embedding is also plotted with 2622 slots horizontally. Each slot corresponds to a dimension value in the embedding vector, and the dimension value is explained in the colorbar on the right. Similar to 2D barcodes, the vertical dimension stores no information in the illustration.
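Such a barcode-style illustration can be reproduced with a few lines of matplotlib; this is just a plotting sketch (the colormap choice is arbitrary), reusing the embedding obtained above:

import numpy as np
import matplotlib.pyplot as plt

#render the 2622-dimensional embedding as a one-row, barcode-like strip
strip = np.expand_dims(np.array(embedding), axis=0)
plt.imshow(strip, aspect="auto", cmap="viridis")
plt.colorbar()   #maps colors back to dimension values
plt.show()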
Face recognition models - Demo
Deepface is a hybrid face recognition package. It currently wraps many state-of-the-art face recognition models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace, Dlib and SFace. The default configuration uses the VGG-Face model.
models = [
"VGG-Face",
"Facenet",
"Facenet512",
"OpenFace",
"DeepFace",
"DeepID",
"ArcFace",
"Dlib",
"SFace",
]
#face verification
result = DeepFace.verify(img1_path = "img1.jpg",
img2_path = "img2.jpg",
model_name = models[1]
)
#face recognition
df = DeepFace.find(img_path = "img1.jpg",
db_path = "C:/workspace/my_db",
model_name = models[1]
)
#embeddings
embedding = DeepFace.represent(img_path = "img.jpg",
model_name = models[1]
)
FaceNet, VGG-Face, ArcFace and Dlib are the top performers based on experiments. Below are the scores of those models on both the Labeled Faces in the Wild and YouTube Faces in the Wild data sets, as declared by their creators.
| Model | LFW Score | YTF Score |
| --- | --- | --- |
| Facenet512 | 99.65% | - |
| SFace | 99.60% | - |
| ArcFace | 99.41% | - |
| Dlib | 99.38% | - |
| Facenet | 99.20% | - |
| VGG-Face | 98.78% | 97.40% |
| Human-beings | 97.53% | - |
| OpenFace | 93.80% | - |
| DeepID | - | 97.05% |
Similarity
Face recognition models are regular convolutional neural networks, responsible for representing faces as vectors. We expect a face pair of the same person to be more similar than a face pair of different persons.
Similarity can be calculated with different metrics, such as cosine similarity, Euclidean distance, and Euclidean distance on L2-normalized vectors. The default configuration uses cosine similarity.
metrics = ["cosine", "euclidean", "euclidean_l2"]
#face verification
result = DeepFace.verify(img1_path = "img1.jpg",
img2_path = "img2.jpg",
distance_metric = metrics[1]
)
#face recognition
df = DeepFace.find(img_path = "img1.jpg",
db_path = "C:/workspace/my_db",
distance_metric = metrics[1]
)
The Euclidean L2 form seems to be more stable than cosine and regular Euclidean distance, based on experiments.
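For reference, here is how these three metrics can be computed with plain numpy (a sketch of the definitions, not deepface's internal code):

import numpy as np

def cosine_distance(a, b):
    #1 minus the cosine similarity of the two embedding vectors
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def euclidean(a, b):
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def euclidean_l2(a, b):
    #Euclidean distance after L2-normalizing both vectors
    a = np.asarray(a) / np.linalg.norm(a)
    b = np.asarray(b) / np.linalg.norm(b)
    return np.linalg.norm(a - b)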
Facial Attribute Analysis - Demo
Deepface also comes with a strong facial attribute analysis module including age, gender, facial expression (including angry, fear, neutral, sad, disgust, happy and surprise) and race (including asian, white, middle eastern, indian, latino and black) predictions.
obj = DeepFace.analyze(img_path = "img4.jpg",
actions = ['age', 'gender', 'race', 'emotion']
)
The age model got ±4.65 MAE; the gender model got 97.44% accuracy, 96.29% precision and 95.05% recall, as mentioned in its tutorial.
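A quick sketch of reading the analysis result; the exact key names (age, dominant_race, dominant_emotion) vary between deepface versions, so treat them as assumptions and check the output of your installed release:

from deepface import DeepFace

obj = DeepFace.analyze(img_path = "img4.jpg",
    actions = ['age', 'gender', 'race', 'emotion']
)
#key names are version-dependent; these match the commonly documented output
print(obj["age"], obj["dominant_race"], obj["dominant_emotion"])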
Face Detectors - Demo
Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that alignment alone increases face recognition accuracy by almost 1%. The OpenCV, SSD, Dlib, MTCNN, RetinaFace and MediaPipe detectors are wrapped in deepface.
All deepface functions accept an optional detector backend input argument. You can switch among those detectors with this argument. OpenCV is the default detector.
backends = [
'opencv',
'ssd',
'dlib',
'mtcnn',
'retinaface',
'mediapipe'
]
#face verification
obj = DeepFace.verify(img1_path = "img1.jpg",
img2_path = "img2.jpg",
detector_backend = backends[4]
)
#face recognition
df = DeepFace.find(img_path = "img.jpg",
db_path = "my_db",
detector_backend = backends[4]
)
#embeddings
embedding = DeepFace.represent(img_path = "img.jpg",
detector_backend = backends[4]
)
#facial analysis
demography = DeepFace.analyze(img_path = "img4.jpg",
detector_backend = backends[4]
)
#face detection and alignment
face = DeepFace.detectFace(img_path = "img.jpg",
target_size = (224, 224),
detector_backend = backends[4]
)
Face recognition models are actually CNN models, and they expect standard-sized inputs, so resizing is required before representation. To avoid deformation, deepface adds black padding pixels according to the target size argument after detection and alignment.
RetinaFace and MTCNN seem to overperform in the detection and alignment stages, but they are much slower. If the speed of your pipeline is more important, you should use opencv or ssd; if accuracy matters more, you should use retinaface or mtcnn.
The performance of RetinaFace is very satisfactory even in crowds, as seen in the following illustration. Besides, it comes with incredible facial landmark detection performance: the highlighted red points show facial landmarks such as the eyes, nose and mouth. That's why the alignment score of RetinaFace is high as well.
You can find out more about RetinaFace on this repo.
Real Time Analysis - Demo
You can run deepface on real-time video as well. The stream function will access your webcam and apply both face recognition and facial attribute analysis. It starts to analyze a frame once it can focus on a face for 5 sequential frames, and then shows the results for 5 seconds.
DeepFace.stream(db_path = "C:/User/Sefik/Desktop/database")
Even though face recognition is based on one-shot learning, you can use multiple face pictures of a person as well. You should rearrange your directory structure as illustrated below.
user
├── database
│ ├── Alice
│ │ ├── Alice1.jpg
│ │ ├── Alice2.jpg
│ ├── Bob
│ │ ├── Bob.jpg
API - Demo
Deepface serves an API as well. You can clone /api/api.py and run it with the python command. This will get a REST service up; this way, you can call deepface from an external system, such as a mobile app or the web.
python api.py
Face recognition, facial attribute analysis and vector representation functions are covered by the API. You are expected to call these functions as HTTP POST methods. The service endpoints are http://127.0.0.1:5000/verify for face recognition, http://127.0.0.1:5000/analyze for facial attribute analysis, and http://127.0.0.1:5000/represent for vector representation. You should pass input images as base64 encoded strings in this case. Here, you can find a postman project.
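A minimal sketch of such a call with the requests library; the JSON field names here (img1_path, img2_path) mirror the function arguments and are an assumption, so check the postman project for the exact schema:

import base64
import requests

def to_base64(path):
    #the API expects images as base64 encoded strings
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

#field names below are assumed; verify them against the postman project
resp = requests.post("http://127.0.0.1:5000/verify", json = {
    "img1_path": to_base64("img1.jpg"),
    "img2_path": to_base64("img2.jpg"),
})
print(resp.json())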
Command Line Interface
DeepFace comes with a command line interface as well. You can access its functions from the command line as shown below; the deepface command expects the function name as its first argument and the function's arguments thereafter.
#face verification
$ deepface verify -img1_path tests/dataset/img1.jpg -img2_path tests/dataset/img2.jpg
#facial analysis
$ deepface analyze -img_path tests/dataset/img1.jpg
Face recognition models represent facial images as vector embeddings. The idea behind facial recognition is that vectors should be more similar for the same person than for different persons. The question is where and how to store facial embeddings in a large-scale system. The tech stack for storing vector embeddings is vast; to determine the right tool, you should consider your task (face verification or face recognition), your priority (speed or confidence), and your data size.
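As a toy illustration of the trade-offs, here is the simplest possible embedding store: a brute-force in-memory nearest-neighbor lookup with numpy. The names and vector sizes are made up; a real large-scale system would use a vector database or an approximate-nearest-neighbor index instead:

import numpy as np

#toy in-memory store: identity name -> embedding vector (sizes are illustrative)
db = {
    "alice": np.random.rand(2622),
    "bob": np.random.rand(2622),
}

def closest_identity(query):
    #brute-force scan; fine for small databases, too slow at large scale
    return min(db, key=lambda name: np.linalg.norm(db[name] - query))

print(closest_identity(np.random.rand(2622)))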
Pull requests are welcome! You should run the unit tests locally by running test/unit_tests.py. Once a PR is sent, the GitHub test workflow will run automatically and the unit test results will be available in GitHub Actions before approval.
There are many ways to support a project - starring⭐️ the GitHub repo is just one 🙏
You can also support this work on Patreon
Please cite deepface in your publications if it helps your research. Here are its BibTeX entries.
If you use deepface for facial recognition purposes, please cite this publication:
@inproceedings{serengil2020lightface,
title = {LightFace: A Hybrid Deep Face Recognition Framework},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
booktitle = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
pages = {23-27},
year = {2020},
doi = {10.1109/ASYU50717.2020.9259802},
url = {https://doi.org/10.1109/ASYU50717.2020.9259802},
organization = {IEEE}
}
If you use deepface for facial attribute analysis purposes, such as age, gender, emotion or ethnicity prediction, please cite this publication:
@inproceedings{serengil2021lightface,
title = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
booktitle = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
pages = {1-4},
year = {2021},
doi = {10.1109/ICEET53442.2021.9659697},
url = {https://doi.org/10.1109/ICEET53442.2021.9659697},
organization = {IEEE}
}
Also, if you use deepface in your GitHub projects, please add deepface to the requirements.txt.
Author: Serengil
Source Code: https://github.com/serengil/deepface
License: MIT license
1648217849
Deepface is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework for python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face
, Google FaceNet
, OpenFace
, Facebook DeepFace
, DeepID
, ArcFace
and Dlib
.
Experiments show that human beings have 97.53% accuracy on facial recognition tasks whereas those models already reached and passed that accuracy level.
The easiest way to install deepface is to download it from PyPI
. It's going to install the library itself and its prerequisites as well. The library is mainly powered by TensorFlow and Keras.
pip install deepface
Then you will be able to import the library and use its functionalities.
from deepface import DeepFace
Facial Recognition - Demo
A modern face recognition pipeline consists of 5 common stages: detect, align, normalize, represent and verify. While Deepface handles all these common stages in the background, you don’t need to acquire in-depth knowledge about all the processes behind it. You can just call its verification, find or analysis function with a single line of code.
Face Verification - Demo
This function verifies face pairs as same person or different persons. It expects exact image paths as inputs. Passing numpy or based64 encoded images is also welcome. Then, it is going to return a dictionary and you should check just its verified key.
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")
Face recognition - Demo
Face recognition requires applying face verification many times. Herein, deepface has an out-of-the-box find function to handle this action. It's going to look for the identity of input image in the database path and it will return pandas data frame as output.
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db")
Face recognition models - Demo
Deepface is a hybrid face recognition package. It currently wraps many state-of-the-art face recognition models: VGG-Face
, Google FaceNet
, OpenFace
, Facebook DeepFace
, DeepID
, ArcFace
and Dlib
. The default configuration uses VGG-Face model.
models = ["VGG-Face", "Facenet", "Facenet512", "OpenFace", "DeepFace", "DeepID", "ArcFace", "Dlib"]
#face verification
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", model_name = models[1])
#face recognition
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", model_name = models[1])
FaceNet, VGG-Face, ArcFace and Dlib are overperforming ones based on experiments. You can find out the scores of those models below on both Labeled Faces in the Wild and YouTube Faces in the Wild data sets declared by its creators.
Model | LFW Score | YTF Score |
---|---|---|
Facenet512 | 99.65% | - |
ArcFace | 99.41% | - |
Dlib | 99.38 % | - |
Facenet | 99.20% | - |
VGG-Face | 98.78% | 97.40% |
Human-beings | 97.53% | - |
OpenFace | 93.80% | - |
DeepID | - | 97.05% |
Similarity
Face recognition models are regular convolutional neural networks and they are responsible to represent faces as vectors. We expect that a face pair of same person should be more similar than a face pair of different persons.
Similarity could be calculated by different metrics such as Cosine Similarity, Euclidean Distance and L2 form. The default configuration uses cosine similarity.
metrics = ["cosine", "euclidean", "euclidean_l2"]
#face verification
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", distance_metric = metrics[1])
#face recognition
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", distance_metric = metrics[1])
Euclidean L2 form seems to be more stable than cosine and regular Euclidean distance based on experiments.
Facial Attribute Analysis - Demo
Deepface also comes with a strong facial attribute analysis module including age
, gender
, facial expression
(including angry, fear, neutral, sad, disgust, happy and surprise) and race
(including asian, white, middle eastern, indian, latino and black) predictions.
obj = DeepFace.analyze(img_path = "img4.jpg", actions = ['age', 'gender', 'race', 'emotion'])
Age model got ± 4.65 MAE; gender model got 97.44% accuracy, 96.29% precision and 95.05% recall as mentioned in its tutorial.
Streaming and Real Time Analysis - Demo
You can run deepface for real time videos as well. Stream function will access your webcam and apply both face recognition and facial attribute analysis. The function starts to analyze a frame if it can focus a face sequantially 5 frames. Then, it shows results 5 seconds.
DeepFace.stream(db_path = "C:/User/Sefik/Desktop/database")
Even though face recognition is based on one-shot learning, you can use multiple face pictures of a person as well. You should rearrange your directory structure as illustrated below.
user
├── database
│ ├── Alice
│ │ ├── Alice1.jpg
│ │ ├── Alice2.jpg
│ ├── Bob
│ │ ├── Bob.jpg
Face Detectors - Demo
Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that just alignment increases the face recognition accuracy almost 1%. OpenCV
, SSD
, Dlib
, MTCNN
, RetinaFace
and MediaPipe
detectors are wrapped in deepface.
All deepface functions accept an optional detector backend input argument. You can switch among those detectors with this argument. OpenCV is the default detector.
backends = ['opencv', 'ssd', 'dlib', 'mtcnn', 'retinaface', 'mediapipe']
#face verification
obj = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", detector_backend = backends[4])
#face recognition
df = DeepFace.find(img_path = "img.jpg", db_path = "my_db", detector_backend = backends[4])
#facial analysis
demography = DeepFace.analyze(img_path = "img4.jpg", detector_backend = backends[4])
#face detection and alignment
face = DeepFace.detectFace(img_path = "img.jpg", target_size = (224, 224), detector_backend = backends[4])
Face recognition models are actually CNN models and they expect standard sized inputs. So, resizing is required before representation. To avoid deformation, deepface adds black padding pixels according to the target size argument after detection and alignment.
RetinaFace and MTCNN seem to overperform in detection and alignment stages but they are much slower. If the speed of your pipeline is more important, then you should use opencv or ssd. On the other hand, if you consider the accuracy, then you should use retinaface or mtcnn.
The performance of RetinaFace is very satisfactory even in the crowd as seen in the following illustration. Besides, it comes with an incredible facial landmark detection performance. Highlighted red points show some facial landmarks such as eyes, nose and mouth. That's why, alignment score of RetinaFace is high as well.
You can find out more about RetinaFace on this repo.
API - Demo
Deepface serves an API as well. You can clone /api/api.py
and pass it to python command as an argument. This will get a rest service up. In this way, you can call deepface from an external system such as mobile app or web.
python api.py
Face recognition, facial attribute analysis, and vector representation functions are covered in the API. You are expected to call these functions as HTTP POST requests. The service endpoints are http://127.0.0.1:5000/verify for face verification, http://127.0.0.1:5000/analyze for facial attribute analysis, and http://127.0.0.1:5000/represent for vector representation. In this case, you should pass input images as base64-encoded strings. Here, you can find a postman project.
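As a rough sketch of an API call, the snippet below sends a base64-encoded image pair to the verify endpoint with the requests library. The exact payload schema is defined by the postman project, so the field names img1 and img2 here are illustrative assumptions:
import base64
import requests

def to_b64(path):
    #encode an image file as a base64 string for the JSON payload
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode("utf-8")

#field names are assumptions; check the postman project for the real schema
payload = {"img1": to_b64("img1.jpg"), "img2": to_b64("img2.jpg")}
resp = requests.post("http://127.0.0.1:5000/verify", json = payload)
print(resp.json())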
Face recognition models represent facial images as vector embeddings. The idea behind facial recognition is that the vectors of the same person should be more similar than the vectors of different persons. The question is where and how to store facial embeddings in a large-scale system. Herein, deepface offers a representation function to extract vector embeddings from facial images.
embedding = DeepFace.represent(img_path = "img.jpg", model_name = 'Facenet')
The tech stack for storing vector embeddings is vast. To choose the right tool, consider your task (face verification versus face recognition), your priority (speed versus confidence), and your data size.
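To make the similarity idea concrete, here is a minimal sketch that compares two embeddings with cosine similarity. It assumes represent returns a plain vector, as in the snippet above; newer releases may wrap the result differently.
import numpy as np
from deepface import DeepFace

emb1 = np.array(DeepFace.represent(img_path = "img1.jpg", model_name = 'Facenet'))
emb2 = np.array(DeepFace.represent(img_path = "img2.jpg", model_name = 'Facenet'))
#cosine similarity: values closer to 1 suggest the same person
similarity = np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2))
print(similarity)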
Pull requests are welcome. You should run the unit tests locally by running test/unit_tests.py and share the test result logs in the PR. Deepface is currently compatible with both TensorFlow 1 and TensorFlow 2; change requests should satisfy both.
There are many ways to support a project - starring⭐️ the GitHub repo is just one 🙏
You can also support this work on Patreon
Please cite deepface in your publications if it helps your research. Here are the BibTeX entries:
@inproceedings{serengil2020lightface,
title = {LightFace: A Hybrid Deep Face Recognition Framework},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
booktitle = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
pages = {23-27},
year = {2020},
doi = {10.1109/ASYU50717.2020.9259802},
url = {https://doi.org/10.1109/ASYU50717.2020.9259802},
organization = {IEEE}
}
@inproceedings{serengil2021lightface,
title = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
booktitle = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
pages = {1-4},
year = {2021},
doi = {10.1109/ICEET53442.2021.9659697},
url = {https://doi.org/10.1109/ICEET53442.2021.9659697},
organization = {IEEE}
}
Also, if you use deepface in your GitHub projects, please add deepface to your requirements.txt.
Download Details:
Author: serengil
Source Code: https://github.com/serengil/deepface
License: MIT License
1626775355
Few programming languages are as versatile as Python. It enables developers to build cutting-edge applications with little effort, and teams are still exploring the full potential of end-to-end Python development across many sectors.
By sectors, we mean FinTech, HealthTech, InsureTech, cybersecurity, and more. These are New Economy sectors, and Python can serve all of them. Most of them demand massive computational power, and Python is dynamic and powerful enough to handle heavy traffic and substantial algorithmic workloads.
Software development is multidimensional today. Enterprise software calls for intelligent applications with AI and ML capabilities, while consumer-facing applications need data analysis to deliver a better user experience. Netflix, Trello, and Amazon are real examples of such applications, and Python helps build them with ease.
Python can do so many things that developers never run out of reasons to appreciate it. Python application development is not restricted to web and enterprise applications; the language is highly adaptable and suits a wide range of uses.
Robust frameworks
Python is known for its tools and frameworks; there is a framework for almost everything. Django is useful for building web applications, enterprise applications, scientific applications, and numerical computing. Flask is another web framework, a microframework with minimal dependencies.
Web2Py, CherryPy, and Falcon offer strong capabilities for customizing Python development. Most of them are open-source frameworks that allow rapid development; a minimal Flask app is sketched below.
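As a quick illustration of how lightweight these frameworks can be, here is a minimal Flask application; the route and greeting are arbitrary choices for the example:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    #respond to the root route with a plain-text greeting
    return "Hello from Flask!"

if __name__ == "__main__":
    #start the development server (defaults to http://127.0.0.1:5000)
    app.run()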
Simple to read and write
Python has a clean syntax, one that reads much like English. Developers new to Python can easily understand where they stand in the development process, and the ease of writing allows rapid application building.
The motivation behind Python, as stated by its creator Guido van Rossum, was to enable even novice engineers to understand the language. Its simple coding style also lets developers make quick changes without getting lost in unnecessary detail.
Used by the best
Python is not just another programming language; it clearly offers something, which is why industry giants use it, and for varied purposes at that. Developers at Google use Python to build system administration tools, a parallel data pusher, code review, testing and QA, and much more. Netflix uses Python for its recommendation algorithm and its media player.
Massive community support
Python has a steadily growing community that offers enormous support, from beginners to experts. There are plenty of tutorials, documentation, and guides available for Python web development.
Today, many universities start with Python, adding to the number of people in the community. Python developers frequently collaborate on projects and help each other with algorithmic, functional, and application problem-solving.
Cutting-edge applications
Python is the biggest enabler of data science, machine learning, and artificial intelligence at any enterprise software development company. Its use in cutting-edge applications is the most compelling reason for its success: Python is the second most popular tool after R for data analytics.
The ease of organizing, managing, and visualizing data through dedicated libraries makes it ideal for data-driven applications. TensorFlow for neural networks and OpenCV for computer vision are two of Python's most popular use cases in machine learning applications.
Considering the advances in software and technology, Python is a yes for a diverse range of applications. Game development, web application development, GUI development, ML and AI development, enterprise and consumer applications - all of them use Python to its full potential.
The disadvantages of Python web development are often overlooked by developers and organizations because of the benefits it provides. They prioritize quality over speed and performance over errors. That is why it makes sense to use Python for building the applications of the future.