In this video, I explain the process we have to follow to recognise faces and cover all three steps of face recognition.
I also explain how to install the OpenCV library to perform this task.
If you need an explanation of this code, comment below and I will soon make my next video about it.
If you have any doubts, feel free to ask in the comment section.
Code & Link
https://drive.google.com/drive/folders/1noWNY_ty-2Q8eIHR9eRoPTUDpixH2Uiz?usp=sharing
Don’t forget to create the dataset and trainer folders, and keep all these files in a single folder; otherwise you will get an error while running this program.
OpenCV is a library used to carry out image processing in programming languages such as Python. This project uses OpenCV to build a real-time face detection system with your webcam as the primary camera.
Following are the requirements for it: OpenCV (the opencv-contrib-python build, which provides the cv2.face module used below), NumPy, the haarcascade_frontalface_default.xml cascade file, and a working webcam.
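A typical way to install the Python packages with pip (assuming Python 3; the plain opencv-python wheel does not include cv2.face):
$ pip install numpy opencv-contrib-python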
Approach/Algorithms used: a Haar-cascade classifier (haarcascade_frontalface_default.xml) detects faces in each webcam frame, and the Local Binary Patterns Histograms (LBPH) recogniser identifies them.
How to use: run the first script below to capture face images into the datasets folder, then run the second script to train the recogniser on those images and identify faces from the webcam feed. Press Esc to exit either script.
# Creating the database
# It captures images and stores them in the datasets
# folder under the folder named sub_data
import cv2, sys, numpy, os
haar_file = 'haarcascade_frontalface_default.xml'
# All the face data will be
# present in this folder
datasets = 'datasets'
# These are sub-datasets of the folder;
# for my faces I've used my name. You can
# change the label here
sub_data = 'vivek'
path = os.path.join(datasets, sub_data)
if not os.path.isdir(path):
    os.mkdir(path)
# defining the size of images
(width, height) = (130, 100)
# '0' is used for my webcam;
# if you have any other camera
# attached, use '1' instead
face_cascade = cv2.CascadeClassifier(haar_file)
webcam = cv2.VideoCapture(0)
# The program loops until it has 30 images of the face.
count = 1
while count < 30:
    (_, im) = webcam.read()
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 4)
    for (x, y, w, h) in faces:
        cv2.rectangle(im, (x, y), (x + w, y + h), (255, 0, 0), 2)
        face = gray[y:y + h, x:x + w]
        face_resize = cv2.resize(face, (width, height))
        cv2.imwrite('%s/%s.png' % (path, count), face_resize)
        count += 1
    cv2.imshow('OpenCV', im)
    key = cv2.waitKey(10)
    if key == 27:
        break
The following code should be run after the model has been trained on the faces:
# It helps in identifying the faces
import cv2, sys, numpy, os
size = 4
haar_file = 'haarcascade_frontalface_default.xml'
datasets = 'datasets'
# Part 1: Create the LBPH face recognizer
print('Recognizing faces. Please make sure there is sufficient light...')
# Create a list of images and a list of corresponding names
(images, labels, names, id) = ([], [], {}, 0)
for (subdirs, dirs, files) in os.walk(datasets):
    for subdir in dirs:
        names[id] = subdir
        subjectpath = os.path.join(datasets, subdir)
        for filename in os.listdir(subjectpath):
            path = subjectpath + '/' + filename
            label = id
            images.append(cv2.imread(path, 0))
            labels.append(int(label))
        id += 1
(width, height) = (130, 100)
# Create a NumPy array from the two lists above
(images, labels) = [numpy.array(lis) for lis in [images, labels]]
# OpenCV trains a model from the images
# NOTE FOR OpenCV2: remove '.face'
model = cv2.face.LBPHFaceRecognizer_create()
model.train(images, labels)
# Part 2: Use the trained recognizer on the camera stream
face_cascade = cv2.CascadeClassifier(haar_file)
webcam = cv2.VideoCapture(0)
while True:
    (_, im) = webcam.read()
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(im, (x, y), (x + w, y + h), (255, 0, 0), 2)
        face = gray[y:y + h, x:x + w]
        face_resize = cv2.resize(face, (width, height))
        # Try to recognize the face
        prediction = model.predict(face_resize)
        cv2.rectangle(im, (x, y), (x + w, y + h), (0, 255, 0), 3)
        if prediction[1] < 500:
            cv2.putText(im, '%s - %.0f' %
                        (names[prediction[0]], prediction[1]), (x - 10, y - 10),
                        cv2.FONT_HERSHEY_PLAIN, 1, (0, 255, 0))
        else:
            cv2.putText(im, 'not recognized',
                        (x - 10, y - 10), cv2.FONT_HERSHEY_PLAIN, 1, (0, 255, 0))
    cv2.imshow('OpenCV', im)
    key = cv2.waitKey(10)
    if key == 27:
        break
Note: The above programs will not run on an online IDE.
Screenshots of the Program
Your output may look slightly different because I integrated the above program with the Flask framework.
Running the second program yields results similar to the image below:
face detection
Datasets Storage :
data_sets
A real-time face recognition system is capable of identifying or verifying a person from a video frame. To recognise a face in a frame, you first need to detect whether a face is present. If it is, mark it as a region of interest (ROI), extract the ROI and process it for facial recognition.
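As a compact sketch of that detect-then-recognise flow (assuming a Haar cascade and a trained LBPH model are already loaded, as in the scripts above; the helper name recognise_frame is purely illustrative):
import cv2

def recognise_frame(frame, face_cascade, model, names, size=(130, 100)):
    # Detect faces, extract each region of interest (ROI),
    # and pass it to the trained recogniser
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = cv2.resize(gray[y:y + h, x:x + w], size)
        label, confidence = model.predict(roi)   # lower confidence value = closer match
        print(names.get(label, 'unknown'), confidence)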
This project is divided into two parts: creating a database, and training and testing.
Take pictures of the person to be recognised by running the create_database.py script. It automatically creates a Train folder inside the Database folder containing the face to be recognised. You can rename Train to the person’s name.
While creating the database, the face images must have different expressions, which is why a 0.38-second delay is added in the data-set creation code. In this example, we take about 45 pictures, extract the face from each, convert it to grayscale and save it to the database folder under that name.
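create_database.py is not listed in this article; a sketch of the capture-with-delay idea it describes might look like this (the 0.38 s pause, the ~45 images and the Database/Train folder names come from the text above, everything else is an assumption):
import os, time
import cv2

person = 'Train'                      # rename to the person's name if you like
os.makedirs(os.path.join('Database', person), exist_ok=True)

cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cam = cv2.VideoCapture(0)

for i in range(45):                   # ~45 images with varied expressions
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (130, 100))
        cv2.imwrite(os.path.join('Database', person, '%d.png' % i), face)
    time.sleep(0.38)                  # give the subject time to change expression

cam.release()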
Training and face recognition are done next; the face_rec.py code does everything. The algorithm used here is Local Binary Patterns Histograms (LBPH).
Fig. 1: Screenshot of Haar features
Face detection is the process of finding or locating one or more human faces in a frame or image. The Haar-like features algorithm by Viola and Jones is used for face detection. In Haar features, all human faces share some common properties; these regularities can be matched using Haar features, as shown in Fig. 1.
Two properties common to human faces are:
1. The eye region is darker than the upper cheeks.
2. The nose-bridge region is brighter than the eyes.
The composition of these properties forms matchable facial features:
1. Location and size: eyes, mouth, bridge of the nose.
2. Value: oriented gradients of pixel intensities.
For example, the difference in brightness between the white and black rectangles over a specific area is given by:
value = Σ (pixel intensities in the black area) − Σ (pixel intensities in the white area)
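A minimal NumPy sketch of that computation for a two-rectangle feature (black on top, white below; the function name and layout are illustrative assumptions, and a real cascade evaluates these sums in constant time using an integral image):
import numpy as np

def haar_two_rect_value(gray, x, y, w, h):
    # value = sum(pixels in black area) - sum(pixels in white area)
    window = gray[y:y + h, x:x + w].astype(np.int64)
    black = window[:h // 2, :].sum()   # top half of the window
    white = window[h // 2:, :].sum()   # bottom half of the window
    return black - white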
The four features matched by the Haar algorithm are compared against the image of a face shown on the left of Fig. 1.
The project was tested on Ubuntu 16.04 using OpenCV 2.4.10. The following shell script installs all the dependencies required for OpenCV and also installs OpenCV 2.4.10.
$ sh ./install-opencv.sh
After installing OpenCV, check it in the terminal using the import command, as shown in Fig. 2.
Fig. 2: Checking OpenCV using import command
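One quick way to run that check (the version string you see depends on what was installed):
$ python
>>> import cv2
>>> cv2.__version__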
Fig. 3: Creating the database
1. Create the database by running the script given below (also shown in Fig. 3). Make at least two data sets in the database.
$ python create_database.py person_name
2. Run the recogniser script, as given below:
$ python face_rec.py
This will start the training, and the camera will open up, as shown in Fig. 4. Accuracy depends on the number of data sets as well as the quality and lighting conditions.
Fig. 4: Screenshot of face detection
OpenCV provides the following three face recognisers:
1. EigenFaces face recogniser
2. FisherFaces face recogniser
3. Local Binary Patterns Histograms (LBPH) face recogniser
In this project, LBPH face recognition is used, which is created with the createLBPHFaceRecognizer() function.
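The factory-function name depends on the OpenCV version; a brief sketch of both spellings (use whichever matches your install):
import cv2

# OpenCV 2.4.x, as used in this article:
# model = cv2.createLBPHFaceRecognizer()

# OpenCV 3.3+/4.x with the contrib modules, as in the code earlier on this page:
model = cv2.face.LBPHFaceRecognizer_create()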
LBP works on grayscale images. For every pixel in a grayscale image, a neighbourhood is selected around the current pixel and an LBP value is calculated for that pixel using its neighbourhood.
After calculating the LBP value of the current pixel, the corresponding location in the LBP mask (which has the same height and width as the input image) is updated with the calculated LBP value, as shown in Fig. 5.
Fig. 5: Screenshot of a LBPH face recogniser
In the image, there are eight neighbouring pixels. If the current pixel value is greater than or equal to a neighbouring pixel value, the corresponding bit in the binary array is set to 1; if it is less than the neighbouring pixel value, the bit is set to 0.
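A small sketch of that rule for a single interior pixel, following the convention described above (bit = 1 when the centre value is greater than or equal to the neighbour; the function name and sample patch are illustrative):
import numpy as np

def lbp_value(gray, r, c):
    # 8 neighbours read clockwise from the top-left; each bit is weighted by a power of two
    centre = gray[r, c]
    neighbours = [gray[r - 1, c - 1], gray[r - 1, c], gray[r - 1, c + 1],
                  gray[r, c + 1], gray[r + 1, c + 1], gray[r + 1, c],
                  gray[r + 1, c - 1], gray[r, c - 1]]
    bits = [1 if centre >= n else 0 for n in neighbours]
    return sum(bit << i for i, bit in enumerate(bits))

patch = np.array([[10, 20, 30],
                  [40, 25, 50],
                  [60, 70, 80]], dtype=np.uint8)
print(lbp_value(patch, 1, 1))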
#python #opencv #machine-learning