Face-Mask-Detection using Deep Learning

The novel COVID-19 virus has forced us all to rethink how we live our everyday lives while keeping ourselves and others safe. Face masks have emerged as a simple and effective way to reduce the virus's spread, and face mask detection systems are now in high demand for public transportation, densely populated areas, residential districts, large-scale manufacturers and other enterprises to ensure safety. The goal of today's article is therefore to develop a face mask detector using deep learning.


Table of Contents

  1. About Dataset
  2. Convolutional Neural Network (CNN) Architecture
  3. Training and Evaluation of CNN model
  4. Experiments and Results
  5. What’s Next?

About Dataset

The images used in the dataset are real images of people wearing masks, i.e. the dataset does not contain artificially morphed masked images. The dataset consists of 3835 images belonging to two classes:

  • with_mask: 1916 images
  • without_mask: 1919 images
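Assuming the images are organized into one folder per class (the `with_mask`/`without_mask` layout above; the exact directory names are my assumption, since the article doesn't show the folder structure), the class balance can be verified with a short script:

```python
from pathlib import Path

def count_images(dataset_dir):
    """Count image files in each class subfolder of dataset_dir."""
    counts = {}
    for class_dir in sorted(Path(dataset_dir).iterdir()):
        if class_dir.is_dir():
            counts[class_dir.name] = sum(
                1 for f in class_dir.iterdir()
                if f.suffix.lower() in {'.jpg', '.jpeg', '.png'}
            )
    return counts

# e.g. count_images('dataset') -> {'with_mask': 1916, 'without_mask': 1919}
```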

The images were collected from the following sources:

CNN Architecture

The convolutional neural network, or ConvNet, is a major breakthrough in the field of deep learning. A CNN is a kind of neural network widely used for image recognition and classification; it is mainly used for identifying patterns in images. We don't feed hand-crafted features into it; it identifies features by itself. The main operations of a CNN are Convolution, Pooling (Sub-Sampling), Non-Linearity, and Classification.


  1. **Convolution:** ConvNets derive their name from the convolution operator. The primary purpose of convolution in a ConvNet is to extract features from the input image. Convolution preserves the spatial relationship between pixels by learning image features over small squares of input data.
  2. **Pooling:** The main purpose of pooling is to reduce the size of the input while retaining the important information. It comes in different types, such as Max, Sum and Average. In Max Pooling, we define a window and take the largest element within it; we could also take the average (Average Pooling) or the sum of all elements (Sum Pooling) within that window.
  3. **Non-Linearity:** An activation function is added to a neural network to help it learn complex patterns in the data, i.e. to introduce non-linearity. For example, the ReLU (Rectified Linear Unit) activation is commonly used in CNNs. It replaces all negative pixel values with zero by performing an element-wise operation.
  4. **Fully Connected:** The Fully Connected layer is a traditional Multi-Layer Perceptron that uses a softmax activation function in the output layer (other classifiers, such as SVM, can also be used). The term Fully Connected means that every neuron in the previous layer is connected to every neuron in the next layer. The outputs of the convolutional and pooling layers represent high-level features of the input image; the purpose of the Fully Connected layer is to use these features to classify the input image into the various classes of the training dataset.
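The first three operations above can be illustrated with a toy NumPy sketch (for intuition only; real frameworks implement these far more efficiently, and the image and kernel values here are made up):

```python
import numpy as np

def conv2d(image, kernel):
    # Convolution: slide the kernel over the image and sum element-wise products
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    # Non-linearity: replace every negative value with zero, element-wise
    return np.maximum(x, 0)

def max_pool2d(x, size=2):
    # Max pooling: keep only the largest value in each size x size window
    h, w = x.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = x[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

image = np.array([[1., -2., 3., 0., 2.],
                  [-1., 5., -3., 2., 1.],
                  [0., 1., 2., -4., 0.],
                  [4., -1., 0., 1., 3.],
                  [2., 0., 1., 0., -1.]])
kernel = np.array([[1., 0.], [0., 1.]])   # a made-up 2x2 kernel

features = max_pool2d(relu(conv2d(image, kernel)))  # 5x5 -> 4x4 -> 2x2
print(features.shape)  # (2, 2)
```

A real CNN stacks many such convolution + ReLU + pooling blocks before the Fully Connected layer, and learns the kernel values during training instead of fixing them by hand.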

#covid19 #deep learning

Marget D


Top Deep Learning Development Services | Hire Deep Learning Developer

View more: https://www.inexture.com/services/deep-learning-development/

We at Inexture strategically work on every project we are associated with. We offer a robust set of AI, ML, and DL consulting services. Our team of data scientists and developers works meticulously on every project and adds a personalized touch to it, and we keep our clients aware of everything being done on their project, so a sense of transparency is maintained. Leverage our services for end-to-end support on your next AI project.

#deep learning development #deep learning framework #deep learning expert #deep learning ai #deep learning services

Mikel Okuneva


Top 10 Deep Learning Sessions To Look Forward To At DVDC 2020

The Deep Learning DevCon 2020 (DLDC 2020) has exciting talks and sessions around the latest developments in the field of deep learning that will be interesting not only for professionals in this field but also for enthusiasts who want to build a career in deep learning. The two-day conference, scheduled for 29th and 30th October, will host paper presentations, tech talks and workshops that will cover interesting developments as well as the latest research and advances in this area. Further, with deep learning gaining massive traction, the conference will highlight some fascinating use cases from across the world.

Here are ten interesting talks and sessions of DLDC 2020 that one should definitely attend:


Adversarial Robustness in Deep Learning

By Dipanjan Sarkar

**About:** Adversarial Robustness in Deep Learning is a session presented by Dipanjan Sarkar, a Data Science Lead at Applied Materials and a Google Developer Expert in Machine Learning. In this session, he will focus on adversarial robustness in deep learning: its importance, the different types of adversarial attacks, and some ways to train neural networks that are robust to such attacks. Considering that deep learning has brought us tremendous achievements in computer vision and natural language processing, this talk will be really interesting for people working in this area. Attendees will leave with a comprehensive understanding of adversarial perturbations in deep learning and common recipes for dealing with them.


Imbalance Handling with Combination of Deep Variational Autoencoder and NEATER

By Divye Singh

**About:** Imbalance Handling with Combination of Deep Variational Autoencoder and NEATER is a paper presentation by Divye Singh, who holds a master's degree in Mathematical Modeling and Simulation and is interested in research on artificial intelligence, learning-based systems, machine learning, etc. In this presentation, he will talk about the common problem of class imbalance in medical diagnosis and anomaly detection, and how it can be solved with a deep learning framework. The talk focuses on his paper, which proposes a synergistic over-sampling method that generates informative synthetic minority-class data by filtering the noise from over-sampled examples. He will also showcase experimental results on several real-life imbalanced datasets that demonstrate the effectiveness of the proposed method for binary classification problems.

Default Rate Prediction Models for Self-Employment in Korea using Ridge, Random Forest & Deep Neural Network

By Dongsuk Hong

**About:** This is a paper presentation by Dongsuk Hong, who holds a PhD in Computer Science and works in the big data centre of Korea Credit Information Services. The talk will introduce attendees to machine learning and deep learning models for predicting self-employment default rates using credit information. He will discuss a study in which a DNN model is used for two purposes: as a sub-model for selecting credit information variables, and in a cascade to the final model that predicts default rates. Hong's main research area is the analysis of credit information, and he is particularly interested in evaluating the performance of prediction models based on machine learning and deep learning. This talk will be interesting for deep learning practitioners who want to build a career in this field.

#opinions #attend dldc 2020 #deep learning #deep learning sessions #deep learning talks #dldc 2020 #top deep learning sessions at dldc 2020 #top deep learning talks at dldc 2020

Face Mask Detector with OpenCV, Keras/Tensorflow and Deep Learning

The ongoing novel coronavirus pandemic is known to each and every individual: almost every country has been affected by the devastating Coronavirus (COVID-19) disease, and the pandemic threatens to reverse hard-won gains made in global health and human capital over the past decade. The confinement and social distancing measures mandated over an extended period of time help to flatten the curve of transmission. Artificial intelligence could play an important part in the post-COVID recovery, helping to boost productivity and foster a new generation of innovative companies. While the situation worsens day by day, it is essential for everyone to follow some rules in order to remain safe rather than suffer the consequences.

The frontline warriors are working hard to save lives, but it is every individual's responsibility to fight their own battle and not compromise their health. A few suggested ways to do so are: washing hands regularly, maintaining social distance, wearing a face mask, and staying quarantined if unwell.

Here, I have tried to design a custom deep learning face mask detector using the OpenCV and Keras/TensorFlow libraries, which detects whether an individual is wearing a face mask and raises an alert if not.
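As a sketch of the alerting step only (the face localisation and CNN inference would use OpenCV and the trained Keras model; the `(p_mask, p_no_mask)` ordering and the 0.5 threshold are my assumptions, not values from the article):

```python
def classify_mask(probabilities, threshold=0.5):
    # probabilities is assumed to be the model's (p_mask, p_no_mask) softmax output
    p_mask, p_no_mask = probabilities
    if p_mask >= p_no_mask and p_mask >= threshold:
        return 'Mask', (0, 255, 0)      # green box (BGR), no alert needed
    return 'No Mask', (0, 0, 255)       # red box (BGR), raise the alert

label, color = classify_mask((0.92, 0.08))
print(label)  # Mask
```

The returned colour tuple would be passed to something like `cv2.rectangle` to draw the box around the detected face.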

#face-mask #face-mask-detection #opencv #covid19 #deep-learning

Nat Kutch


Face Landmarks Detection with Deep Learning

Have you ever wondered how Snapchat manages to apply amazing filters to your face? It has been programmed to detect certain marks on your face and project a filter according to those marks. In machine learning, those marks are known as face landmarks. In this article, I will show you how to detect face landmarks with machine learning.
I will start by importing all the libraries we need for this task. I will use PyTorch in this article for face landmarks detection with deep learning. Let's import all the libraries.
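As background on what such a model outputs: landmark models typically emit one flat vector of 2N coordinates, which must be reshaped into (x, y) pairs before plotting. A minimal helper (my own sketch, assuming the common 68-point convention) looks like this:

```python
def to_landmark_pairs(flat_coords):
    # Reshape a flat [x1, y1, x2, y2, ...] prediction into (x, y) tuples
    if len(flat_coords) % 2 != 0:
        raise ValueError('expected an even number of coordinates')
    return [(flat_coords[i], flat_coords[i + 1])
            for i in range(0, len(flat_coords), 2)]

# A 68-point model emits 136 numbers -> 68 (x, y) pairs
pairs = to_landmark_pairs([10, 20, 30, 40])
print(pairs)  # [(10, 20), (30, 40)]
```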

#deep-learning #python #face-landmarks #facedetection #machine-learning #deep learning

Noah Rowe


Object Detection for Robots using Deep Learning

In this post, we will enable a robot named Vector to detect and recognize a large number of objects. At the end, you will see how he names the objects he has detected.

Who is Vector?

Vector is a cute, AI-powered robot that can be your companion. He is curious and independent, and he can make you laugh with his actions. You can also customize him using AI, and we will see how to make this robot detect and recognize various objects in our day-to-day life. If you want to learn about Vector briefly, please watch this short video.

Vector SDK

The Vector SDK gives access to various capabilities of the robot, such as computer vision, artificial intelligence and navigation. You can design your own programs to give this robot certain AI capabilities. Before running the module, install the Vector SDK by following the instructions on this page: https://developer.anki.com/vector/docs/index.html.

Objects detected by Vector

Object Detection using Deep Learning

To detect objects, we will use an object detection algorithm trained on the Google Open Images dataset. The network consists of a ResNet with a Region Proposal Network and can detect more than 600 object categories, which means **Vector** will be able to identify a large number of objects. However, we have a few more dependencies to make Vector recognize those objects. The versions below are from my testing platform using Python 3.6, but you can change them according to the machine on which you will be implementing.

  1. TensorFlow — 1.12.0 (you can install either the CPU or the GPU version)
  2. Keras — 2.2.4
  3. OpenCV — 3

Here is a video of Vector detecting objects.

Running the Module

  1. Please clone or download this repository onto your local machine. After downloading, you need to authenticate the Vector robot so that the SDK can interact with it. To authenticate with the robot, type the following into a Terminal window.
  • python3 -m anki_vector.configure

Please note that the robot and your computer should be connected to the same network. You will then be asked to enter your robot's name, IP address and serial number, which you can find on the robot itself. You will also be asked for the Anki login and password you used to set up your Vector.

  2. If you see “SUCCESS!”, then your robot is connected to your computer, and you can run the module by typing:

Note: Before running this module, please download the pre-trained model from here, and put it inside the data folder.

  • python vector_objectDetection.py

You will now see the following output, where Vector is searching for objects.

Vector grabbed this picture of me posing, and he says:

I can detect Car, Computer monitor, Human face, Computer monitor, Wheel.

The picture was taken by Vector to detect objects

Now let us go through the code step by step.

The code below receives the picture taken by Vector and calls the object_detection module to detect and identify various objects. Once detected, the object names are sent back to Vector so that he can speak them out.
def get_classnames(image_path):
    """
    This function calls the object detection library to detect 600 objects
    :param image_path:
    :return: class labels
    """
    try:
        classes = object_detection(image_path)
        if len(classes) == 0:
            return 'no objects'
        class_list = []
        for class_names in classes:
            class_list.append(class_names)  # collect each detected class label
        print('Labels: {}'.format(classes))
        return ', '.join(class_list)
    except Exception as e:
        print('Exception Handled', e)

#object-detection #artificial-intelligence #deep-learning #robotics #machine-learning #deep learning