Face-api.js is a JavaScript API for face detection and face recognition in the browser and Node.js, built on top of TensorFlow.js.
Sometimes we need to implement face recognition or face landmark detection. With face-api.js, we can do this with just a few lines of code.
In the face-api.js repository, we can find examples for both browser and Node.js environments. They are simple enough to follow. Let’s see what we can do with them.
In this story, we will implement a complete example that uses face-api.js and its face landmark model to “glassify” faces in an image.
Initialize the TypeScript project:
$ mkdir ts-faceapi-glassify
$ code ts-faceapi-glassify # Open the directory in VS Code
$ npm init -y
$ npm install typescript -D
$ npx tsc --init
TypeScript configuration file: tsconfig.json
{
"compilerOptions": {
"target": "ES2018",
"lib": [
"DOM"
],
"outDir": "./dist",
"module": "CommonJS",
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": [
"./src"
],
"exclude": [
"node_modules",
".git",
"dist"
]
}
#javascript #typescript #coding #programming
Install via pip:
$ pip install pytumblr
Install from source:
$ git clone https://github.com/tumblr/pytumblr.git
$ cd pytumblr
$ python setup.py install
A pytumblr.TumblrRestClient is the object through which you'll make all of your calls to the Tumblr API. Creating one is easy:
client = pytumblr.TumblrRestClient(
'<consumer_key>',
'<consumer_secret>',
'<oauth_token>',
'<oauth_secret>',
)
client.info() # Grabs the current user information
Two easy ways to get your credentials are the interactive_console.py tool described below (if you already have a consumer key & secret) and the Tumblr API console at https://api.tumblr.com/console.
client.info() # get information about the authenticating user
client.dashboard() # get the dashboard for the authenticating user
client.likes() # get the likes for the authenticating user
client.following() # get the blogs followed by the authenticating user
client.follow('codingjester.tumblr.com') # follow a blog
client.unfollow('codingjester.tumblr.com') # unfollow a blog
client.like(id, reblogkey) # like a post
client.unlike(id, reblogkey) # unlike a post
client.blog_info(blogName) # get information about a blog
client.posts(blogName, **params) # get posts for a blog
client.avatar(blogName) # get the avatar for a blog
client.blog_likes(blogName) # get the likes on a blog
client.followers(blogName) # get the followers of a blog
client.blog_following(blogName) # get the publicly exposed blogs that [blogName] follows
client.queue(blogName) # get the queue for a given blog
client.submission(blogName) # get the submissions for a given blog
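For example, here is a hedged sketch of pulling a few recent posts and reading the response; limit is a standard Tumblr API parameter, and the "posts" key in the returned dict is an assumption about the payload shape:
#Hypothetical example: fetch the five most recent posts from a blog
recent = client.posts('codingjester.tumblr.com', limit=5)
for post in recent.get("posts", []):
    print(post.get("id"), post.get("type"))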
Creating posts
PyTumblr lets you create all of the post types that Tumblr supports. When using these types, there are a few default options (such as state, tags, format, and slug, as used in the examples below) that can be passed to any post type.
We'll use these defaults throughout while showcasing the specific post types.
Creating a photo post
Creating a photo post supports a number of options in addition to the default options:
* caption - a string, the user-supplied caption
* link - a string, the "click-through" url for the photo
* source - a string, the url for the photo you want to use (use this or the data parameter)
* data - a list or string, a list of filepaths or a single file path for multipart file upload
#Creates a photo post using a source URL
client.create_photo(blogName, state="published", tags=["testing", "ok"],
source="https://68.media.tumblr.com/b965fbb2e501610a29d80ffb6fb3e1ad/tumblr_n55vdeTse11rn1906o1_500.jpg")
#Creates a photo post using a local filepath
client.create_photo(blogName, state="queue", tags=["testing", "ok"],
tweet="Woah this is an incredible sweet post [URL]",
data="/Users/johnb/path/to/my/image.jpg")
#Creates a photoset post using several local filepaths
client.create_photo(blogName, state="draft", tags=["jb is cool"], format="markdown",
data=["/Users/johnb/path/to/my/image.jpg", "/Users/johnb/Pictures/kittens.jpg"],
caption="## Mega sweet kittens")
Creating a text post
Creating a text post supports the same options as default and just two other parameters:
* title - a string, the optional title for the post. Supports markdown or html
* body - a string, the body of the post. Supports markdown or html
#Creating a text post
client.create_text(blogName, state="published", slug="testing-text-posts", title="Testing", body="testing1 2 3 4")
Creating a quote post
Creating a quote post supports the same options as default and two other parameters:
* quote - a string, the full text of the quote. Supports markdown or html
* source - a string, the cited source. HTML supported
#Creating a quote post
client.create_quote(blogName, state="queue", quote="I am the Walrus", source="Ringo")
Creating a link post
#Create a link post
client.create_link(blogName, title="I like to search things, you should too.", url="https://duckduckgo.com",
description="Search is pretty cool when a duck does it.")
Creating a chat post
Creating a chat post supports the same options as default and two other parameters:
* title - a string, the title of the chat post
* conversation - a string, the text of the conversation/chat, with dialogue labels (no html)
#Create a chat post
chat = """John: Testing can be fun!
Renee: Testing is tedious and so are you.
John: Aw.
"""
client.create_chat(blogName, title="Renee just doesn't understand.", conversation=chat, tags=["renee", "testing"])
Creating an audio post
Creating an audio post allows for all default options and has three other parameters. The only thing to keep in mind while dealing with audio posts is to make sure that you use either the external_url parameter or data - you cannot use both at the same time.
* caption - a string, the caption for your post
* external_url - a string, the url of the site that hosts the audio file
* data - a string, the filepath of the audio file you want to upload to Tumblr
#Creating an audio file
client.create_audio(blogName, caption="Rock out.", data="/Users/johnb/Music/my/new/sweet/album.mp3")
#lets use soundcloud!
client.create_audio(blogName, caption="Mega rock out.", external_url="https://soundcloud.com/skrillex/sets/recess")
Creating a video post
Creating a video post allows for all default options and has three other options. Like the other post types, it has some restrictions: you cannot use the embed and data parameters at the same time.
* caption - a string, the caption for your post
* embed - a string, the HTML embed code for the video
* data - a string, the path of the file you want to upload
#Creating an upload from YouTube
client.create_video(blogName, caption="Jon Snow. Mega ridiculous sword.",
embed="http://www.youtube.com/watch?v=40pUYLacrj4")
#Creating a video post from local file
client.create_video(blogName, caption="testing", data="/Users/johnb/testing/ok/blah.mov")
Editing a post
Updating a post requires knowing what type of post you're updating. You can supply any of the options given above for that post type when updating.
client.edit_post(blogName, id=post_id, type="text", title="Updated")
client.edit_post(blogName, id=post_id, type="photo", data="/Users/johnb/mega/awesome.jpg")
Reblogging a Post
Reblogging a post just requires knowing the post id and the reblog key, which is supplied in the JSON of any post object.
client.reblog(blogName, id=125356, reblog_key="reblog_key")
Deleting a post
Deleting a post just requires that you own the post and have the post id.
client.delete_post(blogName, 123456) # Deletes your post :(
A note on tags: when passing tags as params, please pass them as a list (not a comma-separated string):
client.create_text(blogName, tags=['hello', 'world'], ...)
Getting notes for a post
In order to get the notes for a post, you need to have the post id and the blog that it is on.
data = client.notes(blogName, id='123456')
The results include a timestamp you can use to make future calls.
data = client.notes(blogName, id='123456', before_timestamp=data["_links"]["next"]["query_params"]["before_timestamp"])
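A minimal sketch of paging through all notes with that cursor; the "notes" key in each response is an assumption about the payload shape:
all_notes = []
data = client.notes(blogName, id='123456')
while True:
    all_notes.extend(data.get("notes", []))  # "notes" key assumed
    next_link = data.get("_links", {}).get("next")
    if not next_link:
        break
    data = client.notes(blogName, id='123456',
                        before_timestamp=next_link["query_params"]["before_timestamp"])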
# get posts with a given tag
client.tagged(tag, **params)
This client comes with a nice interactive console to run you through the OAuth process and grab your tokens (and store them for future use).
You'll need pyyaml installed to run it, but then it's just:
$ python interactive-console.py
and away you go! Tokens are stored in ~/.tumblr and are also shared by other Tumblr API clients like the Ruby client.
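If you want to reuse those stored tokens programmatically, a minimal sketch could look like this, assuming ~/.tumblr is a YAML file with consumer_key, consumer_secret, oauth_token and oauth_token_secret keys (the exact key names may differ):
import os
import yaml
import pytumblr

#Load the credentials stored by the interactive console
with open(os.path.expanduser('~/.tumblr')) as f:
    creds = yaml.safe_load(f)

client = pytumblr.TumblrRestClient(
    creds['consumer_key'],
    creds['consumer_secret'],
    creds['oauth_token'],
    creds['oauth_token_secret'],
)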
The tests (and coverage reports) are run with nose, like this:
python setup.py test
Author: tumblr
Source Code: https://github.com/tumblr/pytumblr
License: Apache-2.0 license
Deepface is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework for Python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace, Dlib and SFace.
Experiments show that human beings have 97.53% accuracy on facial recognition tasks whereas those models already reached and passed that accuracy level.
The easiest way to install deepface is to download it from PyPI. It's going to install the library itself and its prerequisites as well.
$ pip install deepface
DeepFace is also available on Conda. You can alternatively install the package via conda.
$ conda install -c conda-forge deepface
Then you will be able to import the library and use its functionalities.
from deepface import DeepFace
Facial Recognition - Demo
A modern face recognition pipeline consists of 5 common stages: detect, align, normalize, represent and verify. While Deepface handles all these common stages in the background, you don’t need to acquire in-depth knowledge about all the processes behind it. You can just call its verification, find or analysis function with a single line of code.
Face Verification - Demo
This function verifies face pairs as the same person or different persons. It expects exact image paths as inputs; passing numpy or base64 encoded images is also fine. It returns a dictionary, and you should check just its verified key.
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")
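For example, the decision can be read from that key:
if result["verified"]:
    print("They are the same person")
else:
    print("They are different persons")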
Face recognition - Demo
Face recognition requires applying face verification many times. Herein, deepface has an out-of-the-box find function to handle this action. It will look for the identity of the input image in the database path and return a pandas data frame as output.
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db")
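The matched images can then be read from the data frame; a minimal sketch, assuming the result exposes an identity column with the matched file paths (print df.columns to confirm for your version):
if len(df) > 0:
    print(df["identity"].values)  # paths of the matched images in db_path
else:
    print("No match found in the database")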
Embeddings
Face recognition models basically represent facial images as multi-dimensional vectors. Sometimes, you need those embedding vectors directly. DeepFace comes with a dedicated representation function.
embedding = DeepFace.represent(img_path = "img.jpg")
This function returns an array as output. The size of the output array differs based on the model name. For instance, VGG-Face is the default model for deepface and it represents facial images as 2622-dimensional vectors.
assert isinstance(embedding, list)
assert len(embedding) == 2622 # holds for the default VGG-Face model
Here, the embedding is also plotted with 2622 slots horizontally. Each slot corresponds to a dimension value in the embedding vector, and the dimension value is explained in the colorbar on the right. Similar to 2D barcodes, the vertical dimension stores no information in the illustration.
Face recognition models - Demo
Deepface is a hybrid face recognition package. It currently wraps many state-of-the-art face recognition models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace, Dlib and SFace. The default configuration uses the VGG-Face model.
models = [
"VGG-Face",
"Facenet",
"Facenet512",
"OpenFace",
"DeepFace",
"DeepID",
"ArcFace",
"Dlib",
"SFace",
]
#face verification
result = DeepFace.verify(img1_path = "img1.jpg",
img2_path = "img2.jpg",
model_name = models[1]
)
#face recognition
df = DeepFace.find(img_path = "img1.jpg",
db_path = "C:/workspace/my_db",
model_name = models[1]
)
#embeddings
embedding = DeepFace.represent(img_path = "img.jpg",
model_name = models[1]
)
FaceNet, VGG-Face, ArcFace and Dlib are the best performing models based on experiments. You can find the scores of those models below on both the Labeled Faces in the Wild and YouTube Faces in the Wild data sets, as declared by their creators.
Model | LFW Score | YTF Score |
---|---|---|
Facenet512 | 99.65% | - |
SFace | 99.60% | - |
ArcFace | 99.41% | - |
Dlib | 99.38% | - |
Facenet | 99.20% | - |
VGG-Face | 98.78% | 97.40% |
Human-beings | 97.53% | - |
OpenFace | 93.80% | - |
DeepID | - | 97.05% |
Similarity
Face recognition models are regular convolutional neural networks, and they are responsible for representing faces as vectors. We expect a face pair of the same person to be more similar than a face pair of different persons.
Similarity can be calculated with different metrics such as cosine similarity, Euclidean distance and L2-normalized Euclidean distance. The default configuration uses cosine similarity.
metrics = ["cosine", "euclidean", "euclidean_l2"]
#face verification
result = DeepFace.verify(img1_path = "img1.jpg",
img2_path = "img2.jpg",
distance_metric = metrics[1]
)
#face recognition
df = DeepFace.find(img_path = "img1.jpg",
db_path = "C:/workspace/my_db",
distance_metric = metrics[1]
)
Euclidean L2 form seems to be more stable than cosine and regular Euclidean distance based on experiments.
Facial Attribute Analysis - Demo
Deepface also comes with a strong facial attribute analysis module, including age, gender, facial expression (angry, fear, neutral, sad, disgust, happy and surprise) and race (asian, white, middle eastern, indian, latino and black) predictions.
obj = DeepFace.analyze(img_path = "img4.jpg",
actions = ['age', 'gender', 'race', 'emotion']
)
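A hedged sketch of reading the analysis result; depending on the deepface version, analyze may return a dict or a list of dicts, and the key names below (age, dominant_race, dominant_emotion) are assumptions worth confirming by inspecting obj:
#normalize across return shapes before reading the predicted attributes
demography = obj[0] if isinstance(obj, list) else obj
print(demography.get("age"))
print(demography.get("dominant_race"))
print(demography.get("dominant_emotion"))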
The age model has a mean absolute error of ±4.65; the gender model has 97.44% accuracy, 96.29% precision and 95.05% recall, as mentioned in its tutorial.
Face Detectors - Demo
Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that alignment alone increases face recognition accuracy by almost 1%. OpenCV, SSD, Dlib, MTCNN, RetinaFace and MediaPipe detectors are wrapped in deepface.
All deepface functions accept an optional detector backend input argument. You can switch among those detectors with this argument. OpenCV is the default detector.
backends = [
'opencv',
'ssd',
'dlib',
'mtcnn',
'retinaface',
'mediapipe'
]
#face verification
obj = DeepFace.verify(img1_path = "img1.jpg",
img2_path = "img2.jpg",
detector_backend = backends[4]
)
#face recognition
df = DeepFace.find(img_path = "img.jpg",
db_path = "my_db",
detector_backend = backends[4]
)
#embeddings
embedding = DeepFace.represent(img_path = "img.jpg",
detector_backend = backends[4]
)
#facial analysis
demography = DeepFace.analyze(img_path = "img4.jpg",
detector_backend = backends[4]
)
#face detection and alignment
face = DeepFace.detectFace(img_path = "img.jpg",
target_size = (224, 224),
detector_backend = backends[4]
)
Face recognition models are actually CNN models and they expect standard-sized inputs, so resizing is required before representation. To avoid deformation, deepface adds black padding pixels according to the target size argument after detection and alignment.
RetinaFace and MTCNN seem to perform best in the detection and alignment stages, but they are much slower. If the speed of your pipeline matters more, use opencv or ssd; if accuracy matters more, use retinaface or mtcnn.
The performance of RetinaFace is very satisfactory even in crowds, as seen in the following illustration. Besides, it comes with incredible facial landmark detection performance. Highlighted red points show some facial landmarks such as eyes, nose and mouth. That is why the alignment score of RetinaFace is high as well.
You can find out more about RetinaFace on this repo.
Real Time Analysis - Demo
You can run deepface for real-time videos as well. The stream function will access your webcam and apply both face recognition and facial attribute analysis. The function starts to analyze a frame once it detects a face in 5 consecutive frames, and then it shows the results for 5 seconds.
DeepFace.stream(db_path = "C:/User/Sefik/Desktop/database")
Even though face recognition is based on one-shot learning, you can use multiple face pictures of a person as well. You should rearrange your directory structure as illustrated below.
user
├── database
│ ├── Alice
│ │ ├── Alice1.jpg
│ │ ├── Alice2.jpg
│ ├── Bob
│ │ ├── Bob.jpg
API - Demo
Deepface serves an API as well. You can clone /api/api.py and pass it to the python command as an argument. This will bring up a REST service; in this way, you can call deepface from an external system such as a mobile app or a web site.
python api.py
Face recognition, facial attribute analysis and vector representation functions are covered in the API. You are expected to call these functions as HTTP POST methods. Service endpoints will be http://127.0.0.1:5000/verify for face recognition, http://127.0.0.1:5000/analyze for facial attribute analysis, and http://127.0.0.1:5000/represent for vector representation. You should pass input images as base64 encoded strings in this case. Here, you can find a postman project.
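A hedged sketch of calling the verify endpoint over HTTP from Python; the payload field names (img1_path and img2_path carrying base64 strings) are assumptions, so check api.py or the postman project for the exact schema:
import base64
import requests

def to_base64(path):
    #encode a local image as a base64 data URI string
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode("utf-8")

payload = {
    "img1_path": to_base64("img1.jpg"),  # field names assumed, see api.py
    "img2_path": to_base64("img2.jpg"),
}
resp = requests.post("http://127.0.0.1:5000/verify", json=payload)
print(resp.json())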
Command Line Interface
DeepFace comes with a command line interface as well. You can access its functions from the command line as shown below. The deepface command expects the function name as the first argument and the function's arguments thereafter.
#face verification
$ deepface verify -img1_path tests/dataset/img1.jpg -img2_path tests/dataset/img2.jpg
#facial analysis
$ deepface analyze -img_path tests/dataset/img1.jpg
Face recognition models represent facial images as vector embeddings. The idea behind facial recognition is that vectors should be more similar for the same person than for different persons. The question is where and how to store facial embeddings in a large-scale system. There is a wide range of technologies for storing vector embeddings; to determine the right tool, you should consider your task (face verification or face recognition), your priority (speed or confidence), and also your data size.
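As a minimal sketch of the idea, embeddings can be computed with DeepFace.represent and compared with cosine similarity in plain numpy; this assumes represent returns a plain list of floats as shown earlier (newer versions may return a list of dicts, in which case take the embedding key of each element instead):
import numpy as np
from deepface import DeepFace

def embed(path):
    rep = DeepFace.represent(img_path = path)
    #normalize across return shapes: a plain list of floats or a list of dicts
    return np.array(rep[0]["embedding"] if isinstance(rep[0], dict) else rep)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

#vectors of the same person should be more similar than vectors of different persons
print(cosine_similarity(embed("img1.jpg"), embed("img2.jpg")))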
Pull requests are welcome! You should run the unit tests locally by running test/unit_tests.py. Once a PR is sent, the GitHub test workflow will run automatically and unit test results will be available in GitHub Actions before approval.
There are many ways to support a project - starring⭐️ the GitHub repo is just one 🙏
You can also support this work on Patreon
Please cite deepface in your publications if it helps your research. Here are its BibTeX entries:
If you use deepface for facial recognition purposes, please cite this publication.
@inproceedings{serengil2020lightface,
title = {LightFace: A Hybrid Deep Face Recognition Framework},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
booktitle = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
pages = {23-27},
year = {2020},
doi = {10.1109/ASYU50717.2020.9259802},
url = {https://doi.org/10.1109/ASYU50717.2020.9259802},
organization = {IEEE}
}
If you use deepface for facial attribute analysis purposes such as age, gender, emotion or ethnicity prediction, please cite this publication.
@inproceedings{serengil2021lightface,
title = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
booktitle = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
pages = {1-4},
year = {2021},
doi = {10.1109/ICEET53442.2021.9659697},
url = {https://doi.org/10.1109/ICEET53442.2021.9659697},
organization = {IEEE}
}
Also, if you use deepface in your GitHub projects, please add deepface to the requirements.txt.
Author: Serengil
Source Code: https://github.com/serengil/deepface
License: MIT license
Not babashka. Node.js babashka!?
Ad-hoc CLJS scripting on Node.js.
Experimental. Please report issues here.
Nbb's main goal is to make it easy to get started with ad hoc CLJS scripting on Node.js.
Additional goals and features are:
Nbb requires Node.js v12 or newer.
CLJS code is evaluated through SCI, the same interpreter that powers babashka. Because SCI works with advanced compilation, the bundle size, especially when combined with other dependencies, is smaller than what you get with self-hosted CLJS. That makes startup faster. The trade-off is that execution is less performant and that only a subset of CLJS is available (e.g. no deftype, yet).
Install nbb from NPM:
$ npm install nbb -g
Omit -g for a local install.
Try out an expression:
$ nbb -e '(+ 1 2 3)'
6
And then install some other NPM libraries to use in the script. E.g.:
$ npm install csv-parse shelljs zx
Create a script which uses the NPM libraries:
(ns script
(:require ["csv-parse/lib/sync$default" :as csv-parse]
["fs" :as fs]
["path" :as path]
["shelljs$default" :as sh]
["term-size$default" :as term-size]
["zx$default" :as zx]
["zx$fs" :as zxfs]
[nbb.core :refer [*file*]]))
(prn (path/resolve "."))
(prn (term-size))
(println (count (str (fs/readFileSync *file*))))
(prn (sh/ls "."))
(prn (csv-parse "foo,bar"))
(prn (zxfs/existsSync *file*))
(zx/$ #js ["ls"])
Call the script:
$ nbb script.cljs
"/private/tmp/test-script"
#js {:columns 216, :rows 47}
510
#js ["node_modules" "package-lock.json" "package.json" "script.cljs"]
#js [#js ["foo" "bar"]]
true
$ ls
node_modules
package-lock.json
package.json
script.cljs
Nbb has first class support for macros: you can define them right inside your .cljs file, like you are used to from JVM Clojure. Consider the plet macro to make working with promises more palatable:
(defmacro plet
[bindings & body]
(let [binding-pairs (reverse (partition 2 bindings))
body (cons 'do body)]
(reduce (fn [body [sym expr]]
(let [expr (list '.resolve 'js/Promise expr)]
(list '.then expr (list 'clojure.core/fn (vector sym)
body))))
body
binding-pairs)))
Using this macro, we can make async code look more like sync code. Consider this puppeteer example:
(-> (.launch puppeteer)
(.then (fn [browser]
(-> (.newPage browser)
(.then (fn [page]
(-> (.goto page "https://clojure.org")
(.then #(.screenshot page #js{:path "screenshot.png"}))
(.catch #(js/console.log %))
(.then #(.close browser)))))))))
Using plet this becomes:
(plet [browser (.launch puppeteer)
page (.newPage browser)
_ (.goto page "https://clojure.org")
_ (-> (.screenshot page #js{:path "screenshot.png"})
(.catch #(js/console.log %)))]
(.close browser))
See the puppeteer example for the full code.
Since v0.0.36, nbb includes promesa, which is a library to deal with promises. The above plet macro is similar to promesa.core/let.
$ time nbb -e '(+ 1 2 3)'
6
nbb -e '(+ 1 2 3)' 0.17s user 0.02s system 109% cpu 0.168 total
The baseline startup time for a script is about 170ms on my laptop. When invoked via npx this adds another 300ms or so, so for faster startup, either use a globally installed nbb or use $(npm bin)/nbb script.cljs to bypass npx.
Nbb does not depend on any NPM dependencies. All NPM libraries loaded by a script are resolved relative to that script. When using the Reagent module, React is resolved in the same way as any other NPM library.
To load .cljs files from local paths or dependencies, you can use the --classpath argument. The current dir is added to the classpath automatically. So if there is a file foo/bar.cljs relative to your current dir, then you can load it via (:require [foo.bar :as fb]). Note that nbb uses the same naming conventions for namespaces and directories as other Clojure tools: foo-bar in the namespace name becomes foo_bar in the directory name.
To load dependencies from the Clojure ecosystem, you can use the Clojure CLI or babashka to download them and produce a classpath:
$ classpath="$(clojure -A:nbb -Spath -Sdeps '{:aliases {:nbb {:replace-deps {com.github.seancorfield/honeysql {:git/tag "v2.0.0-rc5" :git/sha "01c3a55"}}}}}')"
and then feed it to the --classpath
argument:
$ nbb --classpath "$classpath" -e "(require '[honey.sql :as sql]) (sql/format {:select :foo :from :bar :where [:= :baz 2]})"
["SELECT foo FROM bar WHERE baz = ?" 2]
Currently nbb only reads from directories, not jar files, so you are encouraged to use git libs. Support for .jar files will be added later.
The name of the file that is currently being executed is available via nbb.core/*file* or on the metadata of vars:
(ns foo
(:require [nbb.core :refer [*file*]]))
(prn *file*) ;; "/private/tmp/foo.cljs"
(defn f [])
(prn (:file (meta #'f))) ;; "/private/tmp/foo.cljs"
Nbb includes reagent.core, which will be lazily loaded when required. You can use this together with ink to create a TUI application:
$ npm install ink
ink-demo.cljs:
(ns ink-demo
(:require ["ink" :refer [render Text]]
[reagent.core :as r]))
(defonce state (r/atom 0))
(doseq [n (range 1 11)]
(js/setTimeout #(swap! state inc) (* n 500)))
(defn hello []
[:> Text {:color "green"} "Hello, world! " @state])
(render (r/as-element [hello]))
Working with callbacks and promises can become tedious. Since nbb v0.0.36, the promesa.core namespace is included, with the let and do! macros. An example:
(ns prom
(:require [promesa.core :as p]))
(defn sleep [ms]
(js/Promise.
(fn [resolve _]
(js/setTimeout resolve ms))))
(defn do-stuff
[]
(p/do!
(println "Doing stuff which takes a while")
(sleep 1000)
1))
(p/let [a (do-stuff)
b (inc a)
c (do-stuff)
d (+ b c)]
(prn d))
$ nbb prom.cljs
Doing stuff which takes a while
Doing stuff which takes a while
3
Also see API docs.
Since nbb v0.0.75 applied-science/js-interop is available:
(ns example
(:require [applied-science.js-interop :as j]))
(def o (j/lit {:a 1 :b 2 :c {:d 1}}))
(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1
Most of this library is supported in nbb, except the following:
* :syms
* .-x notation. In nbb, you must use keywords.
See the example of what is currently supported.
See the examples directory for small examples.
Also check out these projects built with nbb:
See API documentation.
See this gist on how to convert an nbb script or project to shadow-cljs.
Prerequisites:
To build:
bb release
Run bb tasks for more project-related tasks.
Download Details:
Author: borkdude
Download Link: Download The Source Code
Official Website: https://github.com/borkdude/nbb
License: EPL-1.0
#node #javascript