Michio JP

1556850698

How to set up face verification the easy way using HTML5 + JavaScript

I’ve created a very simple way to face-match two images using HTML5 and JavaScript. You upload the verification picture you’d like to use, take a snapshot from your camera/webcam’s video stream, then send both to a face matching API to retrieve the result. Simple.

The Github Repo

What You’ll Need:

  • A Webcam/Camera
  • Free Face Recognition API Key
  • Web Server

Before We Begin

Create a Directory For This Project

This directory will be where we put all the files.

Create a Local Web Server

To have full control over the images, a web server is required; otherwise we’d get a tainted-canvas security error. There are several ways to set one up, and I’ve listed below how to do it with Python.

Python

cd C:/DIRECTORY_LOCATION && py -m http.server 8000

You should be able to access the directory through http://localhost:8000

Get Your Free API Key

We’re going to be using Facesoft’s face recognition API. Sign up here to access your free API key, which gives you unlimited API calls, rate-limited to two requests per minute.

Once you’ve logged in, your API key will be visible in the dashboard area.

1. Setup

Create these three files:

  • index.html
  • style.css
  • verify.js

Next, right-click and save the files below into that directory. These image files will be the default images for the uploaded and captured pictures.

SAVE AS “defaultupload.png”

SAVE AS “defaultphoto.png”

2. The HTML

Layout

Copy and paste the layout code below into your “index.html” file. This gives us our framework and design with the help of bootstrap.

<!DOCTYPE html>
<html lang="en">
<head>
  <!-- Required meta tags -->
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- Bootstrap CSS -->
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
<!-- Style CSS -->
  <link rel="stylesheet" href="style.css">
<title>Face Verification</title>
</head>
<body>
  <!-- Page Content -->
  <div class="container">
    <div class="row">
      <div class="col-lg-12 text-center">
        <h1 class="mt-5">Face Verification</h1>
        <p class="lead">Quick and simple face verification using HTML5 and JavaScript</p>
      </div>
    </div>
    <!-- INSERT NEXT CODE HERE -->
  </div>
<!-- Verify JS -->
 <script src="verify.js"></script>
<!-- jQuery first, then Popper.js, then Bootstrap JS -->
  <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
  <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
</body>
</html>

Images, Canvases & Buttons

We now create a second row with three columns: the verification photo, the video stream, and the photo taken from the video stream. Add the code below right after the first row; look for the “INSERT NEXT CODE HERE” comment in the previous code.

<div class="row justify-content-md-center">
  <div class="col-lg-4 text-center">
    <p><strong>Verification Photo</strong></p>
      <!-- Canvas For Uploaded Image -->
      <canvas id="uploadCanvas" width="300"  height="300"></canvas>
      <!-- Default Canvas Image -->
      <img src="defaultupload.png" id="uploadedPhoto" alt="upload"/>
      <!-- Upload Image Input & Upload Photo Button -->
      <input type="file" name="image-upload" accept="image/png, image/jpeg">
      <button id="upload" type="button" class="btn btn-outline-primary btn-lg">Upload Photo</button>
  </div>
  <div class="col-lg-4 text-center">
    <p><strong>Video</strong></p>
    <!-- Camera -->
    <div class="camera-container">
      <video id="video" width="100%" height="300" autoplay="true">
      </video>
    </div>
    <!-- Take Photo Button -->
    <button id="capture" type="button" class="btn btn-outline-primary btn-lg">Take Photo</button>
  </div>
  <div class="col-lg-4 text-center">
    <p><strong>Photo Taken</strong></p>
  
    <!-- Canvas For Capture Taken -->
    <canvas id="captureCanvas" width="300"  height="300"></canvas>
    <!-- Default Canvas Image -->
    <img src="defaultphoto.png" id="capturedPhoto" alt="capture" />
    <!-- Verify Photos Button -->
    <button id="verify" type="button" class="btn btn-outline-success btn-lg">Verify Photo</button>
  </div>
</div>
<!-- INSERT NEXT CODE HERE -->

API Response and Warnings

Next we add the markup that displays the match result, score percentage, and any errors or warnings. Add the code below right under the previous block, at the “INSERT NEXT CODE HERE” comment.

<div class="row">
  <div class="col-lg-12 text-center">
    <!-- API Match Result & API Percentage Score -->
    <h2 id="match" class="mt-5"></h2>
    <p id="score" class="lead"></p>
  </div>
  <div class="col-lg-12 text-center">
    <!-- Error & Warning Alerts -->
    <div class="alert alert-danger" id="errorAlert"></div>
    <div class="alert alert-warning" id="warningAlert"></div>
  </div>
</div>

3. The CSS

Add the code below to your style.css file.

.camera-container {
  max-width: 100%;
  border: 1px solid black;
}
.verification-image {
  width: 300px;
  height: auto;
  max-width: 100%;
}
.btn {
  margin-top: 10px;
}
#captureCanvas, #uploadCanvas {
  display: none;
}
input[name="image-upload"] {
  display: none;
}
#errorAlert, #warningAlert {
  display: none;
}

We’ve set the image-upload input to display: none because we’ll be triggering it from the upload button instead. The canvases are also hidden so that the default images are what’s initially displayed.

Here is what your screen should look like:

4. The JavaScript

document.addEventListener("DOMContentLoaded", function() {
});

In the verify.js file, start by adding an event listener that runs after the page loads. All the code that follows should go inside this function.

Variables

var video = document.getElementById('video'), 
captureCanvas = document.getElementById('captureCanvas'), 
uploadCanvas = document.getElementById('uploadCanvas'), 
captureContext = captureCanvas.getContext('2d'),
uploadContext = uploadCanvas.getContext('2d'),
uploadedPhoto = document.getElementById('uploadedPhoto'),
capturedPhoto = document.getElementById('capturedPhoto'),
imageUploadInput = document.querySelector('[name="image-upload"]'),
apiKey = 'INSERT_YOUR_FACESOFT_API_KEY',
errorAlert = document.getElementById('errorAlert'),
warningAlert = document.getElementById('warningAlert'),
matchText = document.getElementById('match'),
scoreText = document.getElementById('score');

The variables are:

  • IDs for the video element, canvases, photos & API response
  • Selector for image input
  • Canvas contexts
  • API key

Video Stream

Here is some very simple code to access your webcam/camera and stream it into the video element. Add it underneath the variables.

// Stream Camera To Video Element
if(navigator.mediaDevices.getUserMedia){
  navigator.mediaDevices.getUserMedia({ video: true })
  .then(function(stream) {
    video.srcObject = stream;
  }).catch(function(error) {
    console.log(error)
  })
}

If you refresh your page, you should now see the live stream from your camera in the middle column.

Function 1: Set Photo To Canvas

// Set Photo To Canvas Function
function setImageToCanvas(image, id, canvas, context, width=image.width, height=image.height) {
  var ratio = width / height;
  var newWidth = canvas.width;
  var newHeight = newWidth / ratio;
  if (newHeight > canvas.height) {
    newHeight = canvas.height;
    newWidth = newHeight * ratio;
  }
  context.clearRect(0, 0, canvas.width, canvas.height);
  context.drawImage(image, 0, 0, newWidth, newHeight);
  id.setAttribute('src', canvas.toDataURL('image/png'));
}

In this function, we take in the image, the img element (the id argument), the canvas, its context, and a width and height. The width and height default to the image’s own dimensions, but they are parameters because for video we must read the dimensions from video.videoWidth and video.videoHeight.

We also get the aspect ratio of the image so that when we assign an image, it fits right into the canvas. The rest of the code clears the canvas and draws the new image into the canvas.
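
The scaling logic can be sanity-checked in isolation. Here is the same fit-to-canvas math as a pure function (fitToCanvas is a name of my choosing, not part of the tutorial code):

```javascript
// Compute the largest width/height that fits an image into a canvas
// while preserving its aspect ratio (same math as setImageToCanvas).
function fitToCanvas(imgWidth, imgHeight, canvasWidth, canvasHeight) {
  var ratio = imgWidth / imgHeight;
  var newWidth = canvasWidth;
  var newHeight = newWidth / ratio;
  if (newHeight > canvasHeight) {
    newHeight = canvasHeight;
    newWidth = newHeight * ratio;
  }
  return { width: newWidth, height: newHeight };
}

// A 600x400 image on a 300x300 canvas scales to 300x200.
console.log(fitToCanvas(600, 400, 300, 300)); // { width: 300, height: 200 }
```

A portrait image is clamped by height instead, e.g. 400x600 fits as roughly 200x300.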

Function 2: Verify If The Photos Match By Sending Them To The API.

// Facesoft Face Match API Function
function verifyImages(image1, image2, callback){
  var params = {
    image1: image1,
    image2: image2,
  }
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "https://api.facesoft.io/v1/face/match");
  xhr.setRequestHeader("apikey", apiKey);
  xhr.setRequestHeader("Content-Type", "application/json;charset=UTF-8");
  xhr.onload = function(){
    callback(xhr.response);
  }
  xhr.send(JSON.stringify(params));
}

In this function we make an XMLHttpRequest to the face match API endpoint, with the API key (for authorisation) and the content type in the request headers. For the body, we pass a JSON object containing the two images.
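
If you want to inspect what actually goes over the wire, the request can be assembled separately before sending. This sketch (the buildMatchRequest helper is hypothetical, not part of the tutorial) builds the same URL, headers and JSON body used above:

```javascript
// Assemble the face match request (same endpoint, headers and body
// as verifyImages) without actually sending it.
function buildMatchRequest(image1, image2, apiKey) {
  return {
    url: "https://api.facesoft.io/v1/face/match",
    method: "POST",
    headers: {
      "apikey": apiKey,
      "Content-Type": "application/json;charset=UTF-8"
    },
    body: JSON.stringify({ image1: image1, image2: image2 })
  };
}

var req = buildMatchRequest("BASE64_ONE", "BASE64_TWO", "MY_KEY");
console.log(JSON.parse(req.body).image1); // "BASE64_ONE"
```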

Now we’re done with functions :)

Upload Photo Button Click Event

// On Upload Photo Button Click
document.getElementById('upload').addEventListener('click', function(){
  imageUploadInput.click();
})

This click event listener for the upload button triggers a click for the image input.

Image Upload Input Change Event

// On Uploaded Photo Change
imageUploadInput.addEventListener('change', function(){
  // Get File Extension
  var ext = imageUploadInput.files[0]['name'].substring(imageUploadInput.files[0]['name'].lastIndexOf('.') + 1).toLowerCase();
  // If File Exists & Image
  if (imageUploadInput.files && imageUploadInput.files[0] && (ext == "png" || ext == "jpeg" || ext == "jpg")) {
    // Set Photo To Canvas
    var reader = new FileReader();
    reader.onload = function (e) {
      var img = new Image();
      img.src = e.target.result;
      img.onload = function() {
        setImageToCanvas(img, uploadedPhoto, uploadCanvas, uploadContext);
      }
    }
    reader.readAsDataURL(imageUploadInput.files[0]);
  }
})

In this change event listener, we pull out the file extension and check that the input actually holds an image file. We then use a FileReader to load the image and draw it onto the canvas.
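
The extension check is easy to factor out and unit test. A sketch of the same rule (the helper names are my own):

```javascript
// Return the lowercased extension of a filename.
function getExtension(filename) {
  return filename.substring(filename.lastIndexOf('.') + 1).toLowerCase();
}

// Same accepted types as the change listener: png, jpeg, jpg.
function isAcceptedImage(filename) {
  var ext = getExtension(filename);
  return ext === "png" || ext === "jpeg" || ext === "jpg";
}

console.log(isAcceptedImage("me.JPG"));    // true
console.log(isAcceptedImage("notes.txt")); // false
```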

Take Photo Button Click Event

// On Take Photo Button Click
document.getElementById('capture').addEventListener('click', function(){
  setImageToCanvas(video, capturedPhoto, captureCanvas, captureContext, video.videoWidth, video.videoHeight);
})

This event listener calls setImageToCanvas to capture a still frame from the video and draw it onto the capture canvas.

Verify Photo Button Click Event

// On Verify Photo Button Click
document.getElementById('verify').addEventListener('click', function(){
  // Remove Results & Alerts
  errorAlert.style.display = "none";
  warningAlert.style.display = "none";
  matchText.innerHTML = "";
  scoreText.innerHTML = "";
  // Get Base64
  var image1 = captureCanvas.toDataURL().split(',')[1];
  var image2 = uploadCanvas.toDataURL().split(',')[1]; 
  // Verify if images are of the same person
  verifyImages(image1, image2, function(response){
    if(response){
      var obj = JSON.parse(response);
      
      // If Warning Message
     
      if(obj.message){
        errorAlert.style.display = "none";
        warningAlert.style.display = "block";
        warningAlert.innerHTML = obj.message;
        matchText.innerHTML = "";
        scoreText.innerHTML = "";
      }
      // If Error
      else if(obj.error){
        errorAlert.style.display = "block";
        errorAlert.innerHTML = obj.error;
        warningAlert.style.display = "none";
        matchText.innerHTML = "";
        scoreText.innerHTML = "";
      }
      // If Valid
      else{
        errorAlert.style.display = "none";
        warningAlert.style.display = "none";
        matchText.innerHTML = obj.match;
        scoreText.innerHTML = (obj.score*100).toFixed(2)+"% Score";
      }
    }
  })
})

In this event, we first hide the error/warning alerts and remove any text from the match result and score percentage.

We then get the base64 of each image from its canvas, using the split method to drop the “data:image/png;base64,” prefix so the API won’t return an error.

Lastly, we call the verify images function to send the data to the API and our response will be an object either containing the results, an error, or a message.
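
Stripping the data-URL prefix can be expressed as a one-line helper (hypothetical name, same split(',') trick as above):

```javascript
// A data URL looks like "data:image/png;base64,iVBORw0...".
// The API wants only the part after the comma.
function stripDataUrlPrefix(dataUrl) {
  return dataUrl.split(',')[1];
}

var sample = "data:image/png;base64,iVBORw0KGgo=";
console.log(stripDataUrlPrefix(sample)); // "iVBORw0KGgo="
```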

Final Code

// Set Default Images For Uploaded & Captured Photo
setImageToCanvas(uploadedPhoto, uploadedPhoto, uploadCanvas, uploadContext);
setImageToCanvas(capturedPhoto, capturedPhoto, captureCanvas, captureContext);

This draws the default images onto their canvases, so you can verify them straight away without uploading or capturing anything first.

5. The Rock vs Dwayne Johnson

If you click verify, we’ll now see whether the API can tell that The Rock at a young age and the Dwayne Johnson we usually see in films are the same person…

The API correctly identifies the 2 as the same with a 96.53% match score!

Upload, Capture, Verify

In Closing

You should now be able to think of better, more secure, and more complex ways to implement face matching, such as login, 2FA, authorisation, and payments.

Accuracy & Reliability?

The latest results for the best face recognition algorithms in the world are out. The algorithm we’re using placed in the top 10, beating Toshiba, Microsoft and VisionLabs, and came 2nd worldwide in the wild-image test (detection at difficult angles).

#javascript #html5

Hermann Frami

1651383480

A Simple Wrapper Around Amplify AppSync Simulator

This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.

Install

npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator

Usage

This plugin relies on your serverless yml file and on the serverless-offline plugin.

plugins:
  - serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
  - serverless-appsync-simulator
  - serverless-offline

Note: order matters; serverless-appsync-simulator must go before serverless-offline.

To start the simulator, run the following command:

sls offline start

You should see in the logs something like:

...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...

Configuration

Put options under custom.appsync-simulator in your serverless.yml file

| option | default | description |
| ------------------------ | -------------------------- | ----------- |
| apiKey | `0123456789` | When using `API_KEY` as authentication type, the key to authenticate to the endpoint. |
| port | `20002` | AppSync operations port; if using multiple APIs, this value is used as a starting point and each subsequent API gets a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.). |
| wsPort | `20003` | AppSync subscriptions port; if using multiple APIs, this value is used as a starting point and each subsequent API gets a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.). |
| location | `.` (base directory) | Location of the lambda function handlers. |
| refMap | `{}` | A mapping of resource resolutions for the `Ref` function. |
| getAttMap | `{}` | A mapping of resource resolutions for the `GetAtt` function. |
| importValueMap | `{}` | A mapping of resource resolutions for the `ImportValue` function. |
| functions | `{}` | A mapping of external functions for providing invoke URLs for external functions. |
| dynamoDb.endpoint | `http://localhost:8000` | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local; otherwise the port is taken from the dynamodb-local config. |
| dynamoDb.region | `localhost` | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | `DEFAULT_ACCESS_KEY` | AWS Access Key ID to access DynamoDB. |
| dynamoDb.secretAccessKey | `DEFAULT_SECRET` | AWS Secret Key to access DynamoDB. |
| dynamoDb.sessionToken | `DEFAULT_ACCESS_TOKEEN` | AWS Session Token to access DynamoDB; only needed if you have temporary security credentials configured on AWS. |
| dynamoDb.* | | Any other configuration accepted by the DynamoDB SDK. |
| rds.dbName | | Name of the database. |
| rds.dbHost | | Database host. |
| rds.dbDialect | | Database dialect. Possible values: `mysql` \| `postgres`. |
| rds.dbUsername | | Database username. |
| rds.dbPassword | | Database password. |
| rds.dbPort | | Database port. |
| watch | `["*.graphql", "*.vtl"]` | Array of glob patterns to watch for hot-reloading. |

Example:

custom:
  appsync-simulator:
    location: '.webpack/service' # use webpack build directory
    dynamoDb:
      endpoint: 'http://my-custom-dynamo:8000'

Hot-reloading

By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (not yet; this is a Serverless Framework limitation). You will need to restart the simulator each time you change yml files.

Hot-reloading relies on watchman. Make sure it is installed on your system.

You can change the files being watched with the watch option, which is then passed to watchman as the match expression.

e.g.

custom:
  appsync-simulator:
    watch:
      - ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
      - "*.graphql"                                 # => string like this is equivalent to `["match", "*.graphql"]`

Or you can opt out by leaving an empty array or setting the option to false.

Note: functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that tool instead.

Resource CloudFormation functions resolution

This plugin supports resolution of some resources from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other CloudFormation functions such as Fn::Join, Fn::Sub, etc.

Note: under the hood, this feature relies on the cfn-resolver-lib package. For more info on the supported CloudFormation functions, refer to its documentation.

Basic usage

You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.

provider:
  environment:
    BUCKET_NAME:
      Ref: MyBucket # resolves to `my-bucket-name`

resources:
  Resources:
    MyDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
      ...
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-name
    ...

# in your appsync config
dataSources:
  - type: AMAZON_DYNAMODB
    name: dynamosource
    config:
      tableName:
        Ref: MyDbTable # resolves to `myTable`

Override (or mock) values

Sometimes, some references cannot be resolved, as they come from an Output from Cloudformation; or you might want to use mocked values in your local environment.

In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.

  • refMap takes a mapping of resource name to value pairs
  • getAttMap takes a mapping of resource name to attribute/values pairs
  • importValueMap takes a mapping of import name to values pairs

Example:

custom:
  appsync-simulator:
    refMap:
      # Override `MyDbTable` resolution from the previous example.
      MyDbTable: 'mock-myTable'
    getAttMap:
      # define ElasticSearchInstance DomainName
      ElasticSearchInstance:
        DomainEndpoint: 'localhost:9200'
    importValueMap:
      other-service-api-url: 'https://other.api.url.com/graphql'

# in your appsync config
dataSources:
  - type: AMAZON_ELASTICSEARCH
    name: elasticsource
    config:
      # endpoint resolves as 'http://localhost:9200'
      endpoint:
        Fn::Join:
          - ''
          - - https://
            - Fn::GetAtt:
                - ElasticSearchInstance
                - DomainEndpoint

Key-value mock notation

In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.

This notation can be used with all mocks - refMap, getAttMap and importValueMap

provider:
  environment:
    FINISH_ACTIVITY_FUNCTION_ARN:
      Fn::ImportValue: other-service-api-${self:provider.stage}-url

custom:
  serverless-appsync-simulator:
    importValueMap:
      - key: other-service-api-${self:provider.stage}-url
        value: 'https://other.api.url.com/graphql'

Limitations

This plugin only tries to resolve the following parts of the yml tree:

  • provider.environment
  • functions[*].environment
  • custom.appSync

If you have the need of resolving others, feel free to open an issue and explain your use case.

For now, the resources that can be automatically resolved by Ref: are:

  • DynamoDb tables
  • S3 Buckets

Feel free to open a PR or an issue to extend them as well.

External functions

When a function is not defined within the current serverless file, you can still call it by providing an invoke URL, which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".

custom:
  appsync-simulator:
    functions:
      addUser:
        url: http://localhost:3016/2015-03-31/functions/addUser/invocations
        method: post
      addPost:
        url: https://jsonplaceholder.typicode.com/posts
        method: post

Supported Resolver types

This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.

From AWS Amplify:

  • NONE
  • AWS_LAMBDA
  • AMAZON_DYNAMODB
  • PIPELINE

Implemented by this plugin

  • AMAZON_ELASTIC_SEARCH
  • HTTP
  • RELATIONAL_DATABASE

Relational Database

Sample VTL for a create mutation

#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $regex = "([a-z])([A-Z]+)")
  #set( $replacement = "$1_$2")
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #set( $discard = $cols.add("$toSnake") )
  #if( $util.isBoolean($ctx.args.input[$entry]) )
      #if( $ctx.args.input[$entry] )
        #set( $discard = $vals.add("1") )
      #else
        #set( $discard = $vals.add("0") )
      #end
  #else
      #set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
  #end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
  #set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
  #set( $colStr = "($colStr)" )
#end
{
  "version": "2018-05-29",
  "statements":   ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM    <name-of-table> ORDER BY id DESC LIMIT 1"]
}

Sample VTL for an update mutation

#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $cur = $ctx.args.input[$entry] )
  #set( $regex = "([a-z])([A-Z]+)")
  #set( $replacement = "$1_$2")
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #if( $util.isBoolean($cur) )
      #if( $cur )
        #set ( $cur = "1" )
      #else
        #set ( $cur = "0" )
      #end
  #end
  #if ( $util.isNullOrEmpty($update) )
      #set($update = "$toSnake$equals'$cur'" )
  #else
      #set($update = "$update,$toSnake$equals'$cur'" )
  #end
#end
{
  "version": "2018-05-29",
  "statements":   ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}

Sample resolver for delete mutation

{
  "version": "2018-05-29",
  "statements":   ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}

Sample mutation response VTL with support for handling AWSDateTime

#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
    #set ($index = $index + 1)
    #if ( $column["typeName"] == "timestamptz" )
        #set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
        #set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
        #set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
        $util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
    #end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
    #set ( $s = $mapKey.split("_") )
    #set ( $camelCase="" )
    #set ( $isFirst=true )
    #foreach($entry in $s)
        #if ( $isFirst )
          #set ( $first = $entry.substring(0,1) )
        #else
          #set ( $first = $entry.substring(0,1).toUpperCase() )
        #end
        #set ( $isFirst=false )
        #set ( $stringLength = $entry.length() )
        #set ( $remaining = $entry.substring(1, $stringLength) )
        #set ( $camelCase = "$camelCase$first$remaining" )
    #end
    $util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
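
The snake_case-to-camelCase loop at the end of that template is easier to follow outside VTL. Here is the same transformation sketched in Python, purely as an illustration (AppSync executes the VTL above, not this):

```python
def snake_to_camel(name):
    """Mirror the VTL loop: split on '_', keep the first chunk's initial
    as-is, and capitalize the initial of every later chunk."""
    parts = name.split("_")
    out = ""
    first = True
    for part in parts:
        if not part:
            continue
        head = part[0] if first else part[0].upper()
        out += head + part[1:]
        first = False
    return out

print(snake_to_camel("created_at"))       # createdAt
print(snake_to_camel("user_profile_id"))  # userProfileId
```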

Using Variable Map

Variable map support is limited and does not differentiate between number and string data types; inject values directly if needed.

null, true, and false values will be escaped properly.

{
  "version": "2018-05-29",
  "statements":   [
    "UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
    "SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
  ],
  variableMap: {
    ":ID": $ctx.args.id,
##    ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
  }
}


Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator 
License: MIT License

#serverless #sync #graphql 

A Lightweight Face Recognition and Facial Attribute Analysis

deepface

Deepface is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework for python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace and Dlib.

Experiments show that human beings have 97.53% accuracy on facial recognition tasks whereas those models already reached and passed that accuracy level.

Installation

The easiest way to install deepface is to download it from PyPI. It's going to install the library itself and its prerequisites as well. The library is mainly based on TensorFlow and Keras.

pip install deepface

Then you will be able to import the library and use its functionalities.

from deepface import DeepFace

Facial Recognition - Demo

A modern face recognition pipeline consists of 5 common stages: detect, align, normalize, represent and verify. While Deepface handles all these common stages in the background, you don’t need to acquire in-depth knowledge about all the processes behind it. You can just call its verification, find or analysis function with a single line of code.

Face Verification - Demo

This function verifies face pairs as the same person or different persons. It expects exact image paths as inputs; passing numpy arrays or base64 encoded images is also fine. It returns a dictionary, and you should check just its verified key.

result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")

Face recognition - Demo

Face recognition requires applying face verification many times. Deepface has an out-of-the-box find function to handle this: it looks for the identity of the input image in the database path and returns a pandas data frame as output.

df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db")

Face recognition models - Demo

Deepface is a hybrid face recognition package. It currently wraps many state-of-the-art face recognition models: VGG-Face , Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace and Dlib. The default configuration uses VGG-Face model.

models = ["VGG-Face", "Facenet", "Facenet512", "OpenFace", "DeepFace", "DeepID", "ArcFace", "Dlib"]

#face verification
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", model_name = models[1])

#face recognition
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", model_name = models[1])

FaceNet, VGG-Face, ArcFace and Dlib are the strongest performers based on experiments. Below are the scores of those models on both the Labeled Faces in the Wild and YouTube Faces in the Wild data sets, as declared by their creators.

| Model | LFW Score | YTF Score |
| ----- | --------- | --------- |
| Facenet512 | 99.65% | - |
| ArcFace | 99.41% | - |
| Dlib | 99.38% | - |
| Facenet | 99.20% | - |
| VGG-Face | 98.78% | 97.40% |
| Human beings | 97.53% | - |
| OpenFace | 93.80% | - |
| DeepID | - | 97.05% |

Similarity

Face recognition models are regular convolutional neural networks responsible for representing faces as vectors. We expect a face pair of the same person to be more similar than a face pair of different persons.

Similarity can be calculated with different metrics such as cosine similarity, Euclidean distance and L2-normalized Euclidean distance. The default configuration uses cosine similarity.

metrics = ["cosine", "euclidean", "euclidean_l2"]

#face verification
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", distance_metric = metrics[1])

#face recognition
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", distance_metric = metrics[1])

Euclidean L2 form seems to be more stable than cosine and regular Euclidean distance based on experiments.
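
To make the metrics concrete, here is a pure-Python sketch (no deepface required) of what the three distances compute for two embedding vectors:

```python
import math

def cosine_distance(a, b):
    # 1 - cos(angle between a and b); 0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (na * nb)

def euclidean(a, b):
    # Straight-line distance between the two vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def l2_normalize(a):
    # Scale the vector to unit length.
    n = math.sqrt(sum(x * x for x in a))
    return [x / n for x in a]

def euclidean_l2(a, b):
    # Euclidean distance after L2-normalizing both vectors.
    return euclidean(l2_normalize(a), l2_normalize(b))

# Toy "embeddings": same direction -> cosine distance 0.
print(round(cosine_distance([1, 2, 3], [2, 4, 6]), 10))  # 0.0
print(euclidean([0, 0], [3, 4]))                         # 5.0
```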

Facial Attribute Analysis - Demo

Deepface also comes with a strong facial attribute analysis module including age, gender, facial expression (including angry, fear, neutral, sad, disgust, happy and surprise) and race (including asian, white, middle eastern, indian, latino and black) predictions.

obj = DeepFace.analyze(img_path = "img4.jpg", actions = ['age', 'gender', 'race', 'emotion'])

Age model got ± 4.65 MAE; gender model got 97.44% accuracy, 96.29% precision and 95.05% recall as mentioned in its tutorial.

Streaming and Real Time Analysis - Demo

You can run deepface on real-time video as well. The stream function accesses your webcam and applies both face recognition and facial attribute analysis. It starts analyzing once it detects a face in 5 sequential frames, then shows the results for 5 seconds.

DeepFace.stream(db_path = "C:/User/Sefik/Desktop/database")

Even though face recognition is based on one-shot learning, you can use multiple face pictures of a person as well. You should rearrange your directory structure as illustrated below.

user
├── database
│   ├── Alice
│   │   ├── Alice1.jpg
│   │   ├── Alice2.jpg
│   ├── Bob
│   │   ├── Bob.jpg

Face Detectors - Demo

Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that alignment alone increases face recognition accuracy by almost 1%. OpenCV, SSD, Dlib, MTCNN and RetinaFace detectors are wrapped in deepface.

All deepface functions accept an optional detector backend input argument. You can switch among those detectors with this argument. OpenCV is the default detector.

backends = ['opencv', 'ssd', 'dlib', 'mtcnn', 'retinaface']

#face verification
obj = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", detector_backend = backends[4])

#face recognition
df = DeepFace.find(img_path = "img.jpg", db_path = "my_db", detector_backend = backends[4])

#facial analysis
demography = DeepFace.analyze(img_path = "img4.jpg", detector_backend = backends[4])

#face detection and alignment
face = DeepFace.detectFace(img_path = "img.jpg", target_size = (224, 224), detector_backend = backends[4])

Face recognition models are actually CNN models and they expect standard-sized inputs, so resizing is required before representation. To avoid deformation, deepface adds black padding pixels according to the target size argument after detection and alignment.

RetinaFace and MTCNN seem to outperform the others in the detection and alignment stages, but they are much slower. If the speed of your pipeline is more important, you should use opencv or ssd; if accuracy is, you should use retinaface or mtcnn.

The performance of RetinaFace is very satisfactory even in crowds, as seen in the following illustration. It also comes with incredible facial landmark detection performance: the highlighted red points show facial landmarks such as the eyes, nose and mouth. That is why the alignment score of RetinaFace is high as well.

You can find out more about RetinaFace on this repo.

API - Demo

Deepface serves an API as well. You can clone /api/api.py and pass it to the python command as an argument; this will bring up a REST service. In this way, you can call deepface from an external system such as a mobile app or the web.

python api.py

Face recognition, facial attribute analysis and vector representation functions are covered in the API. You are expected to call these functions as HTTP POST methods. The service endpoints are http://127.0.0.1:5000/verify for face recognition, http://127.0.0.1:5000/analyze for facial attribute analysis, and http://127.0.0.1:5000/represent for vector representation. You should pass input images as base64-encoded strings in this case. Here, you can find a postman project.
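Since these endpoints are plain HTTP POSTs with JSON bodies, a web client can call them directly. Below is a minimal browser-side sketch; the img1/img2 field names are an assumption for illustration only, so check api.py for the exact schema your version expects.

```javascript
// Build a fetch request for the local deepface service. The img1/img2
// field names are hypothetical; api.py defines the real schema.
function buildVerifyRequest(img1Base64, img2Base64) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Images are sent as base64-encoded strings, as described above.
    body: JSON.stringify({ img1: img1Base64, img2: img2Base64 }),
  };
}

// Usage (browser or Node 18+), against the local service:
// fetch('http://127.0.0.1:5000/verify', buildVerifyRequest(b64a, b64b))
//   .then(res => res.json())
//   .then(console.log);
console.log(buildVerifyRequest('AAAA', 'BBBB').body); // {"img1":"AAAA","img2":"BBBB"}
```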

Tech Stack - Vlog, Tutorial

Face recognition models represent facial images as vector embeddings. The idea behind facial recognition is that the vectors of the same person should be more similar to each other than the vectors of different persons. The question is where and how to store facial embeddings in a large-scale system. Herein, deepface offers a representation function to find vector embeddings from facial images.

embedding = DeepFace.represent(img_path = "img.jpg", model_name = 'Facenet')

The tech stack for storing vector embeddings is vast. To determine the right tool, you should consider your task (face verification or face recognition), your priority (speed or confidence), and also your data size.
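As a toy sketch of the problem (in-memory JavaScript arrays with exact cosine search; a real large-scale system would use a vector database or an approximate nearest-neighbour index instead):

```javascript
// Cosine similarity between two embedding vectors.
function cosine(a, b) {
  var dot = 0, na = 0, nb = 0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / Math.sqrt(na * nb);
}

// Return the name of the stored identity most similar to the query embedding.
function nearest(db, query) {
  var best = null, bestScore = -Infinity;
  for (var name in db) {
    var score = cosine(db[name], query);
    if (score > bestScore) { bestScore = score; best = name; }
  }
  return best;
}

// Tiny hypothetical "database" of 3-dimensional embeddings for illustration.
var db = { alice: [1, 0, 0], bob: [0, 1, 0] };
console.log(nearest(db, [0.9, 0.1, 0])); // alice
```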

Contribution

Pull requests are welcome. You should run the unit tests locally by running test/unit_tests.py and share the unit test result logs in the PR. Deepface is currently compatible with both TF 1 and TF 2, and change requests should satisfy both requirements.

Support

There are many ways to support a project - starring⭐️ the GitHub repo is just one 🙏

You can also support this work on Patreon

 

Citation

Please cite deepface in your publications if it helps your research. Here are its BibTeX entries:

@inproceedings{serengil2020lightface,
  title        = {LightFace: A Hybrid Deep Face Recognition Framework},
  author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle    = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
  pages        = {23-27},
  year         = {2020},
  doi          = {10.1109/ASYU50717.2020.9259802},
  url          = {https://doi.org/10.1109/ASYU50717.2020.9259802},
  organization = {IEEE}
}
@inproceedings{serengil2021lightface,
  title        = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
  author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle    = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
  pages        = {1-4},
  year         = {2021},
  doi          = {10.1109/ICEET53442.2021.9659697},
  url          = {https://doi.org/10.1109/ICEET53442.2021.9659697},
  organization = {IEEE}
}

Also, if you use deepface in your GitHub projects, please add deepface to your requirements.txt.

Author: Serengil
Source Code: https://github.com/serengil/deepface 
License: MIT License

#python #machine-learning 


Michio JP

1556850698

How to set up face verification the easy way using HTML5 + JavaScript

I’ve created a very simple way to face-match two images using HTML5 and JavaScript. You upload the verification picture you’d like to use, take a snapshot from the video streaming from your camera/webcam, then use a face-matching API to retrieve the results. Simple.

The Github Repo

What You’ll Need:

  • A Webcam/Camera
  • Free Face Recognition API Key
  • Web Server

Before We Begin

Create a Directory For This Project

This directory will be where we put all the files.

Create a Local Web Server

In order to have full control over the images, a web server is required; otherwise we’d get a tainted-canvas security error when reading image data back from the canvas. There are several ways to do this, and I’ve listed below how to do it with Python.

Python

cd C:/DIRECTORY_LOCATION & py -m http.server 8000

You should be able to access the directory through http://localhost:8000

Get Your Free API Key

We’re going to be using Facesoft’s face recognition API. Quickly sign up here to get your free API key, which allows unlimited API calls at a rate of up to two requests per minute.

Once you’ve logged in, your API key will be visible in the dashboard area.

1. Setup

Create these three files:

  • index.html
  • style.css
  • verify.js

Next, right-click and save the images below into that directory. They will serve as the default images for the uploaded and verification photos.

SAVE AS “defaultupload.png”

SAVE AS “defaultphoto.png”

2. The HTML

Layout

Copy and paste the layout code below into your “index.html” file. This gives us our framework and design with the help of Bootstrap.

<!DOCTYPE html>
<html lang="en">
<head>
  <!-- Required meta tags -->
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- Bootstrap CSS -->
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
<!-- Style CSS -->
  <link rel="stylesheet" href="style.css">
<title>Face Verification</title>
</head>
<body>
  <!-- Page Content -->
  <div class="container">
    <div class="row">
      <div class="col-lg-12 text-center">
        <h1 class="mt-5">Face Verification</h1>
        <p class="lead">Quick and simple face verification using HTML5 and JavaScript</p>
      </div>
    </div>
    <!-- INSERT NEXT CODE HERE -->
  </div>
<!-- Verify JS -->
 <script src="verify.js"></script>
<!-- jQuery first, then Popper.js, then Bootstrap JS -->
  <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
  <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
</body>
</html>

Images, Canvases & Buttons

We now create a row with three columns: one for the verification photo, one for the video stream, and one for the photo taken from the video stream. Add the code below right after the previous row.

Check for “INSERT NEXT CODE HERE” tag in previous code.

<div class="row justify-content-md-center">
  <div class="col-lg-4 text-center">
    <p><strong>Verification Photo</strong></p>
      <!-- Canvas For Uploaded Image -->
      <canvas id="uploadCanvas" width="300"  height="300"></canvas>
      <!-- Default Canvas Image -->
      <img src="defaultupload.png" id="uploadedPhoto" alt="upload"/>
      <!-- Upload Image Input & Upload Photo Button -->
      <input type="file" name="image-upload" accept="image/png, image/jpeg">
      <button id="upload" type="button" class="btn btn-outline-primary btn-lg">Upload Photo</button>
  </div>
  <div class="col-lg-4 text-center">
    <p><strong>Video</strong></p>
    <!-- Camera -->
    <div class="camera-container">
      <video id="video" width="100%" height="300" autoplay="true">
      </video>
    </div>
    <!-- Take Photo Button -->
    <button id="capture" type="button" class="btn btn-outline-primary btn-lg">Take Photo</button>
  </div>
  <div class="col-lg-4 text-center">
    <p><strong>Photo Taken</strong></p>
  
    <!-- Canvas For Capture Taken -->
    <canvas id="captureCanvas" width="300"  height="300"></canvas>
    <!-- Default Canvas Image -->
    <img src="defaultphoto.png" id="capturedPhoto" alt="capture" />
    <!-- Verify Photos Button -->
    <button id="verify" type="button" class="btn btn-outline-success btn-lg">Verify Photo</button>
  </div>
</div>
<!-- INSERT NEXT CODE HERE -->

API Response and Warnings

The next code we add displays the match result, score percentage, errors and warnings. Add it right under the last code we added.

<div class="row">
  <div class="col-lg-12 text-center">
    <!-- API Match Result & API Percentage Score -->
    <h2 id="match" class="mt-5"></h2>
    <p id="score" class="lead"></p>
  </div>
  <div class="col-lg-12 text-center">
    <!-- Error & Warning Alerts -->
    <div class="alert alert-danger" id="errorAlert"></div>
    <div class="alert alert-warning" id="warningAlert"></div>
  </div>
</div>

3. The CSS

Add the code below to your style.css file.

.camera-container {
  max-width: 100%;
  border: 1px solid black;
}
.verification-image {
  width: 300px;
  height: auto;
  max-width: 100%;
}
.btn {
  margin-top: 10px;
}
#captureCanvas, #uploadCanvas {
  display: none;
}
input[name="image-upload"] {
  display: none;
}
#errorAlert, #warningAlert {
  display: none;
}

We’ve set the image upload input to display: none because we’ll trigger it from the upload button. The canvases are also set to display: none so that the default images are shown initially.

Here is what your screen should look like:

4. The JavaScript

document.addEventListener("DOMContentLoaded", function() {
});

In the verify.js file we want to start off by adding an event listener that runs after the page loads. All the code we write should go inside this function.

Variables

var video = document.getElementById('video'), 
captureCanvas = document.getElementById('captureCanvas'), 
uploadCanvas = document.getElementById('uploadCanvas'), 
captureContext = captureCanvas.getContext('2d'),
uploadContext = uploadCanvas.getContext('2d'),
uploadedPhoto = document.getElementById('uploadedPhoto'),
capturedPhoto = document.getElementById('capturedPhoto'),
imageUploadInput = document.querySelector('[name="image-upload"]'),
apiKey = 'INSERT_YOUR_FACESOFT_API_KEY',
errorAlert = document.getElementById('errorAlert'),
warningAlert = document.getElementById('warningAlert'),
matchText = document.getElementById('match'),
scoreText = document.getElementById('score');

The variables are:

  • IDs for the video element, canvases, photos & API response
  • Selector for image input
  • Canvas contexts
  • API key

Video Stream

Here is some very simple code to access your webcam/camera and stream it into the video element. Add it underneath the variables.

// Stream Camera To Video Element
if(navigator.mediaDevices.getUserMedia){
  navigator.mediaDevices.getUserMedia({ video: true })
  .then(function(stream) {
    video.srcObject = stream;
  }).catch(function(error) {
    console.log(error)
  })
}

If you refresh your page, this is what you’ll see:

:D

Function 1: Set Photo To Canvas

// Set Photo To Canvas Function
function setImageToCanvas(image, id, canvas, context, width=image.width, height=image.height) {
  var ratio = width / height;
  var newWidth = canvas.width;
  var newHeight = newWidth / ratio;
  if (newHeight > canvas.height) {
    newHeight = canvas.height;
    newWidth = newHeight * ratio;
  }
  context.clearRect(0, 0, canvas.width, canvas.height);
  context.drawImage(image, 0, 0, newWidth, newHeight);
  id.setAttribute('src', canvas.toDataURL('image/png'));
}

In this function, we take in the image, id, canvas, context, width and height. We take the width and height as parameters because, for the video element, the dimensions have to come from video.videoWidth and video.videoHeight.

We also use the image’s aspect ratio so that when we assign an image, it fits neatly inside the canvas. The rest of the code clears the canvas and draws the new image onto it.
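As a worked example, the fitting math can be pulled out into a pure function; this is a simplified sketch of what setImageToCanvas computes before drawing:

```javascript
// Fit an image of (width x height) into a canvas of (canvasW x canvasH)
// while preserving aspect ratio: scale to the canvas width first, then
// clamp to the canvas height if the result is too tall.
function fitToCanvas(width, height, canvasW, canvasH) {
  var ratio = width / height;
  var newWidth = canvasW;
  var newHeight = newWidth / ratio;
  if (newHeight > canvasH) {
    newHeight = canvasH;
    newWidth = newHeight * ratio;
  }
  return { newWidth: newWidth, newHeight: newHeight };
}

console.log(fitToCanvas(600, 400, 300, 300)); // { newWidth: 300, newHeight: 200 }
console.log(fitToCanvas(300, 600, 300, 300)); // { newWidth: 150, newHeight: 300 }
```

A landscape 600×400 image is limited by the canvas width; a portrait 300×600 image is limited by the canvas height.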

Function 2: Verify If The Photos Match By Sending Them To The API.

// Facesoft Face Match API Function
function verifyImages(image1, image2, callback){
  var params = {
    image1: image1,
    image2: image2,
  }
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "https://api.facesoft.io/v1/face/match");
  xhr.setRequestHeader("apikey", apiKey);
  xhr.setRequestHeader("Content-Type", "application/json;charset=UTF-8");
  xhr.onload = function(){
    callback(xhr.response);
  }
  xhr.send(JSON.stringify(params));
}

In this function we make an XMLHttpRequest to the face match API endpoint. We’ve added the API key for authorisation and the content type to the request headers. For the body, we pass in an object containing the two images.

Now we’re done with functions :)

Upload Photo Button Click Event

// On Upload Photo Button Click
document.getElementById('upload').addEventListener('click', function(){
  imageUploadInput.click();
})

This click event listener for the upload button triggers a click for the image input.

Image Upload Input Change Event

// On Uploaded Photo Change
imageUploadInput.addEventListener('change', function(){
  // Get File Extension
  var ext = imageUploadInput.files[0]['name'].substring(imageUploadInput.files[0]['name'].lastIndexOf('.') + 1).toLowerCase();
  // If File Exists & Image
  if (imageUploadInput.files && imageUploadInput.files[0] && (ext == "png" || ext == "jpeg" || ext == "jpg")) {
    // Set Photo To Canvas
    var reader = new FileReader();
    reader.onload = function (e) {
      var img = new Image();
      img.src = e.target.result;
      img.onload = function() {
        setImageToCanvas(img, uploadedPhoto, uploadCanvas, uploadContext);
      }
    }
    reader.readAsDataURL(imageUploadInput.files[0]);
  }
})

In this change event listener, we retrieve the file extension and check that the input actually holds an image file. Then we use a FileReader to load the image onto the canvas.
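For clarity, the same check can be factored into a small helper (a sketch; a production version should also validate the MIME type via files[0].type):

```javascript
// Grab everything after the last dot, lowercase it, and compare it
// against the allowed image extensions.
function isAllowedImage(filename) {
  var ext = filename.substring(filename.lastIndexOf('.') + 1).toLowerCase();
  return ['png', 'jpeg', 'jpg'].indexOf(ext) !== -1;
}

console.log(isAllowedImage('selfie.JPG')); // true
console.log(isAllowedImage('notes.txt')); // false
```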

Take Photo Button Click Event

// On Take Photo Button Click
document.getElementById('capture').addEventListener('click', function(){
  setImageToCanvas(video, capturedPhoto, captureCanvas, captureContext, video.videoWidth, video.videoHeight);
})

This event listener executes the set-image-to-canvas function to capture a still frame from the video and draw it onto a canvas.

Verify Photo Button Click Event

// On Verify Photo Button Click
document.getElementById('verify').addEventListener('click', function(){
  // Remove Results & Alerts
  errorAlert.style.display = "none";
  warningAlert.style.display = "none";
  matchText.innerHTML = "";
  scoreText.innerHTML = "";
  // Get Base64
  var image1 = captureCanvas.toDataURL().split(',')[1];
  var image2 = uploadCanvas.toDataURL().split(',')[1]; 
  // Verify if images are of the same person
  verifyImages(image1, image2, function(response){
    if(response){
      var obj = JSON.parse(response);
      
      // If Warning Message
      if(obj.message){
        errorAlert.style.display = "none";
        warningAlert.style.display = "block";
        warningAlert.innerHTML = obj.message;
        matchText.innerHTML = "";
        scoreText.innerHTML = "";
      }
      // If Error
      else if(obj.error){
        errorAlert.style.display = "block";
        errorAlert.innerHTML = obj.error;
        warningAlert.style.display = "none";
        matchText.innerHTML = "";
        scoreText.innerHTML = "";
      }
      // If Valid
      else{
        errorAlert.style.display = "none";
        warningAlert.style.display = "none";
        matchText.innerHTML = obj.match;
        scoreText.innerHTML = (obj.score*100).toFixed(2)+"% Score";
      }
    }
  })
})

In this event, we first hide the error/warning alerts and remove any text from the match result and score percentage.

We then get the base64 of the images from the canvases and use the split method to keep only the part after “data:image/png;base64,” so the API won’t return an error.
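That prefix-stripping step can be sketched on its own:

```javascript
// canvas.toDataURL() returns "data:image/png;base64,<payload>"; the API
// only wants the raw base64 payload, i.e. everything after the comma.
function dataUrlToBase64(dataUrl) {
  return dataUrl.split(',')[1];
}

console.log(dataUrlToBase64('data:image/png;base64,iVBORw0KGgo=')); // iVBORw0KGgo=
```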

Lastly, we call the verify images function to send the data to the API and our response will be an object either containing the results, an error, or a message.

Final Code

// Set Default Images For Uploaded & Captured Photo
setImageToCanvas(uploadedPhoto, uploadedPhoto, uploadCanvas, uploadContext);
setImageToCanvas(capturedPhoto, capturedPhoto, captureCanvas, captureContext);

This will allow us to verify the default images by assigning them to their canvas.

5. The Rock vs Dwayne Johnson

If you click verify, we shall now see if the API can tell that The Rock at a young age and the Dwayne Johnson we usually see in films are one and the same…

The API correctly identifies the two as the same person with a 96.53% match score!

Upload, Capture, Verify

In Closing

You should now be able to think of better, more secure and more complex ways to implement face matching, such as logging in, 2FA, authorisation, payments, etc.

Accuracy & Reliability?

Results for the best face recognition algorithms in the world were recently published. The algorithm we’re using placed in the top 10, beating Toshiba, Microsoft & VisionLabs, and came 2nd worldwide in the wild-image test (detection at difficult angles).

#javascript #html5

CSS Boss

1606912089

How to create a calculator using JavaScript - Pure JS tutorials | Web Tutorials

In this video I will show you how to create a calculator using JavaScript very easily.

#how to build a simple calculator in javascript #how to create simple calculator using javascript #javascript calculator tutorial #javascript birthday calculator #calculator using javascript and html