Joseph Norton

Build an Emotion-Based Dog Filter in Python 3 using Computer Vision

In this tutorial, you built a face detector and dog filter in Python 3 using computer vision and employed machine learning models to apply masks based on detected emotions.

Introduction

Computer vision is a subfield of computer science that aims to extract a higher-order understanding from images and videos. This field includes tasks such as object detection, image restoration (matrix completion), and optical flow. Computer vision powers technologies such as self-driving car prototypes, employee-less grocery stores, fun Snapchat filters, and your mobile device’s face authenticator.

In this tutorial, you will explore computer vision as you use pre-trained models to build a Snapchat-esque dog filter. For those unfamiliar with Snapchat, this filter will detect your face and then superimpose a dog mask on it. You will then train a face-emotion classifier so that the filter can pick dog masks based on emotion, such as a corgi for happy or a pug for sad. Along the way, you will also explore related concepts in both ordinary least squares and computer vision, which will expose you to the fundamentals of machine learning.

[Image: emotion-based dog filter in action]

As you work through the tutorial, you’ll use OpenCV, a computer-vision library, numpy for linear algebra utilities, and matplotlib for plotting. You’ll also apply the following concepts as you build a computer-vision application:

  • Ordinary least squares as a regression and classification technique.
  • The basics of neural networks trained with stochastic gradient descent.

While not necessary to complete this tutorial, you’ll find it easier to understand some of the more detailed explanations if you’re familiar with these mathematical concepts:

  • Fundamental linear algebra concepts: scalars, vectors, and matrices.
  • Fundamental calculus: how to take a derivative.

You can find the complete code for this tutorial at https://github.com/do-community/emotion-based-dog-filter.

Let’s get started.

Prerequisites

To complete this tutorial, you will need the following:

  • A local Python 3 development environment, including pip for installing packages and venv for creating virtual environments.
  • A working webcam to do real-time image detection.

Step 1 — Creating the Project and Installing Dependencies

Let’s create a workspace for this project and install the dependencies we’ll need. We’ll call our workspace DogFilter:

mkdir ~/DogFilter

Navigate to the DogFilter directory:

cd ~/DogFilter

Then create a new Python virtual environment for the project:

python3 -m venv dogfilter

Activate your environment.

source dogfilter/bin/activate

The prompt changes, indicating the environment is active. Now install PyTorch, a deep-learning framework for Python that we’ll use in this tutorial. The installation process depends on which operating system you’re using.

On macOS, install PyTorch with the following command:

python -m pip install torch==0.4.1 torchvision==0.2.1

On Linux, use the following commands:

pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp35-cp35m-linux_x86_64.whl 
pip install torchvision

And for Windows, install PyTorch with these commands:

pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp35-cp35m-win_amd64.whl 
pip install torchvision

Now install prepackaged binaries for OpenCV and numpy, which are computer vision and linear algebra libraries, respectively. The former offers utilities such as image rotations, and the latter offers linear algebra utilities such as matrix inversion.

python -m pip install opencv-python==3.4.3.18 numpy==1.14.5

Finally, create a directory for our assets, which will hold the images we’ll use in this tutorial:

mkdir assets

With the dependencies installed, let’s build the first version of our filter: a face detector.

Step 2 — Building a Face Detector

Our first objective is to detect all faces in an image. We’ll create a script that accepts a single image and outputs an annotated image with the faces outlined with boxes.

Fortunately, instead of writing our own face detection logic, we can use pre-trained models. We’ll set up a model and then load pre-trained parameters. OpenCV makes this easy by providing both.

OpenCV provides the model parameters in its source code, but we need the absolute path to our locally installed OpenCV to use these parameters. Since that absolute path may vary, we’ll download our own copy instead and place it in the assets folder:

wget -O assets/haarcascade_frontalface_default.xml https://github.com/opencv/opencv/raw/master/data/haarcascades/haarcascade_frontalface_default.xml

The -O option specifies the destination as assets/haarcascade_frontalface_default.xml. The second argument is the source URL.

We’ll detect all faces in the following image from Pexels (CC0, link to original image).

[Image: photo of children used for face detection]

First, download the image. The following command saves the downloaded image as children.png in the assets folder:

wget -O assets/children.png https://assets.digitalocean.com/articles/python3_dogfilter/CfoBWbF.png

To check that the detection algorithm works, we will run it on an individual image and save the resulting annotated image to disk. Create an outputs folder for these annotated results.

mkdir outputs

Now create a Python script for the face detector. Create the file step_2_face_detect.py using nano or your favorite text editor:

nano step_2_face_detect.py

Add the following code to the file. This code imports OpenCV, which contains the image utilities and face classifier. The rest of the code is typical Python program boilerplate.

step_2_face_detect.py

"""Test for face detection"""

import cv2

def main():
    pass

if __name__ == '__main__':
    main()

Now replace pass in the main function with this code which initializes a face classifier using the OpenCV parameters you downloaded to your assets folder:

step_2_face_detect.py

def main():
    # initialize front face classifier
    cascade = cv2.CascadeClassifier("assets/haarcascade_frontalface_default.xml")

Next, add this line to load the image children.png.

step_2_face_detect.py

    frame = cv2.imread('assets/children.png')

Then add this code to convert the image to black and white, as the classifier was trained on black-and-white images. To accomplish this, we convert to grayscale and then discretize the histogram:

step_2_face_detect.py

    # Convert to black-and-white
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blackwhite = cv2.equalizeHist(gray)

Then use OpenCV’s [detectMultiScale](https://docs.opencv.org/2.4/modules/objdetect/doc/cascade_classification.html#cascadeclassifier-detectmultiscale) function to detect all faces in the image.

step_2_face_detect.py

    rects = cascade.detectMultiScale(
        blackwhite, scaleFactor=1.3, minNeighbors=4, minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE)

  • scaleFactor specifies how much the image is reduced along each dimension.
  • minNeighbors denotes how many neighboring rectangles a candidate rectangle needs in order to be retained.
  • minSize is the minimum allowable detected object size. Objects smaller than this are discarded.

The return type is a list of tuples, where each tuple has four numbers denoting the minimum x, minimum y, width, and height of the rectangle in that order.
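
For example, a result for an image with two detected faces might look like this (the numbers here are purely hypothetical):

rects = [(322, 48, 108, 108), (60, 52, 104, 104)]
x, y, w, h = rects[0]  # top-left corner at (322, 48), 108 pixels wide and 108 pixels tall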

Iterate over all detected objects and draw them on the image in green using [cv2.rectangle](https://docs.opencv.org/2.4/modules/core/doc/drawing_functions.html#rectangle):

step_2_face_detect.py

    for x, y, w, h in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

  • The second and third arguments are opposing corners of the rectangle.
  • The fourth argument is the color to use. (0, 255, 0) corresponds to green. (OpenCV stores color channels in BGR order, but green is (0, 255, 0) in either ordering.)
  • The last argument denotes the width of our line.

Finally, write the image with bounding boxes into a new file at outputs/children_detected.png:

step_2_face_detect.py

    cv2.imwrite('outputs/children_detected.png', frame)

Your completed script should look like this:

step_2_face_detect.py

"""Tests face detection for a static image."""  

import cv2  

def main():  

    # initialize front face classifier  
    cascade = cv2.CascadeClassifier(  
        "assets/haarcascade_frontalface_default.xml")  

    frame = cv2.imread('assets/children.png')  

    # Convert to black-and-white  
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  
    blackwhite = cv2.equalizeHist(gray)  

    rects = cascade.detectMultiScale(  
        blackwhite, scaleFactor=1.3, minNeighbors=4, minSize=(30, 30),  
    flags=cv2.CASCADE_SCALE_IMAGE)  

    for x, y, w, h in rects:  
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  

    cv2.imwrite('outputs/children_detected.png', frame)  

if __name__ == '__main__':  
    main()

Save the file and exit your editor. Then run the script:

python step_2_face_detect.py

Open outputs/children_detected.png. You’ll see the following image that shows the faces outlined with boxes:

[Image: children_detected.png with faces outlined by green boxes]

At this point, you have a working face detector. It accepts an image as input and draws bounding boxes around all faces in the image, outputting the annotated image. Now let’s apply this same detection to a live camera feed.

Step 3 — Linking the Camera Feed

The next objective is to link the computer’s camera to the face detector. Instead of detecting faces in a static image, you’ll detect all faces from your computer’s camera. You will collect camera input, detect and annotate all faces, and then display the annotated image back to the user. You’ll continue from the script in Step 2, so start by duplicating that script:

cp step_2_face_detect.py step_3_camera_face_detect.py

Then open the new script in your editor:

nano step_3_camera_face_detect.py

You will update the main function by using some elements from this test script from the official OpenCV documentation. Start by initializing a VideoCapture object that is set to capture live feed from your computer’s camera. Place this at the start of the main function, before the other code in the function:

step_3_camera_face_detect.py

def main():
    cap = cv2.VideoCapture(0)
    ...

Starting from the line defining frame, indent all of your existing code, placing all of the code in a while loop.

step_3_camera_face_detect.py

    while True:
        frame = cv2.imread('assets/children.png')
        ...
        for x, y, w, h in rects:  
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  

        cv2.imwrite('outputs/children_detected.png', frame)

Replace the line defining frame at the start of the while loop. Instead of reading from an image on disk, you’re now reading from the camera:

step_3_camera_face_detect.py

    while True:
        # frame = cv2.imread('assets/children.png') # DELETE ME
        # Capture frame-by-frame
        ret, frame = cap.read()

Replace the line cv2.imwrite(...) at the end of the while loop. Instead of writing an image to disk, you’ll display the annotated image back to the user’s screen:

step_3_camera_face_detect.py

      cv2.imwrite('outputs/children_detected.png', frame)  # DELETE ME
      # Display the resulting frame
      cv2.imshow('frame', frame)

Also, add some code to watch for keyboard input so you can stop the program. Check if the user hits the q character and, if so, quit the application. Right after cv2.imshow(...) add the following:

step_3_camera_face_detect.py

...
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
...

The line cv2.waitKey(1) halts the program for 1 millisecond so that the captured image can be displayed back to the user.

Finally, release the capture and close all windows. Place this outside of the while loop to end the main function.

step_3_camera_face_detect.py

...

    while True:
    ...

    cap.release()
    cv2.destroyAllWindows()

Your script should look like the following:

step_3_camera_face_detect.py

"""Test for face detection on video camera.

Move your face around and a green box will identify your face.
With the test frame in focus, hit `q` to exit.
Note that typing `q` into your terminal will do nothing.
"""

import cv2

def main():
    cap = cv2.VideoCapture(0)

    # initialize front face classifier
    cascade = cv2.CascadeClassifier(
        "assets/haarcascade_frontalface_default.xml")

    while True:
        # Capture frame-by-frame
        ret, frame = cap.read()

        # Convert to black-and-white
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blackwhite = cv2.equalizeHist(gray)

        # Detect faces
        rects = cascade.detectMultiScale(
            blackwhite, scaleFactor=1.3, minNeighbors=4, minSize=(30, 30),
            flags=cv2.CASCADE_SCALE_IMAGE)

        # Add all bounding boxes to the image
        for x, y, w, h in rects:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # Display the resulting frame
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # When everything done, release the capture
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

Save the file and exit your editor.

Now run the test script.

python step_3_camera_face_detect.py

This activates your camera and opens a window displaying your camera’s feed. Your face will be boxed by a green square in real time:

[Image: live camera feed with a green box around the detected face]

Note: If you find that you have to hold very still for things to work, the lighting in the room may not be adequate. Try moving to a brightly lit room where you and your background have high contrast. Also, avoid bright lights near your head. For example, if you have your back to the sun, this process might not work very well.

Our next objective is to take the detected faces and superimpose dog masks on each one.

Step 4 — Building the Dog Filter

Before we build the filter itself, let’s explore how images are represented numerically. This will give you the background needed to modify images and ultimately apply a dog filter.

Let’s look at an example. We can construct a black-and-white image using numbers, where 0 corresponds to black and 1 corresponds to white.

Focus on the dividing line between 1s and 0s. What shape do you see?

0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0
0 0 0 1 1 1 0 0 0
0 0 1 1 1 1 1 0 0
0 0 0 1 1 1 0 0 0
0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0

The image is a diamond. If we save this matrix of values as an image, we get the following picture:

[Image: the matrix above rendered as a diamond]
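
If you want to reproduce this yourself, here is a minimal standalone sketch (not one of the tutorial’s scripts) that builds the same matrix with numpy and writes it out with OpenCV; the output path outputs/diamond.png is just an example:

import cv2
import numpy as np

# 0 is black and 1 is white; scale to 0-255 for an 8-bit image
diamond = np.array([
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
], dtype=np.uint8) * 255

cv2.imwrite('outputs/diamond.png', diamond)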

We can use any value between 0 and 1, such as 0.1, 0.26, or 0.74391. Numbers closer to 0 are darker and numbers closer to 1 are lighter. This allows us to represent white, black, and any shade of gray. This is great news for us because we can now construct any grayscale image using 0, 1, and any value in between. Consider the following, for example. Can you tell what it is? Again, each number corresponds to the color of a pixel.

1  1  1  1  1  1  1  1  1  1  1  1
1  1  1  1  0  0  0  0  1  1  1  1
1  1  0  0 .4 .4 .4 .4  0  0  1  1
1  0 .4 .4 .5 .4 .4 .4 .4 .4  0  1
1  0 .4 .5 .5 .5 .4 .4 .4 .4  0  1
0 .4 .4 .4 .5 .4 .4 .4 .4 .4 .4  0
0 .4 .4 .4 .4  0  0 .4 .4 .4 .4  0
0  0 .4 .4  0  1 .7  0 .4 .4  0  0
0  1  0  0  0 .7 .7  0  0  0  1  0
1  0  1  1  1  0  0 .7 .7 .4  0  1
1  0 .7  1  1  1 .7 .7 .7 .7  0  1
1  1  0  0 .7 .7 .7 .7  0  0  1  1
1  1  1  1  0  0  0  0  1  1  1  1
1  1  1  1  1  1  1  1  1  1  1  1

Re-rendered as an image, you can now tell that this is, in fact, a Poké Ball:

[Image: the matrix above rendered as a Poké Ball]

You’ve now seen how black-and-white and grayscale images are represented numerically. To introduce color, we need a way to encode more information. An image has its height and width expressed as h x w.

[Image: an image’s height and width, h x w]

In the current grayscale representation, each pixel is one value between 0 and 1. We can equivalently say our image has dimensions h x w x 1. In other words, every (x, y) position in our image has just one value.

[Image: a grayscale image with dimensions h x w x 1]

For a color representation, we represent the color of each pixel using three values between 0 and 1. One number corresponds to the “degree of red,” one to the “degree of green,” and the last to the “degree of blue.” We call this the RGB color space. This means that for every (x, y) position in our image, we have three values (r, g, b). As a result, our image is now h x w x 3:

[Image: a color image with dimensions h x w x 3]

Here, each number ranges from 0 to 255 instead of 0 to 1, but the idea is the same. Different combinations of numbers correspond to different colors, such as dark purple (102, 0, 204) or bright orange (255, 153, 51). The takeaways are as follows:

  1. Each image will be represented as a box of numbers that has three dimensions: height, width, and color channels. Manipulating this box of numbers directly is equivalent to manipulating the image.
  2. We can also flatten this box to become just a list of numbers. In this way, our image becomes a vector. Later on, we will refer to images as vectors.
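
As a quick illustration of takeaway 2, here is a minimal numpy sketch (the 48 x 48 x 3 shape is just an example) that flattens an image into a vector and restores it:

import numpy as np

image = np.zeros((48, 48, 3), dtype=np.uint8)  # an h x w x 3 box of numbers
vector = image.reshape(-1)                     # flattened into 48 * 48 * 3 = 6912 values
restored = vector.reshape(48, 48, 3)           # back to the original box
print(vector.shape, restored.shape)            # (6912,) (48, 48, 3)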

Now that you understand how images are represented numerically, you are well-equipped to begin applying dog masks to faces. To apply a dog mask, you will replace values in the child image with non-white dog mask pixels. To start, you will work with a single image. Download this crop of a face from the image you used in Step 2.

wget -O assets/child.png https://assets.digitalocean.com/articles/python3_dogfilter/alXjNK1.png

[Image: cropped child face, child.png]

Additionally, download the following dog mask. The dog masks used in this tutorial are my own drawings, now released to the public domain under a CC0 License.

[Image: dog mask, dog.png]

Download this with wget:

wget -O assets/dog.png https://assets.digitalocean.com/articles/python3_dogfilter/ED32BCs.png

Create a new file called step_4_dog_mask_simple.py which will hold the code for the script that applies the dog mask to faces:

nano step_4_dog_mask_simple.py

Add the following boilerplate for the Python script and import the OpenCV and numpy libraries:

step_4_dog_mask_simple.py

"""Test for adding dog mask"""

import cv2
import numpy as np

def main():
    pass

if __name__ == '__main__':
    main()

Replace pass in the main function with these two lines which load the original image and the dog mask into memory.

step_4_dog_mask_simple.py

...
def main():
    face = cv2.imread('assets/child.png')
    mask = cv2.imread('assets/dog.png')

Next, fit the dog mask to the child. The logic is more complicated than what we’ve done previously, so we will create a new function called apply_mask to modularize our code. Directly after the two lines that load the images, add this line which invokes the apply_mask function:

step_4_dog_mask_simple.py

...
    face_with_mask = apply_mask(face, mask)

Create a new function called apply_mask and place it above the main function:

step_4_dog_mask_simple.py

...
def apply_mask(face: np.array, mask: np.array) -> np.array:
    """Add the mask to the provided face, and return the face with mask."""
    pass

def main():
...

At this point, your file should look like this:

step_4_dog_mask_simple.py

"""Test for adding dog mask"""

import cv2
import numpy as np

def apply_mask(face: np.array, mask: np.array) -> np.array:
    """Add the mask to the provided face, and return the face with mask."""
    pass

def main():
    face = cv2.imread('assets/child.png')
    mask = cv2.imread('assets/dog.png')
    face_with_mask = apply_mask(face, mask)

if __name__ == '__main__':
    main()

Let’s build out the apply_mask function. Our goal is to apply the mask to the child’s face. However, we need to maintain the aspect ratio for our dog mask. To do so, we need to explicitly compute our dog mask’s final dimensions. Inside the apply_mask function, replace pass with these two lines which extract the height and width of both images:

step_4_dog_mask_simple.py

...
    mask_h, mask_w, _ = mask.shape
    face_h, face_w, _ = face.shape

Next, determine which dimension needs to be “shrunk more.” To be precise, we need the tighter of the two constraints. Add this line to the apply_mask function:

step_4_dog_mask_simple.py

...

    # Resize the mask to fit on face
    factor = min(face_h / mask_h, face_w / mask_w)

Then compute the new shape by adding this code to the function:

step_4_dog_mask_simple.py

...
    new_mask_w = int(factor * mask_w)
    new_mask_h = int(factor * mask_h)
    new_mask_shape = (new_mask_w, new_mask_h)

Here we cast the numbers to integers, as the resize function needs integral dimensions.
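
To make this concrete, suppose (hypothetically) the face crop is 100 pixels tall and 150 pixels wide, while the mask is 50 tall and 100 wide. Then factor = min(100 / 50, 150 / 100) = 1.5, so new_mask_h = 75 and new_mask_w = 150: the mask keeps its 2:1 width-to-height ratio and still fits inside the face.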

Now add this code to resize the dog mask to the new shape:

step_4_dog_mask_simple.py

...

    # Add mask to face - ensure mask is centered
    resized_mask = cv2.resize(mask, new_mask_shape)

Finally, write the image to disk so you can double-check that your resized dog mask is correct after you run the script:

step_4_dog_mask_simple.py

    cv2.imwrite('outputs/resized_dog.png', resized_mask)

The completed script should look like this:

step_4_dog_mask_simple.py

"""Test for adding dog mask"""
import cv2
import numpy as np

def apply_mask(face: np.array, mask: np.array) -> np.array:
    """Add the mask to the provided face, and return the face with mask."""
    mask_h, mask_w, _ = mask.shape
    face_h, face_w, _ = face.shape

    # Resize the mask to fit on face
    factor = min(face_h / mask_h, face_w / mask_w)
    new_mask_w = int(factor * mask_w)
    new_mask_h = int(factor * mask_h)
    new_mask_shape = (new_mask_w, new_mask_h)

    # Add mask to face - ensure mask is centered
    resized_mask = cv2.resize(mask, new_mask_shape)
    cv2.imwrite('outputs/resized_dog.png', resized_mask)

def main():
    face = cv2.imread('assets/child.png')
    mask = cv2.imread('assets/dog.png')
    face_with_mask = apply_mask(face, mask)

if __name__ == '__main__':
    main()


Save the file and exit your editor. Run the new script:

python step_4_dog_mask_simple.py

Open the image at outputs/resized_dog.png to double-check the mask was resized correctly. It will match the dog mask shown earlier in this section.

Now add the dog mask to the child. Open the step_4_dog_mask_simple.py file again and return to the apply_mask function:

nano step_4_dog_mask_simple.py

First, remove the line of code that writes the resized mask from the apply_mask function since you no longer need it:

    cv2.imwrite('outputs/resized_dog.png', resized_mask)  # delete this line
    ...

In its place, apply your knowledge of image representation from the start of this section to modify the image. Start by making a copy of the child image. Add this line to the apply_mask function:

step_4_dog_mask_simple.py

...
    face_with_mask = face.copy()

Next, find all positions where the dog mask is not white or near white. To do this, check if the pixel value is less than 250 across all color channels, as we’d expect a near-white pixel to be near [255, 255, 255]. Add this code:

step_4_dog_mask_simple.py

...
    non_white_pixels = (resized_mask < 250).all(axis=2)
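
As a small standalone illustration of this kind of boolean masking (not part of the script), consider an array holding one white pixel and one dark pixel:

import numpy as np

pixels = np.array([[[255, 255, 255], [10, 20, 30]]])  # shape (1, 2, 3)
print((pixels < 250).all(axis=2))                     # [[False  True]]

Only the dark pixel passes the test, which is exactly how non_white_pixels selects the dog-mask pixels to copy.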

At this point, the dog image is, at most, as large as the child image. We want to center the dog image on the face, so compute the offset needed to center the dog image by adding this code to apply_mask:

step_4_dog_mask_simple.py

...
    off_h = int((face_h - new_mask_h) / 2)  
    off_w = int((face_w - new_mask_w) / 2)

Copy all non-white pixels from the dog image into the child image. Since the child image may be larger than the dog image, we need to take a subset of the child image:

step_4_dog_mask_simple.py

    face_with_mask[off_h: off_h+new_mask_h, off_w: off_w+new_mask_w][non_white_pixels] = \
            resized_mask[non_white_pixels]

Then return the result:

step_4_dog_mask_simple.py

    return face_with_mask

In the main function, add this code to write the result of the apply_mask function to an output image so you can manually double-check the result:

step_4_dog_mask_simple.py

...
    face_with_mask = apply_mask(face, mask)
    cv2.imwrite('outputs/child_with_dog_mask.png', face_with_mask)

Your completed script will look like the following:

step_4_dog_mask_simple.py

"""Test for adding dog mask"""

import cv2
import numpy as np

def apply_mask(face: np.array, mask: np.array) -> np.array:
    """Add the mask to the provided face, and return the face with mask."""
    mask_h, mask_w, _ = mask.shape
    face_h, face_w, _ = face.shape

    # Resize the mask to fit on face
    factor = min(face_h / mask_h, face_w / mask_w)
    new_mask_w = int(factor * mask_w)
    new_mask_h = int(factor * mask_h)
    new_mask_shape = (new_mask_w, new_mask_h)
    resized_mask = cv2.resize(mask, new_mask_shape)

    # Add mask to face - ensure mask is centered
    face_with_mask = face.copy()
    non_white_pixels = (resized_mask < 250).all(axis=2)
    off_h = int((face_h - new_mask_h) / 2)  
    off_w = int((face_w - new_mask_w) / 2)
    face_with_mask[off_h: off_h+new_mask_h, off_w: off_w+new_mask_w][non_white_pixels] = \
         resized_mask[non_white_pixels]

    return face_with_mask

def main():
    face = cv2.imread('assets/child.png')
    mask = cv2.imread('assets/dog.png')
    face_with_mask = apply_mask(face, mask)
    cv2.imwrite('outputs/child_with_dog_mask.png', face_with_mask)

if __name__ == '__main__':
    main()

Save the script and run it:

python step_4_dog_mask_simple.py

You’ll have the following picture of a child with a dog mask in outputs/child_with_dog_mask.png:

[Image: child with the dog mask applied]

You now have a utility that applies dog masks to faces. Now let’s use what you’ve built to add the dog mask in real time.

We’ll pick up from where we left off in Step 3. Copy step_3_camera_face_detect.py to step_4_dog_mask.py.

cp step_3_camera_face_detect.py step_4_dog_mask.py

Open your new script.

nano step_4_dog_mask.py

First, import the NumPy library at the top of the script:

step_4_dog_mask.py

import numpy as np
...

Then add the apply_mask function from your previous work into this new file above the main function:

step_4_dog_mask.py

def apply_mask(face: np.array, mask: np.array) -> np.array:
    """Add the mask to the provided face, and return the face with mask."""
    mask_h, mask_w, _ = mask.shape
    face_h, face_w, _ = face.shape

    # Resize the mask to fit on face
    factor = min(face_h / mask_h, face_w / mask_w)
    new_mask_w = int(factor * mask_w)
    new_mask_h = int(factor * mask_h)
    new_mask_shape = (new_mask_w, new_mask_h)
    resized_mask = cv2.resize(mask, new_mask_shape)

    # Add mask to face - ensure mask is centered
    face_with_mask = face.copy()
    non_white_pixels = (resized_mask < 250).all(axis=2)
    off_h = int((face_h - new_mask_h) / 2)  
    off_w = int((face_w - new_mask_w) / 2)
    face_with_mask[off_h: off_h+new_mask_h, off_w: off_w+new_mask_w][non_white_pixels] = \
         resized_mask[non_white_pixels]

    return face_with_mask
...

Second, locate this line in the main function:

step_4_dog_mask.py

    cap = cv2.VideoCapture(0)

Add this code after that line to load the dog mask:

step_4_dog_mask.py

    cap = cv2.VideoCapture(0)

    # load mask
    mask = cv2.imread('assets/dog.png')
    ...

Next, in the while loop, locate this line:

step_4_dog_mask.py

        ret, frame = cap.read()

Add this line after it to extract the image’s height and width:

step_4_dog_mask.py

        ret, frame = cap.read()
        frame_h, frame_w, _ = frame.shape
        ...

Next, delete the line in main that draws bounding boxes. You’ll find this line in the for loop that iterates over detected faces:

step_4_dog_mask.py

        for x, y, w, h in rects:
        ...
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2) # DELETE ME
        ...

In its place, add this code which crops the frame. For aesthetic purposes, we crop an area slightly larger than the face.

step_4_dog_mask.py

        for x, y, w, h in rects:
            # crop a frame slightly larger than the face
            y0, y1 = int(y - 0.25*h), int(y + 0.75*h)
            x0, x1 = x, x + w

Introduce a check in case the detected face is too close to the edge.

step_4_dog_mask.py

            # give up if the cropped frame would be out-of-bounds
            if x0 < 0 or y0 < 0 or x1 > frame_w or y1 > frame_h:
                continue

Finally, insert the face with a mask into the image.

step_4_dog_mask.py

            # apply mask
            frame[y0: y1, x0: x1] = apply_mask(frame[y0: y1, x0: x1], mask)

Verify that your script looks like this:

step_4_dog_mask.py

"""Real-time dog filter

Move your face around and a dog filter will be applied to your face if it is not out-of-bounds. With the test frame in focus, hit `q` to exit. Note that typing `q` into your terminal will do nothing.
"""

import numpy as np
import cv2

def apply_mask(face: np.array, mask: np.array) -> np.array:
    """Add the mask to the provided face, and return the face with mask."""
    mask_h, mask_w, _ = mask.shape
    face_h, face_w, _ = face.shape

    # Resize the mask to fit on face
    factor = min(face_h / mask_h, face_w / mask_w)
    new_mask_w = int(factor * mask_w)
    new_mask_h = int(factor * mask_h)
    new_mask_shape = (new_mask_w, new_mask_h)
    resized_mask = cv2.resize(mask, new_mask_shape)

    # Add mask to face - ensure mask is centered
    face_with_mask = face.copy()
    non_white_pixels = (resized_mask < 250).all(axis=2)
    off_h = int((face_h - new_mask_h) / 2)
    off_w = int((face_w - new_mask_w) / 2)
    face_with_mask[off_h: off_h+new_mask_h, off_w: off_w+new_mask_w][non_white_pixels] = \
         resized_mask[non_white_pixels]

    return face_with_mask

def main():
    cap = cv2.VideoCapture(0)

    # load mask
    mask = cv2.imread('assets/dog.png')

    # initialize front face classifier
    cascade = cv2.CascadeClassifier("assets/haarcascade_frontalface_default.xml")

    while(True):
        # Capture frame-by-frame
        ret, frame = cap.read()
        frame_h, frame_w, _ = frame.shape

        # Convert to black-and-white
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blackwhite = cv2.equalizeHist(gray)

        # Detect faces
        rects = cascade.detectMultiScale(
            blackwhite, scaleFactor=1.3, minNeighbors=4, minSize=(30, 30),
            flags=cv2.CASCADE_SCALE_IMAGE)

        # Add mask to faces
        for x, y, w, h in rects:
            # crop a frame slightly larger than the face
            y0, y1 = int(y - 0.25*h), int(y + 0.75*h)
            x0, x1 = x, x + w

            # give up if the cropped frame would be out-of-bounds
            if x0 < 0 or y0 < 0 or x1 > frame_w or y1 > frame_h:
                continue

            # apply mask
            frame[y0: y1, x0: x1] = apply_mask(frame[y0: y1, x0: x1], mask)

        # Display the resulting frame
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # When everything done, release the capture
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

Save the file and exit your editor. Then run the script.

python step_4_dog_mask.py

You now have a real-time dog filter running. The script will also work with multiple faces in the picture, so you can get your friends together for some automatic dog-ification.

[Image: real-time dog filter applied to multiple faces]

This concludes our first primary objective in this tutorial, which is to create a Snapchat-esque dog filter. Now let’s use facial expression to determine the dog mask applied to a face.

Step 5 — Building a Basic Face-Emotion Classifier Using Least Squares

In this section you’ll create an emotion classifier to apply different masks based on displayed emotions. If you smile, the filter will apply a corgi mask. If you frown, it will apply a pug mask. Along the way, you’ll explore the least-squares framework, which is fundamental to understanding and discussing machine learning concepts.

To understand how to process our data and produce predictions, we’ll first briefly explore machine learning models.

We need to ask two questions for each model that we consider. For now, these two questions will be sufficient to differentiate between models:

  1. Input: What information is the model given?
  2. Output: What is the model trying to predict?

At a high level, the goal is to develop a model for emotion classification. The model is:

  1. Input: given images of faces.
  2. Output: predicts the corresponding emotion.
model: face -> emotion

The approach we’ll use is least squares; we take a set of points, and we find a line of best fit. The line of best fit, shown in the following image, is our model.

[Image: a line of best fit through a set of points]

Consider the input and output for our line:

  1. Input: given x coordinates.
  2. Output: predicts the corresponding y coordinate.
least squares line: x -> y

To use least squares for emotion classification, our input x must represent faces and our output y must represent emotions:

  • x -> face: Instead of using one number for x, we will use a vector of values for x. Thus, x can represent images of faces. The article Ordinary Least Squares explains why you can use a vector of values for x.
  • y -> emotion: Each emotion will correspond to a number. For example, “angry” is 0, “sad” is 1, and “happy” is 2. In this way, y can represent emotions. However, our line is not constrained to output the y values 0, 1, and 2. It has an infinite number of possible y values: it could be 1.2, 3.5, or 10003.42. How do we translate those y values to integers corresponding to classes? See the article One-Hot Encoding for more detail and explanation.

Armed with this background knowledge, you will build a simple least-squares classifier using vectorized images and one-hot encoded labels. You’ll accomplish this in three steps:

  1. Preprocess the data: As explained at the start of this section, our samples are vectors where each vector encodes an image of a face. Our labels are integers corresponding to an emotion, and we’ll apply one-hot encoding to these labels.
  2. Specify and train the model: Use the closed-form least squares solution, w^*.
  3. Run a prediction using the model: Take the argmax of Xw^* to obtain predicted emotions.
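
For reference, the closed-form least-squares solution mentioned in step 2, written for a matrix of samples A and one-hot labels y, is:

w^* = (A^T A)^{-1} A^T y

The script in this step evaluates exactly this expression with numpy.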

Let’s get started.

First, set up a directory to contain the data:

mkdir data

Then download the data, curated by Pierre-Luc Carrier and Aaron Courville, from a 2013 Face Emotion Classification competition on Kaggle.

wget -O data/fer2013.tar https://bitbucket.org/alvinwan/adversarial-examples-in-computer-vision-building-then-fooling/raw/babfe4651f89a398c4b3fdbdd6d7a697c5104cff/fer2013.tar

Navigate to the data directory and unpack the data.

cd data
tar -xzf fer2013.tar

Now we’ll create a script to run the least-squares model. Navigate to the root of your project:

cd ~/DogFilter

Create a new file for the script:

nano step_5_ls_simple.py

Add Python boilerplate and import the packages you will need:

step_5_ls_simple.py

"""Train emotion classifier using least squares."""

import numpy as np

def main():
    pass

if __name__ == '__main__':
    main()

Next, load the data into memory. Replace pass in your main function with the following code:

step_5_ls_simple.py

    # load data
    with np.load('data/fer2013_train.npz') as data:
        X_train, Y_train = data['X'], data['Y']

    with np.load('data/fer2013_test.npz') as data:
        X_test, Y_test = data['X'], data['Y']

Now one-hot encode the labels. To do this, construct the identity matrix with numpy and then index into this matrix using our list of labels:

step_5_ls_simple.py

    # one-hot labels
    I = np.eye(6)
    Y_oh_train, Y_oh_test = I[Y_train], I[Y_test]

Here, we use the fact that the i-th row in the identity matrix is all zero, except for the i-th entry. Thus, the i-th row is the one-hot encoding for the label of class i. Additionally, we use numpy’s advanced indexing, where [a, b, c, d][[1, 3]] = [b, d].
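
As a quick standalone illustration (not part of step_5_ls_simple.py), here is that indexing trick with three classes:

import numpy as np

I = np.eye(3)
labels = np.array([2, 0, 1])
print(I[labels])
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]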

Computing (X^TX)^{-1} would take too long on commodity hardware, as X^TX is a 2304x2304 matrix with over four million values, so we’ll reduce this time by selecting only the first 100 features. Add this code:

step_5_ls_simple.py

...
    # select first 100 dimensions
    A_train, A_test = X_train[:, :100], X_test[:, :100]

Next, add this code to evaluate the closed-form least-squares solution:

step_5_ls_simple.py

...
    # train model
    w = np.linalg.inv(A_train.T.dot(A_train)).dot(A_train.T.dot(Y_oh_train))

Then define an evaluation function for training and validation sets. Place this before your main function:

step_5_ls_simple.py

def evaluate(A, Y, w):
    Yhat = np.argmax(A.dot(w), axis=1)
    return np.sum(Yhat == Y) / Y.shape[0]

To estimate labels, we compute A.dot(w) to get a score per class for each sample, then take the index of the maximum score using np.argmax. Then we compute the fraction of correct classifications. This final number is your accuracy.

Finally, add this code to the end of the main function to compute the training and validation accuracy using the evaluate function you just wrote:

step_5_ls_simple.py

    # evaluate model
    ols_train_accuracy = evaluate(A_train, Y_train, w)
    print('(ols) Train Accuracy:', ols_train_accuracy)
    ols_test_accuracy = evaluate(A_test, Y_test, w)
    print('(ols) Test Accuracy:', ols_test_accuracy)

Double-check that your script matches the following:

step_5_ls_simple.py

"""Train emotion classifier using least squares."""

import numpy as np

def evaluate(A, Y, w):
    Yhat = np.argmax(A.dot(w), axis=1)
    return np.sum(Yhat == Y) / Y.shape[0]

def main():

    # load data
    with np.load('data/fer2013_train.npz') as data:
        X_train, Y_train = data['X'], data['Y']

    with np.load('data/fer2013_test.npz') as data:
        X_test, Y_test = data['X'], data['Y']

    # one-hot labels
    I = np.eye(6)
    Y_oh_train, Y_oh_test = I[Y_train], I[Y_test]

    # select first 100 dimensions
    A_train, A_test = X_train[:, :100], X_test[:, :100]

    # train model
    w = np.linalg.inv(A_train.T.dot(A_train)).dot(A_train.T.dot(Y_oh_train))

    # evaluate model
    ols_train_accuracy = evaluate(A_train, Y_train, w)
    print('(ols) Train Accuracy:', ols_train_accuracy)
    ols_test_accuracy = evaluate(A_test, Y_test, w)
    print('(ols) Test Accuracy:', ols_test_accuracy)

if __name__ == '__main__':
    main()

Save your file, exit your editor, and run the Python script.

python step_5_ls_simple.py

You’ll see the following output:

Output
(ols) Train Accuracy: 0.4748918316507146
(ols) Test Accuracy: 0.45280545359202934

Our model gives 47.5% train accuracy. We repeat this on the validation set to obtain 45.3% accuracy. For a three-way classification problem, 45.3% is reasonably above guessing, which is 33%. This is our starting classifier for emotion detection, and in the next step, you’ll build off of this least-squares model to improve accuracy. The higher the accuracy, the more reliably your emotion-based dog filter can pick the appropriate dog mask for each detected emotion.

Step 6 — Improving Accuracy by Featurizing the Inputs

We can use a more expressive model to boost accuracy. To accomplish this, we featurize our inputs.

The original image tells us that position (0, 0) is red, (1, 0) is brown, and so on. A featurized image may tell us that there is a dog to the top-left of the image, a person in the middle, etc. Featurization is powerful, but its precise definition is beyond the scope of this tutorial.

We’ll use an approximation for the radial basis function (RBF) kernel, using a random Gaussian matrix. We won’t go into detail in this tutorial. Instead, we’ll treat this as a black box that computes higher-order features for us.
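
Concretely, the black box used in this step is a random linear projection: we draw a matrix W whose entries come from a standard normal distribution and map the samples X to a new design matrix

A = XW

where X has one 2304-dimensional sample per row and W is 2304 x d, so each sample ends up with d new features.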

We’ll continue where we left off in the previous step. Copy the previous script so you have a good starting point:

cp step_5_ls_simple.py step_6_ls_simple.py

Open the new file in your editor:

nano step_6_ls_simple.py

We’ll start by creating the featurizing random matrix. Again, we’ll use only 100 features in our new feature space.

Locate the following line, defining A_train and A_test:

step_6_ls_simple.py

    # select first 100 dimensions
    A_train, A_test = X_train[:, :100], X_test[:, :100]

Directly above this definition for A_train and A_test, add a random feature matrix:

step_6_ls_simple.py

    d = 100
    W = np.random.normal(size=(X_train.shape[1], d))
    # select first 100 dimensions
    A_train, A_test = X_train[:, :100], X_test[:, :100]
    ...

Then replace the definitions for A_train and A_test, along with the comment above them. We redefine our matrices, called design matrices, using this random featurization:

step_6_ls_simple.py

    # featurize data with random projection
    A_train, A_test = X_train.dot(W), X_test.dot(W)

Save your file and run the script.

python step_6_ls_simple.py

You’ll see the following output:

Output
(ols) Train Accuracy: 0.584174642717
(ols) Test Accuracy: 0.584425799685

This featurization now offers 58.4% train accuracy and 58.4% validation accuracy, a 13.1% improvement in validation results. Here we projected the inputs into a 100-dimensional feature space, but the choice of 100 was arbitrary; the feature space could just as well have 1,000 dimensions or 50. Say the dimension of this feature space is d. We can test more values of d by re-featurizing X with a d-dimensional projection and recomputing a new model.
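
One way to run that sweep, as a sketch that assumes it sits inside main after the one-hot encoding (reusing the names already defined in the script), is:

    for d in (50, 100, 500, 1000):
        W = np.random.normal(size=(X_train.shape[1], d))
        A_train, A_test = X_train.dot(W), X_test.dot(W)
        w = np.linalg.inv(A_train.T.dot(A_train)).dot(A_train.T.dot(Y_oh_train))
        print(d, evaluate(A_train, Y_train, w), evaluate(A_test, Y_test, w))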

Trying more values of d, we find an additional 4.3% improvement in test accuracy to 61.7%. In the following figure, we consider the performance of our new classifier as we vary d. Intuitively, as d increases, the accuracy should also increase, as we use more and more of our original data. Rather than paint a rosy picture, however, the graph exhibits a negative trend:

[Image: plot of train and validation accuracy as d varies]

As we keep more of our data, the gap between the training and validation accuracies increases as well. This is clear evidence of overfitting, where our model is learning representations that are no longer generalizable to all data. To combat overfitting, we’ll regularize our model by penalizing complex models.

We amend our ordinary least-squares objective function with a regularization term, giving us a new objective. Our new objective function is called ridge regression and it looks like this:

min_w ||Aw - y||^2 + lambda ||w||^2

In this equation, lambda is a tunable hyperparameter. Plug lambda = 0 into the equation and ridge regression becomes least-squares. Plug lambda = infinity into the equation, and you’ll find the best w must now be zero, as any non-zero w incurs infinite loss. As it turns out, this objective yields a closed-form solution as well:

w^* = (A^T A + lambda I)^{-1} A^T y

Still using the featurized samples, retrain and reevaluate the model once more.

Open step_6_ls_simple.py again in your editor:

nano step_6_ls_simple.py

This time, increase the dimensionality of the new feature space to d=1000. Change the value of d from 100 to 1000 as shown in the following code block:

step_6_ls_simple.py

...
    d = 1000
    W = np.random.normal(size=(X_train.shape[1], d))
...

Then apply ridge regression using a regularization of lambda = 10^{10}. Replace the line defining w with the following two lines:

step_6_ls_simple.py

...
    # train model
    I = np.eye(A_train.shape[1])
    w = np.linalg.inv(A_train.T.dot(A_train) + 1e10 * I).dot(A_train.T.dot(Y_oh_train))

Then locate this block:

step_6_ls_simple.py

...
  ols_train_accuracy = evaluate(A_train, Y_train, w)
  print('(ols) Train Accuracy:', ols_train_accuracy)
  ols_test_accuracy = evaluate(A_test, Y_test, w)
  print('(ols) Test Accuracy:', ols_test_accuracy)

Replace it with the following:

step_6_ls_simple.py

...

  print('(ridge) Train Accuracy:', evaluate(A_train, Y_train, w))
  print('(ridge) Test Accuracy:', evaluate(A_test, Y_test, w))

The completed script should look like this:

step_6_ls_simple.py

"""Train emotion classifier using least squares."""

import numpy as np

def evaluate(A, Y, w):
    Yhat = np.argmax(A.dot(w), axis=1)
    return np.sum(Yhat == Y) / Y.shape[0]

def main():
    # load data
    with np.load('data/fer2013_train.npz') as data:
        X_train, Y_train = data['X'], data['Y']

    with np.load('data/fer2013_test.npz') as data:
        X_test, Y_test = data['X'], data['Y']

    # one-hot labels
    I = np.eye(6)
    Y_oh_train, Y_oh_test = I[Y_train], I[Y_test]
    d = 1000
    W = np.random.normal(size=(X_train.shape[1], d))
    # featurize data with random projection
    A_train, A_test = X_train.dot(W), X_test.dot(W)

    # train model
    I = np.eye(A_train.shape[1])
    w = np.linalg.inv(A_train.T.dot(A_train) + 1e10 * I).dot(A_train.T.dot(Y_oh_train))

    # evaluate model
    print('(ridge) Train Accuracy:', evaluate(A_train, Y_train, w))
    print('(ridge) Test Accuracy:', evaluate(A_test, Y_test, w))

if __name__ == '__main__':
    main()

Save the file, exit your editor, and run the script:

python step_6_ls_simple.py

You’ll see the following output:

Output
(ridge) Train Accuracy: 0.651173462698
(ridge) Test Accuracy: 0.622181436812

There’s an additional improvement of 0.4% in validation accuracy to 62.2%, as train accuracy drops to 65.1%. Once again reevaluating across a number of different d, we see a smaller gap between training and validation accuracies for ridge regression. In other words, ridge regression was subject to less overfitting.

[Image: plot of train and validation accuracy for ridge regression as d varies]

With these extra enhancements, the least-squares baseline performs reasonably well. The training and inference times, all together, take no more than 20 seconds, even for the best results. In the next section, you’ll explore even more complex models.

Step 7 — Building the Face-Emotion Classifier Using a Convolutional Neural Network in PyTorch

In this section, you’ll build a second emotion classifier using neural networks instead of least squares. Again, our goal is to produce a model that accepts faces as input and outputs an emotion. Eventually, this classifier will then determine which dog mask to apply.

For a brief neural network visualization and introduction, see the article Understanding Neural Networks. Here, we will use a deep-learning library called PyTorch. There are a number of deep-learning libraries in widespread use, and each has various pros and cons. PyTorch is a particularly good place to start. To implement this neural network classifier, we again take three steps, as we did with the least-squares classifier:

  1. Preprocess the data: Apply one-hot encoding and then apply PyTorch abstractions.
  2. Specify and train the model: Set up a neural network using PyTorch layers. Define optimization hyperparameters and run stochastic gradient descent.
  3. Run a prediction using the model: Evaluate the neural network.

Create a new file named step_7_fer_simple.py:

nano step_7_fer_simple.py

Import the necessary utilities and create a Python class that will hold your data. For data processing here, you will create the train and test datasets. To do this, implement PyTorch’s Dataset interface, which lets you load and use PyTorch’s built-in data pipeline for the face-emotion recognition dataset:

step_7_fer_simple.py

from torch.utils.data import Dataset
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import torch
import cv2
import argparse

class Fer2013Dataset(Dataset):
    """Face Emotion Recognition dataset.

    Utility for loading FER into PyTorch. Dataset curated by Pierre-Luc Carrier
    and Aaron Courville in 2013.

    Each sample is 1 x 1 x 48 x 48, and each label is a scalar.
    """
    pass

Delete the pass placeholder in the Fer2013Dataset class. In its place, add a function that will initialize our data holder:

step_7_fer_simple.py

    def __init__(self, path: str):
        """
        Args:
            path: Path to `.np` file containing sample nxd and label nx1
        """
        with np.load(path) as data:
            self._samples = data['X']
            self._labels = data['Y']
        self._samples = self._samples.reshape((-1, 1, 48, 48))

        self.X = Variable(torch.from_numpy(self._samples)).float()
        self.Y = Variable(torch.from_numpy(self._labels)).float()
...

This function starts by loading the samples and labels. Then it wraps the data in PyTorch data structures.

Directly after the __init__ function, add a __len__ function, as this is needed to implement the Dataset interface PyTorch expects:

step_7_fer_simple.py

...
    def __len__(self):
        return len(self._labels)

Finally, add a __getitem__ method, which returns a dictionary containing the sample and the label:

step_7_fer_simple.py

    def __getitem__(self, idx):
        return {'image': self._samples[idx], 'label': self._labels[idx]}

Double-check that your file looks like the following:

step_7_fer_simple.py

from torch.utils.data import Dataset
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import torch
import cv2
import argparse

class Fer2013Dataset(Dataset):
    """Face Emotion Recognition dataset.
    Utility for loading FER into PyTorch. Dataset curated by Pierre-Luc Carrier
    and Aaron Courville in 2013.
    Each sample is 1 x 1 x 48 x 48, and each label is a scalar.
    """

    def __init__(self, path: str):
        """
        Args:
            path: Path to `.np` file containing sample nxd and label nx1
        """
        with np.load(path) as data:
            self._samples = data['X']
            self._labels = data['Y']
        self._samples = self._samples.reshape((-1, 1, 48, 48))

        self.X = Variable(torch.from_numpy(self._samples)).float()
        self.Y = Variable(torch.from_numpy(self._labels)).float()

    def __len__(self):
        return len(self._labels)

    def __getitem__(self, idx):
        return {'image': self._samples[idx], 'label': self._labels[idx]}

Next, load the Fer2013Dataset dataset. Add the following code to the end of your file after the Fer2013Dataset class:

step_7_fer_simple.py

trainset = Fer2013Dataset('data/fer2013_train.npz')
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True)

testset = Fer2013Dataset('data/fer2013_test.npz')
testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)

This code initializes the dataset using the Fer2013Dataset class you created. Then for the train and validation sets, it wraps the dataset in a DataLoader. This translates the dataset into an iterable to use later.

As a sanity check, verify that the dataset utilities are functioning. Create a sample dataset loader using DataLoader and print the first element of that loader. Add the following to the end of your file:

step_7_fer_simple.py

if __name__ == '__main__':
    loader = torch.utils.data.DataLoader(trainset, batch_size=2, shuffle=False)
    print(next(iter(loader)))

Verify that your completed script looks like this:

step_7_fer_simple.py

from torch.utils.data import Dataset
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import torch
import cv2
import argparse

class Fer2013Dataset(Dataset):
    """Face Emotion Recognition dataset.
    Utility for loading FER into PyTorch. Dataset curated by Pierre-Luc Carrier
    and Aaron Courville in 2013.
    Each sample is 1 x 1 x 48 x 48, and each label is a scalar.
    """

    def __init__(self, path: str):
        """
        Args:
            path: Path to `.np` file containing sample nxd and label nx1
        """
        with np.load(path) as data:
            self._samples = data['X']
            self._labels = data['Y']
        self._samples = self._samples.reshape((-1, 1, 48, 48))

        self.X = Variable(torch.from_numpy(self._samples)).float()
        self.Y = Variable(torch.from_numpy(self._labels)).float()

    def __len__(self):
        return len(self._labels)

    def __getitem__(self, idx):
        return {'image': self._samples[idx], 'label': self._labels[idx]}

trainset = Fer2013Dataset('data/fer2013_train.npz')
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True)

testset = Fer2013Dataset('data/fer2013_test.npz')
testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)

if __name__ == '__main__':
    loader = torch.utils.data.DataLoader(trainset, batch_size=2, shuffle=False)
    print(next(iter(loader)))

Exit your editor and run the script.

python step_7_fer_simple.py

This outputs the following pair of tensors, holding two samples and two labels. It indicates that our data pipeline is up and ready to go:

Output
{'image':
(0 ,0 ,.,.) =
   24   32   36  ...   173  172  173
   25   34   29  ...   173  172  173
   26   29   25  ...   172  172  174
      ...         ⋱        ...
  159  185  157  ...   157  156  153
  136  157  187  ...   152  152  150
  145  130  161  ...   142  143  142
     ⋮

(1 ,0 ,.,.) =
   20   17   19  ...   187  176  162
   22   17   17  ...   195  180  171
   17   17   18  ...   203  193  175
      ...         ⋱        ...
    1    1    1  ...   106  115  119
    2    2    1  ...   103  111  119
    2    2    2  ...    99  107  118
[torch.LongTensor of size 2x1x48x48]
, 'label':
 1
 1
[torch.LongTensor of size 2]
}

Now that you’ve verified that the data pipeline works, return to step_7_fer_simple.py to add the neural network and optimizer. Open step_7_fer_simple.py.

nano step_7_fer_simple.py

First, delete the last three lines you added in the previous iteration:

step_7_fer_simple.py

# Delete all three lines
if __name__ == '__main__':
    loader = torch.utils.data.DataLoader(trainset, batch_size=2, shuffle=False)
    print(next(iter(loader)))

In their place, define a PyTorch neural network that includes three convolutional layers, followed by three fully connected layers. Add this to the end of your existing script:

step_7_fer_simple.py

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 6, 3)
        self.conv3 = nn.Conv2d(6, 16, 3)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 48)
        self.fc3 = nn.Linear(48, 3)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

Now initialize the neural network, define a loss function, and define optimization hyperparameters by adding the following code to the end of the script:

step_7_fer_simple.py

net = Net().float()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

We’ll train for two epochs. For now, we define an epoch to be an iteration of training where every training sample has been used exactly once.
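
Since the trainloader above uses batch_size=32, one epoch works out to roughly len(trainset) / 32 passes through the inner loop that follows.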

First, extract image and label from the dataset loader and then wrap each in a PyTorch Variable. Second, run the forward pass and then backpropagate through the loss and neural network. Add the following code to the end of your script to do that:

step_7_fer_simple.py

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs = Variable(data['image'].float())
        labels = Variable(data['label'].long())
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 100 == 0:
            print('[%d, %5d] loss: %.3f' % (epoch, i, running_loss / (i + 1)))

Your script should now look like this:

step_7_fer_simple.py

from torch.utils.data import Dataset
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import torch
import cv2
import argparse

class Fer2013Dataset(Dataset):
    """Face Emotion Recognition dataset.

    Utility for loading FER into PyTorch. Dataset curated by Pierre-Luc Carrier
    and Aaron Courville in 2013.

    Each sample is 1 x 1 x 48 x 48, and each label is a scalar.
    """
    def __init__(self, path: str):
        """
        Args:
            path: Path to `.np` file containing sample nxd and label nx1
        """
        with np.load(path) as data:
            self._samples = data['X']
            self._labels = data['Y']
        self._samples = self._samples.reshape((-1, 1, 48, 48))

        self.X = Variable(torch.from_numpy(self._samples)).float()
        self.Y = Variable(torch.from_numpy(self._labels)).float()

    def __len__(self):
        return len(self._labels)

    def __getitem__(self, idx):
        return {'image': self._samples[idx], 'label': self._labels[idx]}

trainset = Fer2013Dataset('data/fer2013_train.npz')
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True)

testset = Fer2013Dataset('data/fer2013_test.npz')
testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 6, 3)
        self.conv3 = nn.Conv2d(6, 16, 3)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 48)
        self.fc3 = nn.Linear(48, 3)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net().float()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs = Variable(data['image'].float())
        labels = Variable(data['label'].long())
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 100 == 0:
            print('[%d, %5d] loss: %.3f' % (epoch, i, running_loss / (i + 1)))

Save the file and exit the editor once you’ve verified your code. Then, launch this proof-of-concept training:

python step_7_fer_simple.py

You’ll see output similar to the following as the neural network trains:

Output
[0,     0] loss: 1.094
[0,   100] loss: 1.049
[0,   200] loss: 1.009
[0,   300] loss: 0.963
[0,   400] loss: 0.935
[1,     0] loss: 0.760
[1,   100] loss: 0.768
[1,   200] loss: 0.775
[1,   300] loss: 0.776
[1,   400] loss: 0.767

You can then augment this script using a number of other PyTorch utilities to save and load models, report training and validation accuracies, fine-tune a learning-rate schedule, and so on. After training for 20 epochs with a learning rate of 0.01 and momentum of 0.9, our neural network attains an 87.9% train accuracy and a 75.5% validation accuracy, a further 8.9-point improvement over the most successful least-squares approach thus far at 66.6%. We’ll include these additional bells and whistles in a new script.
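
For reference, saving and restoring weights uses torch.save and torch.load. The pretrained checkpoint used below stores its weights under a 'state_dict' key (you can see this in how we load it later in this step), so a compatible save and load might look like this sketch:

torch.save({'state_dict': net.state_dict()}, 'assets/model_best.pth')  # write a checkpoint

checkpoint = torch.load('assets/model_best.pth')                       # ...and restore it later
net.load_state_dict(checkpoint['state_dict'])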

Create a new file to hold the final face-emotion detector, which your live camera feed will use. This script contains the code above along with an easily importable version of our code that will be used later. It also contains hyperparameters that were tuned in advance for a higher-accuracy model.

nano step_7_fer.py

Start with the following imports. These match the imports from our previous file:

step_7_fer.py

from torch.utils.data import Dataset
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import torch
import cv2
import argparse

Directly beneath these imports, reuse your code from step_7_fer_simple.py to define the neural network:

step_7_fer.py

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 6, 3)
        self.conv3 = nn.Conv2d(6, 16, 3)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 48)
        self.fc3 = nn.Linear(48, 3)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

Again, reuse the code for the Face Emotion Recognition dataset from step_7_fer_simple.py and add it to this file:

step_7_fer.py

class Fer2013Dataset(Dataset):
    """Face Emotion Recognition dataset.
    Utility for loading FER into PyTorch. Dataset curated by Pierre-Luc Carrier
    and Aaron Courville in 2013.
    Each sample is 1 x 1 x 48 x 48, and each label is a scalar.
    """

    def __init__(self, path: str):
        """
        Args:
            path: Path to `.npz` file containing samples (n x d) and labels (n x 1)
        """
        with np.load(path) as data:
            self._samples = data['X']
            self._labels = data['Y']
        self._samples = self._samples.reshape((-1, 1, 48, 48))

        self.X = Variable(torch.from_numpy(self._samples)).float()
        self.Y = Variable(torch.from_numpy(self._labels)).float()

    def __len__(self):
        return len(self._labels)

    def __getitem__(self, idx):
        return {'image': self._samples[idx], 'label': self._labels[idx]}

Next, define a few utilities to evaluate the neural network’s performance. First, add an evaluate function which compares the neural network’s predicted emotions to the true emotions for a batch of images:

step_7_fer.py

def evaluate(outputs: Variable, labels: Variable, normalized: bool=True) -> float:
    """Evaluate neural network outputs against non-one-hotted labels."""
    Y = labels.data.numpy()
    Yhat = np.argmax(outputs.data.numpy(), axis=1)
    denom = Y.shape[0] if normalized else 1
    return float(np.sum(Yhat == Y) / denom)

Then add a function called batch_evaluate, which applies evaluate to the entire dataset in manageable batches:

step_7_fer.py

def batch_evaluate(net: Net, dataset: Dataset, batch_size: int=500) -> float:
    """Evaluate neural network in batches, if dataset is too large."""
    score = 0.0
    n = dataset.X.shape[0]
    for i in range(0, n, batch_size):
        x = dataset.X[i: i + batch_size]
        y = dataset.Y[i: i + batch_size]
        score += evaluate(net(x), y, False)
    return score / n
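
To see how these two functions fit together: with normalized=True, evaluate returns the fraction of correct predictions, and with normalized=False it returns the raw count, which batch_evaluate sums across batches before dividing by the dataset size. Here is a tiny illustration with made-up numbers; it assumes evaluate from above is defined in your Python session and is not part of the script:

import torch
from torch.autograd import Variable

outputs = Variable(torch.Tensor([[0.9, 0.1, 0.0],    # predicted class 0
                                 [0.2, 0.7, 0.1],    # predicted class 1
                                 [0.1, 0.1, 0.8]]))  # predicted class 2
labels = Variable(torch.Tensor([0, 1, 0]))           # the third prediction is wrong
print(evaluate(outputs, labels))                     # 0.666... (fraction correct)
print(evaluate(outputs, labels, normalized=False))   # 2.0 (count of correct predictions)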

Now, define a function called get_image_to_emotion_predictor that takes in an image and outputs a predicted emotion, using a pretrained model:

step_7_fer.py

def get_image_to_emotion_predictor(model_path='assets/model_best.pth'):
    """Returns predictor, from image to emotion index."""
    net = Net().float()
    pretrained_model = torch.load(model_path)
    net.load_state_dict(pretrained_model['state_dict'])

    def predictor(image: np.array):
        """Translates images into emotion indices."""
        if image.shape[2] > 1:
            image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        frame = cv2.resize(image, (48, 48)).reshape((1, 1, 48, 48))
        X = Variable(torch.from_numpy(frame)).float()
        return np.argmax(net(X).data.numpy(), axis=1)[0]
    return predictor
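
Once step_7_fer.py is complete and you’ve downloaded the pretrained model later in this step, you can also call the predictor on a single photo from a Python shell. The image path here is only an example; any face photo on disk will do:

from step_7_fer import get_image_to_emotion_predictor
import cv2

predictor = get_image_to_emotion_predictor()   # loads assets/model_best.pth
image = cv2.imread('assets/child.png')         # example path -- use any face photo
print(predictor(image))                        # 0, 1, or 2: happy, sad, or surprised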

Finally, add the following code to define a main function that leverages these utilities:

step_7_fer.py

def main():
    trainset = Fer2013Dataset('data/fer2013_train.npz')
    testset = Fer2013Dataset('data/fer2013_test.npz')
    net = Net().float()

    pretrained_model = torch.load("assets/model_best.pth")
    net.load_state_dict(pretrained_model['state_dict'])

    train_acc = batch_evaluate(net, trainset, batch_size=500)
    print('Training accuracy: %.3f' % train_acc)
    test_acc = batch_evaluate(net, testset, batch_size=500)
    print('Validation accuracy: %.3f' % test_acc)

if __name__ == '__main__':
    main()

This loads a pretrained neural network and evaluates its performance on the provided Face Emotion Recognition dataset. Specifically, the script outputs accuracy on the images we used for training, as well as a separate set of images we put aside for testing purposes.

Double-check that your file matches the following:

step_7_fer.py

from torch.utils.data import Dataset
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import torch
import cv2
import argparse

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 6, 3)
        self.conv3 = nn.Conv2d(6, 16, 3)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 48)
        self.fc3 = nn.Linear(48, 3)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

class Fer2013Dataset(Dataset):
    """Face Emotion Recognition dataset.
    Utility for loading FER into PyTorch. Dataset curated by Pierre-Luc Carrier
    and Aaron Courville in 2013.
    Each sample is 1 x 1 x 48 x 48, and each label is a scalar.
    """

    def __init__(self, path: str):
        """
        Args:
            path: Path to `.npz` file containing samples (n x d) and labels (n x 1)
        """
        with np.load(path) as data:
            self._samples = data['X']
            self._labels = data['Y']
        self._samples = self._samples.reshape((-1, 1, 48, 48))

        self.X = Variable(torch.from_numpy(self._samples)).float()
        self.Y = Variable(torch.from_numpy(self._labels)).float()

    def __len__(self):
        return len(self._labels)

    def __getitem__(self, idx):
        return {'image': self._samples[idx], 'label': self._labels[idx]}

def evaluate(outputs: Variable, labels: Variable, normalized: bool=True) -> float:
    """Evaluate neural network outputs against non-one-hotted labels."""
    Y = labels.data.numpy()
    Yhat = np.argmax(outputs.data.numpy(), axis=1)
    denom = Y.shape[0] if normalized else 1
    return float(np.sum(Yhat == Y) / denom)

def batch_evaluate(net: Net, dataset: Dataset, batch_size: int=500) -> float:
    """Evaluate neural network in batches, if dataset is too large."""
    score = 0.0
    n = dataset.X.shape[0]
    for i in range(0, n, batch_size):
        x = dataset.X[i: i + batch_size]
        y = dataset.Y[i: i + batch_size]
        score += evaluate(net(x), y, False)
    return score / n

def get_image_to_emotion_predictor(model_path='assets/model_best.pth'):
    """Returns predictor, from image to emotion index."""
    net = Net().float()
    pretrained_model = torch.load(model_path)
    net.load_state_dict(pretrained_model['state_dict'])

    def predictor(image: np.array):
        """Translates images into emotion indices."""
        if image.shape[2] > 1:
            image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        frame = cv2.resize(image, (48, 48)).reshape((1, 1, 48, 48))
        X = Variable(torch.from_numpy(frame)).float()
        return np.argmax(net(X).data.numpy(), axis=1)[0]
    return predictor

def main():
    trainset = Fer2013Dataset('data/fer2013_train.npz')
    testset = Fer2013Dataset('data/fer2013_test.npz')
    net = Net().float()

    pretrained_model = torch.load("assets/model_best.pth")
    net.load_state_dict(pretrained_model['state_dict'])

    train_acc = batch_evaluate(net, trainset, batch_size=500)
    print('Training accuracy: %.3f' % train_acc)
    test_acc = batch_evaluate(net, testset, batch_size=500)
    print('Validation accuracy: %.3f' % test_acc)

if __name__ == '__main__':
    main()

Save the file and exit your editor.

As you did before with the face detector, download the pre-trained model parameters and save them to your assets folder with the following command:

wget -O assets/model_best.pth https://github.com/alvinwan/emotion-based-dog-filter/raw/master/src/assets/model_best.pth

Run the script to use and evaluate the pre-trained model:

python step_7_fer.py

This will output the following:

Output
Training accuracy: 0.879
Validation accuracy: 0.755

At this point, you’ve built a pretty accurate face-emotion classifier. In essence, our model can correctly disambiguate between faces that are happy, sad, and surprised roughly three out of four times on images it hasn’t seen before. This is a reasonably good model, so you can now move on to using this face-emotion classifier to determine which dog mask to apply to faces.

Step 8 — Finishing the Emotion-Based Dog Filter

Before integrating our brand-new face-emotion classifier, we will need two more dog masks to pick from: a Dalmatian mask and a Sheepdog mask.


Execute these commands to download both masks to your assets folder:

wget -O assets/dalmation.png https://assets.digitalocean.com/articles/python3_dogfilter/E9ax7PI.png  # dalmation
wget -O assets/sheepdog.png https://assets.digitalocean.com/articles/python3_dogfilter/HveFdkg.png  # sheepdog

Now let’s use the masks in our filter. Start by duplicating the step_4_dog_mask.py file:

cp step_4_dog_mask.py step_8_dog_emotion_mask.py

Open the new Python script.

nano step_8_dog_emotion_mask.py

Insert a new line at the top of the script to import the emotion predictor:

step_8_dog_emotion_mask.py

from step_7_fer import get_image_to_emotion_predictor
...

Then, in the main() function, locate this line:

step_8_dog_emotion_mask.py

    mask = cv2.imread('assets/dog.png')

Replace it with the following to load the new masks and aggregate all masks into a tuple:

step_8_dog_emotion_mask.py

    mask0 = cv2.imread('assets/dog.png')
    mask1 = cv2.imread('assets/dalmation.png')
    mask2 = cv2.imread('assets/sheepdog.png')
    masks = (mask0, mask1, mask2)

Add a line break, and then add this code to create the emotion predictor:

step_8_dog_emotion_mask.py

    # get emotion predictor
    predictor = get_image_to_emotion_predictor()

Your main function should now match the following:

step_8_dog_emotion_mask.py

def main():
    cap = cv2.VideoCapture(0)

    # load mask
    mask0 = cv2.imread('assets/dog.png')
    mask1 = cv2.imread('assets/dalmation.png')
    mask2 = cv2.imread('assets/sheepdog.png')
    masks = (mask0, mask1, mask2)

    # get emotion predictor
    predictor = get_image_to_emotion_predictor()

    # initialize front face classifier
    ...

Next, locate these lines:

step_8_dog_emotion_mask.py

            # apply mask
            frame[y0: y1, x0: x1] = apply_mask(frame[y0: y1, x0: x1], mask)

Insert the following line below the # apply mask comment to select the appropriate mask using the predictor. Since the predictor returns 0, 1, or 2, the result indexes into the masks tuple to pick the original dog, the Dalmatian, or the sheepdog, respectively:

step_8_dog_emotion_mask.py

            # apply mask
            mask = masks[predictor(frame[y:y+h, x: x+w])]
            frame[y0: y1, x0: x1] = apply_mask(frame[y0: y1, x0: x1], mask)


The completed file should look like this:

step_8_dog_emotion_mask.py

"""Test for face detection"""

from step_7_fer import get_image_to_emotion_predictor
import numpy as np
import cv2

def apply_mask(face: np.array, mask: np.array) -> np.array:
    """Add the mask to the provided face, and return the face with mask."""
    mask_h, mask_w, _ = mask.shape
    face_h, face_w, _ = face.shape

    # Resize the mask to fit on face
    factor = min(face_h / mask_h, face_w / mask_w)
    new_mask_w = int(factor * mask_w)
    new_mask_h = int(factor * mask_h)
    new_mask_shape = (new_mask_w, new_mask_h)
    resized_mask = cv2.resize(mask, new_mask_shape)

    # Add mask to face - ensure mask is centered
    face_with_mask = face.copy()
    non_white_pixels = (resized_mask < 250).all(axis=2)
    off_h = int((face_h - new_mask_h) / 2)
    off_w = int((face_w - new_mask_w) / 2)
    face_with_mask[off_h: off_h+new_mask_h, off_w: off_w+new_mask_w][non_white_pixels] = \
         resized_mask[non_white_pixels]

    return face_with_mask

def main():

    cap = cv2.VideoCapture(0)
    # load mask
    mask0 = cv2.imread('assets/dog.png')
    mask1 = cv2.imread('assets/dalmation.png')
    mask2 = cv2.imread('assets/sheepdog.png')
    masks = (mask0, mask1, mask2)

    # get emotion predictor
    predictor = get_image_to_emotion_predictor()

    # initialize front face classifier
    cascade = cv2.CascadeClassifier("assets/haarcascade_frontalface_default.xml")

    while True:
        # Capture frame-by-frame
        ret, frame = cap.read()
        frame_h, frame_w, _ = frame.shape

        # Convert to black-and-white
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blackwhite = cv2.equalizeHist(gray)

        rects = cascade.detectMultiScale(
            blackwhite, scaleFactor=1.3, minNeighbors=4, minSize=(30, 30),
            flags=cv2.CASCADE_SCALE_IMAGE)

        for x, y, w, h in rects:
            # crop a frame slightly larger than the face
            y0, y1 = int(y - 0.25*h), int(y + 0.75*h)
            x0, x1 = x, x + w
            # give up if the cropped frame would be out-of-bounds
            if x0 < 0 or y0 < 0 or x1 > frame_w or y1 > frame_h:
                continue
            # apply mask
            mask = masks[predictor(frame[y:y+h, x: x+w])]
            frame[y0: y1, x0: x1] = apply_mask(frame[y0: y1, x0: x1], mask)

        # Display the resulting frame
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

Save and exit your editor. Now launch the script:

python step_8_dog_emotion_mask.py

Now try it out! Smiling will register as “happy” and show the original dog. A neutral face or a frown will register as “sad” and yield the Dalmatian. A face of “surprise,” with a nice big jaw drop, will yield the sheepdog.


This concludes our emotion-based dog filter and foray into computer vision.

Conclusion

In this tutorial, you built a face detector and dog filter using computer vision and employed machine learning models to apply masks based on detected emotions.

Machine learning is widely applicable, but it’s up to the practitioner to consider the ethical implications of each application. The application you built in this tutorial was a fun exercise, but remember that you relied on OpenCV and an existing dataset to identify faces, rather than supplying your own data to train the models. The data and models used have significant impacts on how a program works.

For example, imagine a job search engine where the models were trained with data about candidates, such as race, gender, age, culture, first language, or other factors. Perhaps the developers then trained a model that enforces sparsity, which ends up reducing the feature space to a subspace where gender explains most of the variance. As a result, the model influences candidate job searches and even company selection processes based primarily on gender. Now consider more complex situations where the model is less interpretable and you don’t know what a particular feature corresponds to. You can learn more about this in Equality of Opportunity in Machine Learning by Professor Moritz Hardt at UC Berkeley.

Machine learning also involves an overwhelming amount of randomness and uncertainty. To understand this randomness and complexity, you’ll have to develop both mathematical intuition and probabilistic thinking skills. As a practitioner, it is up to you to dig into the theoretical underpinnings of machine learning.

#python #opencv #machine-learning

What is GEEK

Buddha Community

Build an Emotion-Based Dog Filter in Python 3 using Computer Vision
Veronica  Roob

Veronica Roob

1653475560

A Pure PHP Implementation Of The MessagePack Serialization Format

msgpack.php

A pure PHP implementation of the MessagePack serialization format.

Features

Installation

The recommended way to install the library is through Composer:

composer require rybakit/msgpack

Usage

Packing

To pack values you can either use an instance of a Packer:

$packer = new Packer();
$packed = $packer->pack($value);

or call a static method on the MessagePack class:

$packed = MessagePack::pack($value);

In the examples above, the method pack automatically packs a value depending on its type. However, not all PHP types can be uniquely translated to MessagePack types. For example, the MessagePack format defines map and array types, which are represented by a single array type in PHP. By default, the packer will pack a PHP array as a MessagePack array if it has sequential numeric keys, starting from 0 and as a MessagePack map otherwise:

$mpArr1 = $packer->pack([1, 2]);               // MP array [1, 2]
$mpArr2 = $packer->pack([0 => 1, 1 => 2]);     // MP array [1, 2]
$mpMap1 = $packer->pack([0 => 1, 2 => 3]);     // MP map {0: 1, 2: 3}
$mpMap2 = $packer->pack([1 => 2, 2 => 3]);     // MP map {1: 2, 2: 3}
$mpMap3 = $packer->pack(['a' => 1, 'b' => 2]); // MP map {a: 1, b: 2}

However, sometimes you need to pack a sequential array as a MessagePack map. To do this, use the packMap method:

$mpMap = $packer->packMap([1, 2]); // {0: 1, 1: 2}

Here is a list of type-specific packing methods:

$packer->packNil();           // MP nil
$packer->packBool(true);      // MP bool
$packer->packInt(42);         // MP int
$packer->packFloat(M_PI);     // MP float (32 or 64)
$packer->packFloat32(M_PI);   // MP float 32
$packer->packFloat64(M_PI);   // MP float 64
$packer->packStr('foo');      // MP str
$packer->packBin("\x80");     // MP bin
$packer->packArray([1, 2]);   // MP array
$packer->packMap(['a' => 1]); // MP map
$packer->packExt(1, "\xaa");  // MP ext

Check the "Custom types" section below on how to pack custom types.

Packing options

The Packer object supports a number of bitmask-based options for fine-tuning the packing process (defaults are in bold):

NameDescription
FORCE_STRForces PHP strings to be packed as MessagePack UTF-8 strings
FORCE_BINForces PHP strings to be packed as MessagePack binary data
DETECT_STR_BINDetects MessagePack str/bin type automatically
  
FORCE_ARRForces PHP arrays to be packed as MessagePack arrays
FORCE_MAPForces PHP arrays to be packed as MessagePack maps
DETECT_ARR_MAPDetects MessagePack array/map type automatically
  
FORCE_FLOAT32Forces PHP floats to be packed as 32-bits MessagePack floats
FORCE_FLOAT64Forces PHP floats to be packed as 64-bits MessagePack floats

The type detection mode (DETECT_STR_BIN/DETECT_ARR_MAP) adds some overhead which can be noticed when you pack large (16- and 32-bit) arrays or strings. However, if you know the value type in advance (for example, you only work with UTF-8 strings or/and associative arrays), you can eliminate this overhead by forcing the packer to use the appropriate type, which will save it from running the auto-detection routine. Another option is to explicitly specify the value type. The library provides 2 auxiliary classes for this, Map and Bin. Check the "Custom types" section below for details.

Examples:

// detect str/bin type and pack PHP 64-bit floats (doubles) to MP 32-bit floats
$packer = new Packer(PackOptions::DETECT_STR_BIN | PackOptions::FORCE_FLOAT32);

// these will throw MessagePack\Exception\InvalidOptionException
$packer = new Packer(PackOptions::FORCE_STR | PackOptions::FORCE_BIN);
$packer = new Packer(PackOptions::FORCE_FLOAT32 | PackOptions::FORCE_FLOAT64);

Unpacking

To unpack data you can either use an instance of a BufferUnpacker:

$unpacker = new BufferUnpacker();

$unpacker->reset($packed);
$value = $unpacker->unpack();

or call a static method on the MessagePack class:

$value = MessagePack::unpack($packed);

If the packed data is received in chunks (e.g. when reading from a stream), use the tryUnpack method, which attempts to unpack data and returns an array of unpacked messages (if any) instead of throwing an InsufficientDataException:

while ($chunk = ...) {
    $unpacker->append($chunk);
    if ($messages = $unpacker->tryUnpack()) {
        return $messages;
    }
}

If you want to unpack from a specific position in a buffer, use seek:

$unpacker->seek(42); // set position equal to 42 bytes
$unpacker->seek(-8); // set position to 8 bytes before the end of the buffer

To skip bytes from the current position, use skip:

$unpacker->skip(10); // set position to 10 bytes ahead of the current position

To get the number of remaining (unread) bytes in the buffer:

$unreadBytesCount = $unpacker->getRemainingCount();

To check whether the buffer has unread data:

$hasUnreadBytes = $unpacker->hasRemaining();

If needed, you can remove already read data from the buffer by calling:

$releasedBytesCount = $unpacker->release();

With the read method you can read raw (packed) data:

$packedData = $unpacker->read(2); // read 2 bytes

Besides the above methods BufferUnpacker provides type-specific unpacking methods, namely:

$unpacker->unpackNil();   // PHP null
$unpacker->unpackBool();  // PHP bool
$unpacker->unpackInt();   // PHP int
$unpacker->unpackFloat(); // PHP float
$unpacker->unpackStr();   // PHP UTF-8 string
$unpacker->unpackBin();   // PHP binary string
$unpacker->unpackArray(); // PHP sequential array
$unpacker->unpackMap();   // PHP associative array
$unpacker->unpackExt();   // PHP MessagePack\Type\Ext object

Unpacking options

The BufferUnpacker object supports a number of bitmask-based options for fine-tuning the unpacking process (defaults are in bold):

NameDescription
BIGINT_AS_STRConverts overflowed integers to strings [1]
BIGINT_AS_GMPConverts overflowed integers to GMP objects [2]
BIGINT_AS_DECConverts overflowed integers to Decimal\Decimal objects [3]

1. The binary MessagePack format has unsigned 64-bit as its largest integer data type, but PHP does not support such integers, which means that an overflow can occur during unpacking.

2. Make sure the GMP extension is enabled.

3. Make sure the Decimal extension is enabled.

Examples:

$packedUint64 = "\xcf"."\xff\xff\xff\xff"."\xff\xff\xff\xff";

$unpacker = new BufferUnpacker($packedUint64);
var_dump($unpacker->unpack()); // string(20) "18446744073709551615"

$unpacker = new BufferUnpacker($packedUint64, UnpackOptions::BIGINT_AS_GMP);
var_dump($unpacker->unpack()); // object(GMP) {...}

$unpacker = new BufferUnpacker($packedUint64, UnpackOptions::BIGINT_AS_DEC);
var_dump($unpacker->unpack()); // object(Decimal\Decimal) {...}

Custom types

In addition to the basic types, the library provides functionality to serialize and deserialize arbitrary types. This can be done in several ways, depending on your use case. Let's take a look at them.

Type objects

If you need to serialize an instance of one of your classes into one of the basic MessagePack types, the best way to do this is to implement the CanBePacked interface in the class. A good example of such a class is the Map type class that comes with the library. This type is useful when you want to explicitly specify that a given PHP array should be packed as a MessagePack map without triggering an automatic type detection routine:

$packer = new Packer();

$packedMap = $packer->pack(new Map([1, 2, 3]));
$packedArray = $packer->pack([1, 2, 3]);

More type examples can be found in the src/Type directory.

Type transformers

As with type objects, type transformers are only responsible for serializing values. They should be used when you need to serialize a value that does not implement the CanBePacked interface. Examples of such values could be instances of built-in or third-party classes that you don't own, or non-objects such as resources.

A transformer class must implement the CanPack interface. To use a transformer, it must first be registered in the packer. Here is an example of how to serialize PHP streams into the MessagePack bin format type using one of the supplied transformers, StreamTransformer:

$packer = new Packer(null, [new StreamTransformer()]);

$packedBin = $packer->pack(fopen('/path/to/file', 'r+'));

More type transformer examples can be found in the src/TypeTransformer directory.

Extensions

In contrast to the cases described above, extensions are intended to handle extension types and are responsible for both serialization and deserialization of values (types).

An extension class must implement the Extension interface. To use an extension, it must first be registered in the packer and the unpacker.

The MessagePack specification divides extension types into two groups: predefined and application-specific. Currently, there is only one predefined type in the specification, Timestamp.

Timestamp

The Timestamp extension type is a predefined type. Support for this type in the library is done through the TimestampExtension class. This class is responsible for handling Timestamp objects, which represent the number of seconds and optional adjustment in nanoseconds:

$timestampExtension = new TimestampExtension();

$packer = new Packer();
$packer = $packer->extendWith($timestampExtension);

$unpacker = new BufferUnpacker();
$unpacker = $unpacker->extendWith($timestampExtension);

$packedTimestamp = $packer->pack(Timestamp::now());
$timestamp = $unpacker->reset($packedTimestamp)->unpack();

$seconds = $timestamp->getSeconds();
$nanoseconds = $timestamp->getNanoseconds();

When using the MessagePack class, the Timestamp extension is already registered:

$packedTimestamp = MessagePack::pack(Timestamp::now());
$timestamp = MessagePack::unpack($packedTimestamp);

Application-specific extensions

In addition, the format can be extended with your own types. For example, to make the built-in PHP DateTime objects first-class citizens in your code, you can create a corresponding extension, as shown in the example. Please note, that custom extensions have to be registered with a unique extension ID (an integer from 0 to 127).

More extension examples can be found in the examples/MessagePack directory.

To learn more about how extension types can be useful, check out this article.

Exceptions

If an error occurs during packing/unpacking, a PackingFailedException or an UnpackingFailedException will be thrown, respectively. In addition, an InsufficientDataException can be thrown during unpacking.

An InvalidOptionException will be thrown in case an invalid option (or a combination of mutually exclusive options) is used.

Tests

Run tests as follows:

vendor/bin/phpunit

Also, if you already have Docker installed, you can run the tests in a docker container. First, create a container:

./dockerfile.sh | docker build -t msgpack -

The command above will create a container named msgpack with PHP 8.1 runtime. You may change the default runtime by defining the PHP_IMAGE environment variable:

PHP_IMAGE='php:8.0-cli' ./dockerfile.sh | docker build -t msgpack -

See a list of various images here.

Then run the unit tests:

docker run --rm -v $PWD:/msgpack -w /msgpack msgpack

Fuzzing

To ensure that the unpacking works correctly with malformed/semi-malformed data, you can use a testing technique called Fuzzing. The library ships with a help file (target) for PHP-Fuzzer and can be used as follows:

php-fuzzer fuzz tests/fuzz_buffer_unpacker.php

Performance

To check performance, run:

php -n -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php

Example output

Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000

=============================================
Test/Target            Packer  BufferUnpacker
---------------------------------------------
nil .................. 0.0030 ........ 0.0139
false ................ 0.0037 ........ 0.0144
true ................. 0.0040 ........ 0.0137
7-bit uint #1 ........ 0.0052 ........ 0.0120
7-bit uint #2 ........ 0.0059 ........ 0.0114
7-bit uint #3 ........ 0.0061 ........ 0.0119
5-bit sint #1 ........ 0.0067 ........ 0.0126
5-bit sint #2 ........ 0.0064 ........ 0.0132
5-bit sint #3 ........ 0.0066 ........ 0.0135
8-bit uint #1 ........ 0.0078 ........ 0.0200
8-bit uint #2 ........ 0.0077 ........ 0.0212
8-bit uint #3 ........ 0.0086 ........ 0.0203
16-bit uint #1 ....... 0.0111 ........ 0.0271
16-bit uint #2 ....... 0.0115 ........ 0.0260
16-bit uint #3 ....... 0.0103 ........ 0.0273
32-bit uint #1 ....... 0.0116 ........ 0.0326
32-bit uint #2 ....... 0.0118 ........ 0.0332
32-bit uint #3 ....... 0.0127 ........ 0.0325
64-bit uint #1 ....... 0.0140 ........ 0.0277
64-bit uint #2 ....... 0.0134 ........ 0.0294
64-bit uint #3 ....... 0.0134 ........ 0.0281
8-bit int #1 ......... 0.0086 ........ 0.0241
8-bit int #2 ......... 0.0089 ........ 0.0225
8-bit int #3 ......... 0.0085 ........ 0.0229
16-bit int #1 ........ 0.0118 ........ 0.0280
16-bit int #2 ........ 0.0121 ........ 0.0270
16-bit int #3 ........ 0.0109 ........ 0.0274
32-bit int #1 ........ 0.0128 ........ 0.0346
32-bit int #2 ........ 0.0118 ........ 0.0339
32-bit int #3 ........ 0.0135 ........ 0.0368
64-bit int #1 ........ 0.0138 ........ 0.0276
64-bit int #2 ........ 0.0132 ........ 0.0286
64-bit int #3 ........ 0.0137 ........ 0.0274
64-bit int #4 ........ 0.0180 ........ 0.0285
64-bit float #1 ...... 0.0134 ........ 0.0284
64-bit float #2 ...... 0.0125 ........ 0.0275
64-bit float #3 ...... 0.0126 ........ 0.0283
fix string #1 ........ 0.0035 ........ 0.0133
fix string #2 ........ 0.0094 ........ 0.0216
fix string #3 ........ 0.0094 ........ 0.0222
fix string #4 ........ 0.0091 ........ 0.0241
8-bit string #1 ...... 0.0122 ........ 0.0301
8-bit string #2 ...... 0.0118 ........ 0.0304
8-bit string #3 ...... 0.0119 ........ 0.0315
16-bit string #1 ..... 0.0150 ........ 0.0388
16-bit string #2 ..... 0.1545 ........ 0.1665
32-bit string ........ 0.1570 ........ 0.1756
wide char string #1 .. 0.0091 ........ 0.0236
wide char string #2 .. 0.0122 ........ 0.0313
8-bit binary #1 ...... 0.0100 ........ 0.0302
8-bit binary #2 ...... 0.0123 ........ 0.0324
8-bit binary #3 ...... 0.0126 ........ 0.0327
16-bit binary ........ 0.0168 ........ 0.0372
32-bit binary ........ 0.1588 ........ 0.1754
fix array #1 ......... 0.0042 ........ 0.0131
fix array #2 ......... 0.0294 ........ 0.0367
fix array #3 ......... 0.0412 ........ 0.0472
16-bit array #1 ...... 0.1378 ........ 0.1596
16-bit array #2 ........... S ............. S
32-bit array .............. S ............. S
complex array ........ 0.1865 ........ 0.2283
fix map #1 ........... 0.0725 ........ 0.1048
fix map #2 ........... 0.0319 ........ 0.0405
fix map #3 ........... 0.0356 ........ 0.0665
fix map #4 ........... 0.0465 ........ 0.0497
16-bit map #1 ........ 0.2540 ........ 0.3028
16-bit map #2 ............. S ............. S
32-bit map ................ S ............. S
complex map .......... 0.2372 ........ 0.2710
fixext 1 ............. 0.0283 ........ 0.0358
fixext 2 ............. 0.0291 ........ 0.0371
fixext 4 ............. 0.0302 ........ 0.0355
fixext 8 ............. 0.0288 ........ 0.0384
fixext 16 ............ 0.0293 ........ 0.0359
8-bit ext ............ 0.0302 ........ 0.0439
16-bit ext ........... 0.0334 ........ 0.0499
32-bit ext ........... 0.1845 ........ 0.1888
32-bit timestamp #1 .. 0.0337 ........ 0.0547
32-bit timestamp #2 .. 0.0335 ........ 0.0560
64-bit timestamp #1 .. 0.0371 ........ 0.0575
64-bit timestamp #2 .. 0.0374 ........ 0.0542
64-bit timestamp #3 .. 0.0356 ........ 0.0533
96-bit timestamp #1 .. 0.0362 ........ 0.0699
96-bit timestamp #2 .. 0.0381 ........ 0.0701
96-bit timestamp #3 .. 0.0367 ........ 0.0687
=============================================
Total                  2.7618          4.0820
Skipped                     4               4
Failed                      0               0
Ignored                     0               0

With JIT:

php -n -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.jit_buffer_size=64M -dopcache.jit=tracing -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php

Example output

Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000

=============================================
Test/Target            Packer  BufferUnpacker
---------------------------------------------
nil .................. 0.0005 ........ 0.0054
false ................ 0.0004 ........ 0.0059
true ................. 0.0004 ........ 0.0059
7-bit uint #1 ........ 0.0010 ........ 0.0047
7-bit uint #2 ........ 0.0010 ........ 0.0046
7-bit uint #3 ........ 0.0010 ........ 0.0046
5-bit sint #1 ........ 0.0025 ........ 0.0046
5-bit sint #2 ........ 0.0023 ........ 0.0046
5-bit sint #3 ........ 0.0024 ........ 0.0045
8-bit uint #1 ........ 0.0043 ........ 0.0081
8-bit uint #2 ........ 0.0043 ........ 0.0079
8-bit uint #3 ........ 0.0041 ........ 0.0080
16-bit uint #1 ....... 0.0064 ........ 0.0095
16-bit uint #2 ....... 0.0064 ........ 0.0091
16-bit uint #3 ....... 0.0064 ........ 0.0094
32-bit uint #1 ....... 0.0085 ........ 0.0114
32-bit uint #2 ....... 0.0077 ........ 0.0122
32-bit uint #3 ....... 0.0077 ........ 0.0120
64-bit uint #1 ....... 0.0085 ........ 0.0159
64-bit uint #2 ....... 0.0086 ........ 0.0157
64-bit uint #3 ....... 0.0086 ........ 0.0158
8-bit int #1 ......... 0.0042 ........ 0.0080
8-bit int #2 ......... 0.0042 ........ 0.0080
8-bit int #3 ......... 0.0042 ........ 0.0081
16-bit int #1 ........ 0.0065 ........ 0.0095
16-bit int #2 ........ 0.0065 ........ 0.0090
16-bit int #3 ........ 0.0056 ........ 0.0085
32-bit int #1 ........ 0.0067 ........ 0.0107
32-bit int #2 ........ 0.0066 ........ 0.0106
32-bit int #3 ........ 0.0063 ........ 0.0104
64-bit int #1 ........ 0.0072 ........ 0.0162
64-bit int #2 ........ 0.0073 ........ 0.0174
64-bit int #3 ........ 0.0072 ........ 0.0164
64-bit int #4 ........ 0.0077 ........ 0.0161
64-bit float #1 ...... 0.0053 ........ 0.0135
64-bit float #2 ...... 0.0053 ........ 0.0135
64-bit float #3 ...... 0.0052 ........ 0.0135
fix string #1 ....... -0.0002 ........ 0.0044
fix string #2 ........ 0.0035 ........ 0.0067
fix string #3 ........ 0.0035 ........ 0.0077
fix string #4 ........ 0.0033 ........ 0.0078
8-bit string #1 ...... 0.0059 ........ 0.0110
8-bit string #2 ...... 0.0063 ........ 0.0121
8-bit string #3 ...... 0.0064 ........ 0.0124
16-bit string #1 ..... 0.0099 ........ 0.0146
16-bit string #2 ..... 0.1522 ........ 0.1474
32-bit string ........ 0.1511 ........ 0.1483
wide char string #1 .. 0.0039 ........ 0.0084
wide char string #2 .. 0.0073 ........ 0.0123
8-bit binary #1 ...... 0.0040 ........ 0.0112
8-bit binary #2 ...... 0.0075 ........ 0.0123
8-bit binary #3 ...... 0.0077 ........ 0.0129
16-bit binary ........ 0.0096 ........ 0.0145
32-bit binary ........ 0.1535 ........ 0.1479
fix array #1 ......... 0.0008 ........ 0.0061
fix array #2 ......... 0.0121 ........ 0.0165
fix array #3 ......... 0.0193 ........ 0.0222
16-bit array #1 ...... 0.0607 ........ 0.0479
16-bit array #2 ........... S ............. S
32-bit array .............. S ............. S
complex array ........ 0.0749 ........ 0.0824
fix map #1 ........... 0.0329 ........ 0.0431
fix map #2 ........... 0.0161 ........ 0.0189
fix map #3 ........... 0.0205 ........ 0.0262
fix map #4 ........... 0.0252 ........ 0.0205
16-bit map #1 ........ 0.1016 ........ 0.0927
16-bit map #2 ............. S ............. S
32-bit map ................ S ............. S
complex map .......... 0.1096 ........ 0.1030
fixext 1 ............. 0.0157 ........ 0.0161
fixext 2 ............. 0.0175 ........ 0.0183
fixext 4 ............. 0.0156 ........ 0.0185
fixext 8 ............. 0.0163 ........ 0.0184
fixext 16 ............ 0.0164 ........ 0.0182
8-bit ext ............ 0.0158 ........ 0.0207
16-bit ext ........... 0.0203 ........ 0.0219
32-bit ext ........... 0.1614 ........ 0.1539
32-bit timestamp #1 .. 0.0195 ........ 0.0249
32-bit timestamp #2 .. 0.0188 ........ 0.0260
64-bit timestamp #1 .. 0.0207 ........ 0.0281
64-bit timestamp #2 .. 0.0212 ........ 0.0291
64-bit timestamp #3 .. 0.0207 ........ 0.0295
96-bit timestamp #1 .. 0.0222 ........ 0.0358
96-bit timestamp #2 .. 0.0228 ........ 0.0353
96-bit timestamp #3 .. 0.0210 ........ 0.0319
=============================================
Total                  1.6432          1.9674
Skipped                     4               4
Failed                      0               0
Ignored                     0               0

You may change default benchmark settings by defining the following environment variables:

NameDefault
MP_BENCH_TARGETSpure_p,pure_u, see a list of available targets
MP_BENCH_ITERATIONS100_000
MP_BENCH_DURATIONnot set
MP_BENCH_ROUNDS3
MP_BENCH_TESTS-@slow, see a list of available tests

For example:

export MP_BENCH_TARGETS=pure_p
export MP_BENCH_ITERATIONS=1000000
export MP_BENCH_ROUNDS=5
# a comma separated list of test names
export MP_BENCH_TESTS='complex array, complex map'
# or a group name
# export MP_BENCH_TESTS='-@slow' // @pecl_comp
# or a regexp
# export MP_BENCH_TESTS='/complex (array|map)/'

Another example, benchmarking both the library and the PECL extension:

MP_BENCH_TARGETS=pure_p,pure_u,pecl_p,pecl_u \
php -n -dextension=msgpack.so -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php

Example output

Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000

===========================================================================
Test/Target            Packer  BufferUnpacker  msgpack_pack  msgpack_unpack
---------------------------------------------------------------------------
nil .................. 0.0031 ........ 0.0141 ...... 0.0055 ........ 0.0064
false ................ 0.0039 ........ 0.0154 ...... 0.0056 ........ 0.0053
true ................. 0.0038 ........ 0.0139 ...... 0.0056 ........ 0.0044
7-bit uint #1 ........ 0.0061 ........ 0.0110 ...... 0.0059 ........ 0.0046
7-bit uint #2 ........ 0.0065 ........ 0.0119 ...... 0.0042 ........ 0.0029
7-bit uint #3 ........ 0.0054 ........ 0.0117 ...... 0.0045 ........ 0.0025
5-bit sint #1 ........ 0.0047 ........ 0.0103 ...... 0.0038 ........ 0.0022
5-bit sint #2 ........ 0.0048 ........ 0.0117 ...... 0.0038 ........ 0.0022
5-bit sint #3 ........ 0.0046 ........ 0.0102 ...... 0.0038 ........ 0.0023
8-bit uint #1 ........ 0.0063 ........ 0.0174 ...... 0.0039 ........ 0.0031
8-bit uint #2 ........ 0.0063 ........ 0.0167 ...... 0.0040 ........ 0.0029
8-bit uint #3 ........ 0.0063 ........ 0.0168 ...... 0.0039 ........ 0.0030
16-bit uint #1 ....... 0.0092 ........ 0.0222 ...... 0.0049 ........ 0.0030
16-bit uint #2 ....... 0.0096 ........ 0.0227 ...... 0.0042 ........ 0.0046
16-bit uint #3 ....... 0.0123 ........ 0.0274 ...... 0.0059 ........ 0.0051
32-bit uint #1 ....... 0.0136 ........ 0.0331 ...... 0.0060 ........ 0.0048
32-bit uint #2 ....... 0.0130 ........ 0.0336 ...... 0.0070 ........ 0.0048
32-bit uint #3 ....... 0.0127 ........ 0.0329 ...... 0.0051 ........ 0.0048
64-bit uint #1 ....... 0.0126 ........ 0.0268 ...... 0.0055 ........ 0.0049
64-bit uint #2 ....... 0.0135 ........ 0.0281 ...... 0.0052 ........ 0.0046
64-bit uint #3 ....... 0.0131 ........ 0.0274 ...... 0.0069 ........ 0.0044
8-bit int #1 ......... 0.0077 ........ 0.0236 ...... 0.0058 ........ 0.0044
8-bit int #2 ......... 0.0087 ........ 0.0244 ...... 0.0058 ........ 0.0048
8-bit int #3 ......... 0.0084 ........ 0.0241 ...... 0.0055 ........ 0.0049
16-bit int #1 ........ 0.0112 ........ 0.0271 ...... 0.0048 ........ 0.0045
16-bit int #2 ........ 0.0124 ........ 0.0292 ...... 0.0057 ........ 0.0049
16-bit int #3 ........ 0.0118 ........ 0.0270 ...... 0.0058 ........ 0.0050
32-bit int #1 ........ 0.0137 ........ 0.0366 ...... 0.0058 ........ 0.0051
32-bit int #2 ........ 0.0133 ........ 0.0366 ...... 0.0056 ........ 0.0049
32-bit int #3 ........ 0.0129 ........ 0.0350 ...... 0.0052 ........ 0.0048
64-bit int #1 ........ 0.0145 ........ 0.0254 ...... 0.0034 ........ 0.0025
64-bit int #2 ........ 0.0097 ........ 0.0214 ...... 0.0034 ........ 0.0025
64-bit int #3 ........ 0.0096 ........ 0.0287 ...... 0.0059 ........ 0.0050
64-bit int #4 ........ 0.0143 ........ 0.0277 ...... 0.0059 ........ 0.0046
64-bit float #1 ...... 0.0134 ........ 0.0281 ...... 0.0057 ........ 0.0052
64-bit float #2 ...... 0.0141 ........ 0.0281 ...... 0.0057 ........ 0.0050
64-bit float #3 ...... 0.0144 ........ 0.0282 ...... 0.0057 ........ 0.0050
fix string #1 ........ 0.0036 ........ 0.0143 ...... 0.0066 ........ 0.0053
fix string #2 ........ 0.0107 ........ 0.0222 ...... 0.0065 ........ 0.0068
fix string #3 ........ 0.0116 ........ 0.0245 ...... 0.0063 ........ 0.0069
fix string #4 ........ 0.0105 ........ 0.0253 ...... 0.0083 ........ 0.0077
8-bit string #1 ...... 0.0126 ........ 0.0318 ...... 0.0075 ........ 0.0088
8-bit string #2 ...... 0.0121 ........ 0.0295 ...... 0.0076 ........ 0.0086
8-bit string #3 ...... 0.0125 ........ 0.0293 ...... 0.0130 ........ 0.0093
16-bit string #1 ..... 0.0159 ........ 0.0368 ...... 0.0117 ........ 0.0086
16-bit string #2 ..... 0.1547 ........ 0.1686 ...... 0.1516 ........ 0.1373
32-bit string ........ 0.1558 ........ 0.1729 ...... 0.1511 ........ 0.1396
wide char string #1 .. 0.0098 ........ 0.0237 ...... 0.0066 ........ 0.0065
wide char string #2 .. 0.0128 ........ 0.0291 ...... 0.0061 ........ 0.0082
8-bit binary #1 ........... I ............. I ........... F ............. I
8-bit binary #2 ........... I ............. I ........... F ............. I
8-bit binary #3 ........... I ............. I ........... F ............. I
16-bit binary ............. I ............. I ........... F ............. I
32-bit binary ............. I ............. I ........... F ............. I
fix array #1 ......... 0.0040 ........ 0.0129 ...... 0.0120 ........ 0.0058
fix array #2 ......... 0.0279 ........ 0.0390 ...... 0.0143 ........ 0.0165
fix array #3 ......... 0.0415 ........ 0.0463 ...... 0.0162 ........ 0.0187
16-bit array #1 ...... 0.1349 ........ 0.1628 ...... 0.0334 ........ 0.0341
16-bit array #2 ........... S ............. S ........... S ............. S
32-bit array .............. S ............. S ........... S ............. S
complex array ............. I ............. I ........... F ............. F
fix map #1 ................ I ............. I ........... F ............. I
fix map #2 ........... 0.0345 ........ 0.0391 ...... 0.0143 ........ 0.0168
fix map #3 ................ I ............. I ........... F ............. I
fix map #4 ........... 0.0459 ........ 0.0473 ...... 0.0151 ........ 0.0163
16-bit map #1 ........ 0.2518 ........ 0.2962 ...... 0.0400 ........ 0.0490
16-bit map #2 ............. S ............. S ........... S ............. S
32-bit map ................ S ............. S ........... S ............. S
complex map .......... 0.2380 ........ 0.2682 ...... 0.0545 ........ 0.0579
fixext 1 .................. I ............. I ........... F ............. F
fixext 2 .................. I ............. I ........... F ............. F
fixext 4 .................. I ............. I ........... F ............. F
fixext 8 .................. I ............. I ........... F ............. F
fixext 16 ................. I ............. I ........... F ............. F
8-bit ext ................. I ............. I ........... F ............. F
16-bit ext ................ I ............. I ........... F ............. F
32-bit ext ................ I ............. I ........... F ............. F
32-bit timestamp #1 ....... I ............. I ........... F ............. F
32-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #1 ....... I ............. I ........... F ............. F
64-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #3 ....... I ............. I ........... F ............. F
96-bit timestamp #1 ....... I ............. I ........... F ............. F
96-bit timestamp #2 ....... I ............. I ........... F ............. F
96-bit timestamp #3 ....... I ............. I ........... F ............. F
===========================================================================
Total                  1.5625          2.3866        0.7735          0.7243
Skipped                     4               4             4               4
Failed                      0               0            24              17
Ignored                    24              24             0               7

With JIT:

MP_BENCH_TARGETS=pure_p,pure_u,pecl_p,pecl_u \
php -n -dextension=msgpack.so -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.jit_buffer_size=64M -dopcache.jit=tracing -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php

Example output

Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000

===========================================================================
Test/Target            Packer  BufferUnpacker  msgpack_pack  msgpack_unpack
---------------------------------------------------------------------------
nil .................. 0.0001 ........ 0.0052 ...... 0.0053 ........ 0.0042
false ................ 0.0007 ........ 0.0060 ...... 0.0057 ........ 0.0043
true ................. 0.0008 ........ 0.0060 ...... 0.0056 ........ 0.0041
7-bit uint #1 ........ 0.0031 ........ 0.0046 ...... 0.0062 ........ 0.0041
7-bit uint #2 ........ 0.0021 ........ 0.0043 ...... 0.0062 ........ 0.0041
7-bit uint #3 ........ 0.0022 ........ 0.0044 ...... 0.0061 ........ 0.0040
5-bit sint #1 ........ 0.0030 ........ 0.0048 ...... 0.0062 ........ 0.0040
5-bit sint #2 ........ 0.0032 ........ 0.0046 ...... 0.0062 ........ 0.0040
5-bit sint #3 ........ 0.0031 ........ 0.0046 ...... 0.0062 ........ 0.0040
8-bit uint #1 ........ 0.0054 ........ 0.0079 ...... 0.0062 ........ 0.0050
8-bit uint #2 ........ 0.0051 ........ 0.0079 ...... 0.0064 ........ 0.0044
8-bit uint #3 ........ 0.0051 ........ 0.0082 ...... 0.0062 ........ 0.0044
16-bit uint #1 ....... 0.0077 ........ 0.0094 ...... 0.0065 ........ 0.0045
16-bit uint #2 ....... 0.0077 ........ 0.0094 ...... 0.0063 ........ 0.0045
16-bit uint #3 ....... 0.0077 ........ 0.0095 ...... 0.0064 ........ 0.0047
32-bit uint #1 ....... 0.0088 ........ 0.0119 ...... 0.0063 ........ 0.0043
32-bit uint #2 ....... 0.0089 ........ 0.0117 ...... 0.0062 ........ 0.0039
32-bit uint #3 ....... 0.0089 ........ 0.0118 ...... 0.0063 ........ 0.0044
64-bit uint #1 ....... 0.0097 ........ 0.0155 ...... 0.0063 ........ 0.0045
64-bit uint #2 ....... 0.0095 ........ 0.0153 ...... 0.0061 ........ 0.0045
64-bit uint #3 ....... 0.0096 ........ 0.0156 ...... 0.0063 ........ 0.0047
8-bit int #1 ......... 0.0053 ........ 0.0083 ...... 0.0062 ........ 0.0044
8-bit int #2 ......... 0.0052 ........ 0.0080 ...... 0.0062 ........ 0.0044
8-bit int #3 ......... 0.0052 ........ 0.0080 ...... 0.0062 ........ 0.0043
16-bit int #1 ........ 0.0089 ........ 0.0097 ...... 0.0069 ........ 0.0046
16-bit int #2 ........ 0.0075 ........ 0.0093 ...... 0.0063 ........ 0.0043
16-bit int #3 ........ 0.0075 ........ 0.0094 ...... 0.0062 ........ 0.0046
32-bit int #1 ........ 0.0086 ........ 0.0122 ...... 0.0063 ........ 0.0044
32-bit int #2 ........ 0.0087 ........ 0.0120 ...... 0.0066 ........ 0.0046
32-bit int #3 ........ 0.0086 ........ 0.0121 ...... 0.0060 ........ 0.0044
64-bit int #1 ........ 0.0096 ........ 0.0149 ...... 0.0060 ........ 0.0045
64-bit int #2 ........ 0.0096 ........ 0.0157 ...... 0.0062 ........ 0.0044
64-bit int #3 ........ 0.0096 ........ 0.0160 ...... 0.0063 ........ 0.0046
64-bit int #4 ........ 0.0097 ........ 0.0157 ...... 0.0061 ........ 0.0044
64-bit float #1 ...... 0.0079 ........ 0.0153 ...... 0.0056 ........ 0.0044
64-bit float #2 ...... 0.0079 ........ 0.0152 ...... 0.0057 ........ 0.0045
64-bit float #3 ...... 0.0079 ........ 0.0155 ...... 0.0057 ........ 0.0044
fix string #1 ........ 0.0010 ........ 0.0045 ...... 0.0071 ........ 0.0044
fix string #2 ........ 0.0048 ........ 0.0075 ...... 0.0070 ........ 0.0060
fix string #3 ........ 0.0048 ........ 0.0086 ...... 0.0068 ........ 0.0060
fix string #4 ........ 0.0050 ........ 0.0088 ...... 0.0070 ........ 0.0059
8-bit string #1 ...... 0.0081 ........ 0.0129 ...... 0.0069 ........ 0.0062
8-bit string #2 ...... 0.0086 ........ 0.0128 ...... 0.0069 ........ 0.0065
8-bit string #3 ...... 0.0086 ........ 0.0126 ...... 0.0115 ........ 0.0065
16-bit string #1 ..... 0.0105 ........ 0.0137 ...... 0.0128 ........ 0.0068
16-bit string #2 ..... 0.1510 ........ 0.1486 ...... 0.1526 ........ 0.1391
32-bit string ........ 0.1517 ........ 0.1475 ...... 0.1504 ........ 0.1370
wide char string #1 .. 0.0044 ........ 0.0085 ...... 0.0067 ........ 0.0057
wide char string #2 .. 0.0081 ........ 0.0125 ...... 0.0069 ........ 0.0063
8-bit binary #1 ........... I ............. I ........... F ............. I
8-bit binary #2 ........... I ............. I ........... F ............. I
8-bit binary #3 ........... I ............. I ........... F ............. I
16-bit binary ............. I ............. I ........... F ............. I
32-bit binary ............. I ............. I ........... F ............. I
fix array #1 ......... 0.0014 ........ 0.0059 ...... 0.0132 ........ 0.0055
fix array #2 ......... 0.0146 ........ 0.0156 ...... 0.0155 ........ 0.0148
fix array #3 ......... 0.0211 ........ 0.0229 ...... 0.0179 ........ 0.0180
16-bit array #1 ...... 0.0673 ........ 0.0498 ...... 0.0343 ........ 0.0388
16-bit array #2 ........... S ............. S ........... S ............. S
32-bit array .............. S ............. S ........... S ............. S
complex array ............. I ............. I ........... F ............. F
fix map #1 ................ I ............. I ........... F ............. I
fix map #2 ........... 0.0148 ........ 0.0180 ...... 0.0156 ........ 0.0179
fix map #3 ................ I ............. I ........... F ............. I
fix map #4 ........... 0.0252 ........ 0.0201 ...... 0.0214 ........ 0.0167
16-bit map #1 ........ 0.1027 ........ 0.0836 ...... 0.0388 ........ 0.0510
16-bit map #2 ............. S ............. S ........... S ............. S
32-bit map ................ S ............. S ........... S ............. S
complex map .......... 0.1104 ........ 0.1010 ...... 0.0556 ........ 0.0602
fixext 1 .................. I ............. I ........... F ............. F
fixext 2 .................. I ............. I ........... F ............. F
fixext 4 .................. I ............. I ........... F ............. F
fixext 8 .................. I ............. I ........... F ............. F
fixext 16 ................. I ............. I ........... F ............. F
8-bit ext ................. I ............. I ........... F ............. F
16-bit ext ................ I ............. I ........... F ............. F
32-bit ext ................ I ............. I ........... F ............. F
32-bit timestamp #1 ....... I ............. I ........... F ............. F
32-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #1 ....... I ............. I ........... F ............. F
64-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #3 ....... I ............. I ........... F ............. F
96-bit timestamp #1 ....... I ............. I ........... F ............. F
96-bit timestamp #2 ....... I ............. I ........... F ............. F
96-bit timestamp #3 ....... I ............. I ........... F ............. F
===========================================================================
Total                  0.9642          1.0909        0.8224          0.7213
Skipped                     4               4             4               4
Failed                      0               0            24              17
Ignored                    24              24             0               7

Note that the msgpack extension (v2.1.2) doesn't support ext, bin and UTF-8 str types.

License

The library is released under the MIT License. See the bundled LICENSE file for details.

Author: rybakit
Source Code: https://github.com/rybakit/msgpack.php
License: MIT License

#php 

Ray  Patel

Ray Patel

1619510796

Lambda, Map, Filter functions in python

Welcome to my Blog, In this article, we will learn python lambda function, Map function, and filter function.

Lambda function in python: Lambda is a one line anonymous function and lambda takes any number of arguments but can only have one expression and python lambda syntax is

Syntax: x = lambda arguments : expression

Now i will show you some python lambda function examples:

#python #anonymous function python #filter function in python #lambda #lambda python 3 #map python #python filter #python filter lambda #python lambda #python lambda examples #python map

Ray  Patel

Ray Patel

1619518440

top 30 Python Tips and Tricks for Beginners

Welcome to my Blog , In this article, you are going to learn the top 10 python tips and tricks.

1) Swap two numbers.

2) Reverse a string in Python.

3) Create a single string from all the elements in a list.

4) Chain comparison operators.

5) Print the file path of imported modules.

6) Return multiple values from functions.

7) Find the most frequent value in a list.

8) Check the memory usage of an object.
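Here is a minimal sketch of how tips 1 through 8 are commonly implemented (the values and names are illustrative):

import os
import sys
from collections import Counter

# 1) Swap two numbers without a temporary variable
a, b = 5, 10
a, b = b, a
print(a, b)                                  # 10 5

# 2) Reverse a string with slicing
print("Python"[::-1])                        # nohtyP

# 3) Create a single string from all the elements in a list
print(" ".join(["tips", "and", "tricks"]))   # tips and tricks

# 4) Chain comparison operators
x = 7
print(3 < x < 10)                            # True

# 5) Print the file path of an imported module
print(os.__file__)

# 6) Return multiple values from a function
def min_max(values):
    return min(values), max(values)

low, high = min_max([4, 1, 9])
print(low, high)                             # 1 9

# 7) Find the most frequent value in a list
items = [1, 2, 2, 3, 2]
print(Counter(items).most_common(1)[0][0])   # 2

# 8) Check the memory usage of an object
print(sys.getsizeof(items))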

#python #python hacks tricks #python learning tips #python programming tricks #python tips #python tips and tricks #python tips and tricks advanced #python tips and tricks for beginners #python tips tricks and techniques #python tutorial #tips and tricks in python #tips to learn python #top 30 python tips and tricks for beginners

Shardul Bhatt

Shardul Bhatt

1626775355

Why use Python for Software Development

Few programming languages are as versatile as Python. It enables building cutting-edge applications with ease, and developers are still exploring the full potential of end-to-end Python development services in various areas.

By areas, we mean FinTech, HealthTech, InsureTech, Cybersecurity, and more. These are New Economy sectors, and Python can serve every one of them. Most of them require massive computational capability, and Python's code is dynamic and powerful - capable of handling heavy traffic and substantial algorithmic workloads.

Software development is multidimensional today. Enterprise software requires intelligent applications with AI and ML capabilities, while consumer-facing applications need data analysis to deliver a better customer experience. Netflix, Trello, and Amazon are real-world examples of such applications, and Python helps build them with ease.

5 Reasons to Use Python for Programming Web Apps

Python can do so many things that developers never run out of reasons to admire it. Python application development isn't restricted to web and enterprise applications; it is highly adaptable and excellent for a wide range of uses.

Robust frameworks 

Python is known for its tools and frameworks - there's a framework for everything. Django is useful for building web applications, enterprise applications, scientific applications, and numerical computing. Flask is another web development framework, one with no external dependencies.

Web2Py, CherryPy, and Falcon offer powerful capabilities for customizing Python development services. Most of them are open-source frameworks that enable rapid development.

Simple to read and write

Python has a streamlined syntax - one that reads much like English. New Python developers can easily see where they stand in the development process, and the simplicity of writing allows for quick application building.

The motivation behind Python, as stated by its creator Guido van Rossum, was to make the language understandable even to beginner developers. Its simple coding style also lets developers make quick changes without getting bogged down in unnecessary details.

Used by the best

Python isn't just another programming language - it must have something special, which is why the industry giants use it, and for many different purposes. Developers at Google use Python to build system administration tools, parallel data pushers, code review, testing and QA, and much more. Netflix uses Python web development services for its recommendation algorithm and media player.

Massive community support

Python has a steadily growing community that offers enormous support, from beginners to experts. There are plenty of tutorials, documentation, and guides available for Python web development solutions.

Today, many universities teach Python first, adding to the number of people in the community. Python developers frequently collaborate on projects and help each other with algorithmic, functional, and application problem-solving.

Progressive applications 

Python is the biggest contributor to data science, Machine Learning, and Artificial Intelligence at any enterprise software development company. Its use cases in cutting-edge applications are the most compelling reason for its success. Python is the second most popular tool after R for data analytics.

The ease of organizing, managing, and visualizing data through dedicated libraries makes it ideal for data-based applications. TensorFlow for neural networks and OpenCV for computer vision are two of Python's best-known use cases in machine learning applications.

Summary

Considering the advances in programming and technology, Python is a yes for a diverse range of applications. Game development, web application development services, GUI development, ML and AI development, enterprise and consumer applications - all of them use Python to its full potential.

The disadvantages of Python web development solutions are often overlooked by developers and organizations because of the advantages it provides. They focus on quality over speed and performance over errors. That is why it makes sense to use Python for building the applications of the future.

#python development services #python development company #python app development #python development #python in web development #python software development

A Wrapper for Sembast and SQFlite to Enable Easy

FHIR_DB

This is really just a wrapper around Sembast_SQFLite - so all of the heavy lifting was done by Alex Tekartik. I highly recommend that if you have any questions about working with this package, you take a look at Sembast. He's also just a super nice guy, and he even answered a question for me when I was deciding which Sembast version to use. As usual, ResoCoder also has a good tutorial.

I have an interest in low-resource settings and thus a specific reason to be able to store data offline. To encourage this use, there are a number of other packages I have created based around the data format FHIR. FHIR® is the registered trademark of HL7 and is used with the permission of HL7. Use of the FHIR trademark does not constitute endorsement of this product by HL7.

Using the Db

So, while not absolutely necessary, I highly recommend that you use some sort of interface class. This adds the benefit of more easily handling errors, plus if you change to a different database in the future, you don't have to change the rest of your app, just the interface.

I've used something like this in my projects:

// Either, left, and right are typically provided by package:dartz, and
// DbFailure is a custom failure type defined elsewhere in the app.
class IFhirDb {
  IFhirDb();
  final ResourceDao resourceDao = ResourceDao();

  Future<Either<DbFailure, Resource>> save(Resource resource) async {
    Resource resultResource;
    try {
      resultResource = await resourceDao.save(resource);
    } catch (error) {
      return left(DbFailure.unableToSave(error: error.toString()));
    }
    return right(resultResource);
  }

  Future<Either<DbFailure, List<Resource>>> returnListOfSingleResourceType(
      String resourceType) async {
    List<Resource> resultList;
    try {
      resultList =
          await resourceDao.getAllSortedById(resourceType: resourceType);
    } catch (error) {
      return left(DbFailure.unableToObtainList(error: error.toString()));
    }
    return right(resultList);
  }

  Future<Either<DbFailure, List<Resource>>> searchFunction(
      String resourceType, String searchString, String reference) async {
    List<Resource> resultList;
    try {
      resultList =
          await resourceDao.searchFor(resourceType, searchString, reference);
    } catch (error) {
      return left(DbFailure.unableToObtainList(error: error.toString()));
    }
    return right(resultList);
  }
}

I like this because if there's an I/O error or something similar, it won't crash your app. You can then call this interface in your app like the following:

final patient = Patient(
    resourceType: 'Patient',
    name: [HumanName(text: 'New Patient Name')],
    birthDate: Date(DateTime.now()),
);

final saveResult = await IFhirDb().save(patient);

This will save your newly created patient to the locally embedded database.

IMPORTANT: this database expects that all previously created resources have an id. When you save a resource, it will check whether that resource type has already been stored. (Each resource type is saved in its own store in the database.) It will then check if there is an ID. If there's no ID, it will create a new one for that resource (along with metadata on version number and creation time), save it, and return the resource. If it already has an ID, it will copy the old version of the resource into a _history store, then update the metadata of the new resource and save that version into the appropriate store for that resource. If, for instance, we have a previously created patient:

{
    "resourceType": "Patient",
    "id": "fhirfli-294057507-6811107",
    "meta": {
        "versionId": "1",
        "lastUpdated": "2020-10-16T19:41:28.054369Z"
    },
    "name": [
        {
            "given": ["New"],
            "family": "Patient"
        }
    ],
    "birthDate": "2020-10-16"
}

If we then update the last name to 'Provider', the above version of the patient will be kept in _history, while the 'Patient' store in the db will hold the updated version:

{
    "resourceType": "Patient",
    "id": "fhirfli-294057507-6811107",
    "meta": {
        "versionId": "2",
        "lastUpdated": "2020-10-16T19:45:07.316698Z"
    },
    "name": [
        {
            "given": ["New"],
            "family": "Provider"
        }
    ],
    "birthDate": "2020-10-16"
}

This way we can keep track of all previous versions of all resources (which is obviously important in medicine).

Most of the interactions (saving, deleting, etc.) work the way you'd expect. The only difference is search. Because Sembast is NoSQL, we can search on any of the fields in a resource. If, in our interface class, we have the following function:

  Future<Either<DbFailure, List<Resource>>> searchFunction(
      String resourceType, String searchString, String reference) async {
    List<Resource> resultList;
    try {
      resultList =
          await resourceDao.searchFor(resourceType, searchString, reference);
    } catch (error) {
      return left(DbFailure.unableToObtainList(error: error.toString()));
    }
    return right(resultList);
  }

You can search for all immunizations of a certain patient:

searchFunction(
        'Immunization', 'patient.reference', 'Patient/$patientId');

This function will search through all entries in the 'Immunization' store. It will look at all 'patient.reference' fields, and return any that match 'Patient/$patientId'.

The last thing I'll mention is that this is a password-protected db, using AES-256 encryption (although it can also use Salsa20). Anytime you use the db, you have the option of using a password for encryption/decryption. Remember, if you set up the database using encryption, you will only be able to access it using that same password. When you're ready to change the password, you will need to call the update password function. If we again assume we created a change password method in our interface, it might look something like this:

class IFhirDb {
  IFhirDb();
  final ResourceDao resourceDao = ResourceDao();
  ...

  Future<Either<DbFailure, Unit>> updatePassword(
      String oldPassword, String newPassword) async {
    try {
      await resourceDao.updatePw(oldPassword, newPassword);
    } catch (error) {
      return left(DbFailure.unableToUpdatePassword(error: error.toString()));
    }
    return right(unit); // dartz's unit value, not the Unit type itself
  }
}

You don't have to use a password, and in that case, it will save the db file as plain text. If you want to add a password later, it will encrypt it at that time.

General Store

After using this for a while in an app, I've realized that it needs to be able to store data other than just FHIR resources, at least on occasion. For this, I've added a second class for all versions of the database called GeneralDao. This is similar to the ResourceDao, but with fewer options. So, in order to save something, it would look like this:

await GeneralDao().save('password', {'new':'map'});
await GeneralDao().save('password', {'new':'map'}, 'key');

The difference between these two options is that the first one will generate a key for the map being stored, while the second will store the map using the key provided. Both will return the key after successfully storing the map.

Other functions available include:

// deletes everything in the general store
await GeneralDao().deleteAllGeneral('password'); 

// delete specific entry
await GeneralDao().delete('password','key'); 

// returns map with that key
await GeneralDao().find('password', 'key'); 

FHIR® is a registered trademark of Health Level Seven International (HL7) and its use does not constitute an endorsement of products by HL7®

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add fhir_db

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  fhir_db: ^0.4.3

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:fhir_db/dstu2.dart';
import 'package:fhir_db/dstu2/fhir_db.dart';
import 'package:fhir_db/dstu2/general_dao.dart';
import 'package:fhir_db/dstu2/resource_dao.dart';
import 'package:fhir_db/encrypt/aes.dart';
import 'package:fhir_db/encrypt/salsa.dart';
import 'package:fhir_db/r4.dart';
import 'package:fhir_db/r4/fhir_db.dart';
import 'package:fhir_db/r4/general_dao.dart';
import 'package:fhir_db/r4/resource_dao.dart';
import 'package:fhir_db/r5.dart';
import 'package:fhir_db/r5/fhir_db.dart';
import 'package:fhir_db/r5/general_dao.dart';
import 'package:fhir_db/r5/resource_dao.dart';
import 'package:fhir_db/stu3.dart';
import 'package:fhir_db/stu3/fhir_db.dart';
import 'package:fhir_db/stu3/general_dao.dart';
import 'package:fhir_db/stu3/resource_dao.dart'; 

example/lib/main.dart

import 'package:fhir/r4.dart';
import 'package:fhir_db/r4.dart';
import 'package:flutter/material.dart';
import 'package:test/test.dart';

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();

  final resourceDao = ResourceDao();

  // await resourceDao.updatePw('newPw', null);
  await resourceDao.deleteAllResources(null);

  group('Playing with passwords', () {
    test('Playing with Passwords', () async {
      final patient = Patient(id: Id('1'));

      final saved = await resourceDao.save(null, patient);

      await resourceDao.updatePw(null, 'newPw');
      final search1 = await resourceDao.find('newPw',
          resourceType: R4ResourceType.Patient, id: Id('1'));
      expect(saved, search1[0]);

      await resourceDao.updatePw('newPw', 'newerPw');
      final search2 = await resourceDao.find('newerPw',
          resourceType: R4ResourceType.Patient, id: Id('1'));
      expect(saved, search2[0]);

      await resourceDao.updatePw('newerPw', null);
      final search3 = await resourceDao.find(null,
          resourceType: R4ResourceType.Patient, id: Id('1'));
      expect(saved, search3[0]);

      await resourceDao.deleteAllResources(null);
    });
  });

  final id = Id('12345');
  group('Saving Things:', () {
    test('Save Patient', () async {
      final humanName = HumanName(family: 'Atreides', given: ['Duke']);
      final patient = Patient(id: id, name: [humanName]);
      final saved = await resourceDao.save(null, patient);

      expect(saved.id, id);

      expect((saved as Patient).name?[0], humanName);
    });

    test('Save Organization', () async {
      final organization = Organization(id: id, name: 'FhirFli');
      final saved = await resourceDao.save(null, organization);

      expect(saved.id, id);

      expect((saved as Organization).name, 'FhirFli');
    });

    test('Save Observation1', () async {
      final observation1 = Observation(
        id: Id('obs1'),
        code: CodeableConcept(text: 'Observation #1'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save(null, observation1);

      expect(saved.id, Id('obs1'));

      expect((saved as Observation).code.text, 'Observation #1');
    });

    test('Save Observation1 Again', () async {
      final observation1 = Observation(
          id: Id('obs1'),
          code: CodeableConcept(text: 'Observation #1 - Updated'));
      final saved = await resourceDao.save(null, observation1);

      expect(saved.id, Id('obs1'));

      expect((saved as Observation).code.text, 'Observation #1 - Updated');

      expect(saved.meta?.versionId, Id('2'));
    });

    test('Save Observation2', () async {
      final observation2 = Observation(
        id: Id('obs2'),
        code: CodeableConcept(text: 'Observation #2'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save(null, observation2);

      expect(saved.id, Id('obs2'));

      expect((saved as Observation).code.text, 'Observation #2');
    });

    test('Save Observation3', () async {
      final observation3 = Observation(
        id: Id('obs3'),
        code: CodeableConcept(text: 'Observation #3'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save(null, observation3);

      expect(saved.id, Id('obs3'));

      expect((saved as Observation).code.text, 'Observation #3');
    });
  });

  group('Finding Things:', () {
    test('Find 1st Patient', () async {
      final search = await resourceDao.find(null,
          resourceType: R4ResourceType.Patient, id: id);
      final humanName = HumanName(family: 'Atreides', given: ['Duke']);

      expect(search.length, 1);

      expect((search[0] as Patient).name?[0], humanName);
    });

    test('Find 3rd Observation', () async {
      final search = await resourceDao.find(null,
          resourceType: R4ResourceType.Observation, id: Id('obs3'));

      expect(search.length, 1);

      expect(search[0].id, Id('obs3'));

      expect((search[0] as Observation).code.text, 'Observation #3');
    });

    test('Find All Observations', () async {
      final search = await resourceDao.getResourceType(
        null,
        resourceTypes: [R4ResourceType.Observation],
      );

      expect(search.length, 3);

      final idList = [];
      for (final obs in search) {
        idList.add(obs.id.toString());
      }

      expect(idList.contains('obs1'), true);

      expect(idList.contains('obs2'), true);

      expect(idList.contains('obs3'), true);
    });

    test('Find All (non-historical) Resources', () async {
      final search = await resourceDao.getAll(null);

      expect(search.length, 5);
      final patList = search.toList();
      final orgList = search.toList();
      final obsList = search.toList();
      patList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Patient);
      orgList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Organization);
      obsList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Observation);

      expect(patList.length, 1);

      expect(orgList.length, 1);

      expect(obsList.length, 3);
    });
  });

  group('Deleting Things:', () {
    test('Delete 2nd Observation', () async {
      await resourceDao.delete(
          null, null, R4ResourceType.Observation, Id('obs2'), null, null);

      final search = await resourceDao.getResourceType(
        null,
        resourceTypes: [R4ResourceType.Observation],
      );

      expect(search.length, 2);

      final idList = [];
      for (final obs in search) {
        idList.add(obs.id.toString());
      }

      expect(idList.contains('obs1'), true);

      expect(idList.contains('obs2'), false);

      expect(idList.contains('obs3'), true);
    });

    test('Delete All Observations', () async {
      await resourceDao.deleteSingleType(null,
          resourceType: R4ResourceType.Observation);

      final search = await resourceDao.getAll(null);

      expect(search.length, 2);

      final patList = search.toList();
      final orgList = search.toList();
      patList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Patient);
      orgList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Organization);

      expect(patList.length, 1);

      expect(orgList.length, 1);
    });

    test('Delete All Resources', () async {
      await resourceDao.deleteAllResources(null);

      final search = await resourceDao.getAll(null);

      expect(search.length, 0);
    });
  });

  group('Password - Saving Things:', () {
    test('Save Patient', () async {
      await resourceDao.updatePw(null, 'newPw');
      final humanName = HumanName(family: 'Atreides', given: ['Duke']);
      final patient = Patient(id: id, name: [humanName]);
      final saved = await resourceDao.save('newPw', patient);

      expect(saved.id, id);

      expect((saved as Patient).name?[0], humanName);
    });

    test('Save Organization', () async {
      final organization = Organization(id: id, name: 'FhirFli');
      final saved = await resourceDao.save('newPw', organization);

      expect(saved.id, id);

      expect((saved as Organization).name, 'FhirFli');
    });

    test('Save Observation1', () async {
      final observation1 = Observation(
        id: Id('obs1'),
        code: CodeableConcept(text: 'Observation #1'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save('newPw', observation1);

      expect(saved.id, Id('obs1'));

      expect((saved as Observation).code.text, 'Observation #1');
    });

    test('Save Observation1 Again', () async {
      final observation1 = Observation(
          id: Id('obs1'),
          code: CodeableConcept(text: 'Observation #1 - Updated'));
      final saved = await resourceDao.save('newPw', observation1);

      expect(saved.id, Id('obs1'));

      expect((saved as Observation).code.text, 'Observation #1 - Updated');

      expect(saved.meta?.versionId, Id('2'));
    });

    test('Save Observation2', () async {
      final observation2 = Observation(
        id: Id('obs2'),
        code: CodeableConcept(text: 'Observation #2'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save('newPw', observation2);

      expect(saved.id, Id('obs2'));

      expect((saved as Observation).code.text, 'Observation #2');
    });

    test('Save Observation3', () async {
      final observation3 = Observation(
        id: Id('obs3'),
        code: CodeableConcept(text: 'Observation #3'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save('newPw', observation3);

      expect(saved.id, Id('obs3'));

      expect((saved as Observation).code.text, 'Observation #3');
    });
  });

  group('Password - Finding Things:', () {
    test('Find 1st Patient', () async {
      final search = await resourceDao.find('newPw',
          resourceType: R4ResourceType.Patient, id: id);
      final humanName = HumanName(family: 'Atreides', given: ['Duke']);

      expect(search.length, 1);

      expect((search[0] as Patient).name?[0], humanName);
    });

    test('Find 3rd Observation', () async {
      final search = await resourceDao.find('newPw',
          resourceType: R4ResourceType.Observation, id: Id('obs3'));

      expect(search.length, 1);

      expect(search[0].id, Id('obs3'));

      expect((search[0] as Observation).code.text, 'Observation #3');
    });

    test('Find All Observations', () async {
      final search = await resourceDao.getResourceType(
        'newPw',
        resourceTypes: [R4ResourceType.Observation],
      );

      expect(search.length, 3);

      final idList = [];
      for (final obs in search) {
        idList.add(obs.id.toString());
      }

      expect(idList.contains('obs1'), true);

      expect(idList.contains('obs2'), true);

      expect(idList.contains('obs3'), true);
    });

    test('Find All (non-historical) Resources', () async {
      final search = await resourceDao.getAll('newPw');

      expect(search.length, 5);
      final patList = search.toList();
      final orgList = search.toList();
      final obsList = search.toList();
      patList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Patient);
      orgList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Organization);
      obsList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Observation);

      expect(patList.length, 1);

      expect(orgList.length, 1);

      expect(obsList.length, 3);
    });
  });

  group('Password - Deleting Things:', () {
    test('Delete 2nd Observation', () async {
      await resourceDao.delete(
          'newPw', null, R4ResourceType.Observation, Id('obs2'), null, null);

      final search = await resourceDao.getResourceType(
        'newPw',
        resourceTypes: [R4ResourceType.Observation],
      );

      expect(search.length, 2);

      final idList = [];
      for (final obs in search) {
        idList.add(obs.id.toString());
      }

      expect(idList.contains('obs1'), true);

      expect(idList.contains('obs2'), false);

      expect(idList.contains('obs3'), true);
    });

    test('Delete All Observations', () async {
      await resourceDao.deleteSingleType('newPw',
          resourceType: R4ResourceType.Observation);

      final search = await resourceDao.getAll('newPw');

      expect(search.length, 2);

      final patList = search.toList();
      final orgList = search.toList();
      patList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Patient);
      orgList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Organization);

      expect(patList.length, 1);

      expect(orgList.length, 1);
    });

    test('Delete All Resources', () async {
      await resourceDao.deleteAllResources('newPw');

      final search = await resourceDao.getAll('newPw');

      expect(search.length, 0);

      await resourceDao.updatePw('newPw', null);
    });
  });
} 

Download Details:

Author: MayJuun

Source Code: https://github.com/MayJuun/fhir/tree/main/fhir_db

#sqflite  #dart  #flutter