Introduction

In my last post, I covered some of the difficulties of working with custom neural networks in Lens Studio. This time, I’ll show you what that looks like in practice, with a step-by-step guide to building and implementing your own custom neural network in a Snap Lens.

By the end of this post, you’ll know how to:

  1. Train your own custom segmentation model using Fritz AI Studio. I used face masks for my example, but you can pick whatever target object(s) you want.
  2. Incorporate your model into a Lens Studio project via an ML Component.
  3. Visualize real-time predictions from the model.
  4. Make a color overlay and an interactive color slider to change the appearance of the segmented object.
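Step 4 above ultimately reduces to some simple math: map a slider position in [0, 1] to an overlay color, then blend that color into each pixel weighted by the segmentation mask. The Lens Studio API has its own types for this, so the function names below (`hueToRgb`, `blendPixel`) are illustrative assumptions; this is just a minimal sketch of the underlying logic in plain JavaScript:

```javascript
// Map a slider position t in [0, 1] to a hue on the color wheel,
// then convert hue (with full saturation and value) to RGB in [0, 1].
// This is the standard HSV->RGB conversion restricted to s = v = 1.
function hueToRgb(t) {
  const h = t * 6; // six 60-degree sectors of the hue wheel
  const x = 1 - Math.abs((h % 2) - 1);
  if (h < 1) return [1, x, 0];
  if (h < 2) return [x, 1, 0];
  if (h < 3) return [0, 1, x];
  if (h < 4) return [0, x, 1];
  if (h < 5) return [x, 0, 1];
  return [1, 0, x];
}

// Blend the overlay color into a pixel, weighted by the segmentation
// mask value m in [0, 1] (0 = background, 1 = target object).
function blendPixel(pixel, overlay, m, opacity) {
  const a = m * opacity;
  return pixel.map((c, i) => c * (1 - a) + overlay[i] * a);
}

// Example: slider at 0 -> pure red overlay, applied at half opacity
// to a fully-masked gray pixel.
const overlay = hueToRgb(0);                              // [1, 0, 0]
const tinted = blendPixel([0.5, 0.5, 0.5], overlay, 1, 0.5); // [0.75, 0.25, 0.25]
```

Because the blend is weighted by the mask value, background pixels (mask = 0) pass through untouched, which is what confines the color change to the segmented object.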

Part 1: Building a model with Fritz AI Studio

For a more in-depth look at working with Fritz AI Studio, you can check out our Quickstart Guide; or, for an applied use case, read through our end-to-end cat detector tutorial.

Image segmentation models (like other computer vision models) require a lot of labeled data for training. We could always collect and manually annotate data, but that can be incredibly time-consuming, since we’d need to do this for thousands of images.

Fortunately, we can get started with a much smaller number of images, thanks to the synthetic data generation tool in Fritz AI Studio. This tool allows users to:

  • Easily upload and manually label a set of seed images.
  • Automatically apply image augmentation techniques specifically targeted at mobile devices.
  • Programmatically generate, from those seed images, ready-to-train datasets of thousands of images with accurately labeled keypoints, bounding boxes, class labels, or segmentation masks, all in a matter of minutes.
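The property that makes this work is that each augmentation applies the same geometric transform to a seed image and to its label, so the generated segmentation mask stays pixel-accurate without any extra annotation effort. Here is a toy sketch of that idea using a single augmentation (a horizontal flip) on a small image/mask pair; the function name is illustrative, not part of Fritz AI's API:

```javascript
// Flip a 2D array (image or mask) horizontally. Applying the same
// geometric transform to both the image and its mask keeps the
// segmentation labels aligned with the augmented image, which is
// what makes the generated annotations accurate "for free".
function flipHorizontal(grid) {
  return grid.map(row => row.slice().reverse());
}

// One seed image (grayscale values) and its segmentation mask
// (1 = target object, 0 = background).
const image = [
  [10, 20, 30],
  [40, 50, 60],
];
const mask = [
  [0, 1, 1],
  [0, 0, 1],
];

// Generate one synthetic training pair: transform image and mask together.
const augImage = flipHorizontal(image); // [[30, 20, 10], [60, 50, 40]]
const augMask  = flipHorizontal(mask);  // [[1, 1, 0], [1, 0, 0]]
```

A real pipeline composes many such transforms (rotations, crops, color shifts, background compositing) over every seed image, which is how a handful of labeled seeds turns into thousands of training examples.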


Building a Custom Face Mask Snapchat Lens with Fritz AI and Lens Studio