Using Google AutoML and React, I was able to set up a client-side object detection app without writing any custom model code.

Neural-network-based object detection is a powerful technique that’s getting easier and easier to take advantage of. With Google’s Cloud AutoML computer vision service (as well as similar services like Microsoft’s Custom Vision), it’s now simple and cheap to train a powerful object detection model and deploy it as a client-side React app. And best of all, you don’t need to hire a data scientist to do it — the model training is code-free, so any application developer can train a model and focus on doing what they do best, which is building useful and fun applications!

Given the current state of the world with COVID-19, I thought a mask detector would make an interesting test case — something that could take a video stream and report back the locations of people in a frame who are wearing masks, and of those who aren’t. This could be pretty useful for businesses trying to enforce mask mandates, and ride sharing services are already using something similar to check that drivers and riders are wearing masks.

This seemed like a great idea, except for the small detail that I didn’t actually know how to do that. Luckily I work at AE Studio, where we take an Agile approach to building both traditional software applications and data science solutions for our clients. So I talked with AE’s head of data science, Mr. Deep Learning himself, Ed Chen, to help me figure out what the simplest MVP could be.

What we found was that it’s now surprisingly simple for a single developer to build a high-quality on-device object detection system, without any special knowledge or large data sets. This app doesn’t snitch on anyone — it keeps all data on the client, and reacts when it detects a masked or unmasked face.

If you don’t want to peek behind the curtain, feel free to skip ahead and check out our full-fledged mask detector at doctormasky.com. Or, if you want to skip the explanations and jump into the code, the repo is on GitHub. Otherwise, read on!

End-to-end workflow

The end-to-end workflow for building a client-side object detector goes like this:

  1. Sign up for a Google AutoML account
  2. Find example images of the objects you want to detect. You can find these online, in a public dataset, or by taking them yourself.
  3. Upload the images to a Google Storage bucket, and label the dataset by drawing bounding boxes around the objects in the images.
  4. Google then uses that labeled data to create a model.
  5. You can deploy that model as an endpoint to send images to. Or, as in this case, you can export that model to another Google Storage bucket and use it for on-device detection within a web app.
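To make step 5 concrete, here is a minimal TypeScript sketch of consuming an exported model on-device. The `loadObjectDetection` and `detect` calls come from Google’s `@tensorflow/tfjs-automl` package (shown in comments since they need a browser and a real model); the model URL and the `mask` / `no_mask` label names are assumptions — they must match whatever class names you typed when labeling your dataset.

```typescript
// On-device inference with an exported AutoML model would look like:
//   import * as automl from '@tensorflow/tfjs-automl';
//   const model = await automl.loadObjectDetection('/model/model.json'); // hypothetical path
//   const detections = await model.detect(videoElement, { score: 0.5 });

// Shape of a single detection returned by the model.
interface Detection {
  label: string; // class name assigned during AutoML labeling
  score: number; // confidence in [0, 1]
  box: { left: number; top: number; width: number; height: number };
}

// Split the model's raw detections into masked vs. unmasked faces,
// dropping anything below a confidence threshold. The app can then
// react to each group (e.g. highlight unmasked faces in the frame).
function partitionDetections(
  detections: Detection[],
  minScore = 0.5
): { masked: Detection[]; unmasked: Detection[] } {
  const confident = detections.filter((d) => d.score >= minScore);
  return {
    masked: confident.filter((d) => d.label === 'mask'),
    unmasked: confident.filter((d) => d.label === 'no_mask'),
  };
}
```

Because everything above runs in the browser against a model file served as a static asset, no frames ever leave the client — which is what lets the app stay privacy-preserving.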


I’m not a data scientist but made a COVID mask detector with Google AutoML and React 