As the demand for data-driven products grows, the data science community has been rapidly developing solutions that let us apply the recent revolutionary advances in artificial intelligence across multiple platforms. In the early years of the so-called AI era, it was common to run a deep learning model from a standalone script. But as the scope of our problems and their requirements evolved, deep learning frameworks were ported to platforms such as IoT devices, mobile devices, and the browser.

To answer the demand for a battle-proven, browser-centric solution, in March 2018 the TensorFlow team released TensorFlow.js, a library aimed at web and JavaScript developers for developing and training machine learning models in JavaScript and deploying them right in the browser.

Like its larger and more complete counterpart, TensorFlow.js provides many tools and off-the-shelf models, such as MobileNet, that simplify the already complicated and time-consuming task of training a deep learning model from scratch. It provides the means to convert pre-trained TensorFlow models from Python into the TensorFlow.js format, supports transfer learning (a technique for fine-tuning pre-existing models with a small amount of custom data), and, through the ml5.js library, even offers a way to create ML solutions without dealing with low-level implementations.
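To give a feel for how high-level this can get, here is a minimal sketch of image classification with ml5.js, assuming ml5 is loaded in the page via a script tag (the image element id and the helper function are my own illustration, not part of any official example):

```javascript
// Assumes ml5.js is loaded in the page, e.g.:
// <script src="https://unpkg.com/ml5/dist/ml5.min.js"></script>

// Pick the highest-confidence prediction from ml5's result array.
function topResult(results) {
  return results.reduce((best, r) => (r.confidence > best.confidence ? r : best));
}

function classifyWithMl5(imgElement) {
  // 'MobileNet' tells ml5 to fetch the pre-trained MobileNet weights.
  const classifier = ml5.imageClassifier('MobileNet', () => {
    classifier.classify(imgElement, (err, results) => {
      if (err) return console.error(err);
      const best = topResult(results); // results: [{ label, confidence }, ...]
      console.log(`${best.label} (${(best.confidence * 100).toFixed(1)}%)`);
    });
  });
}
```

Note how the MobileNet download, preprocessing, and inference are all hidden behind two calls.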

In this article I will use MobileNetV1 from tfhub.dev. MobileNets are a class of small, low-latency, low-power models that can be used for classification, detection, and other common tasks for which convolutional neural networks are suitable. Due to their small size, they are considered excellent deep learning models for use on mobile devices.
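As a sketch of what using a TF Hub model looks like in plain TensorFlow.js (the exact model handle below is an assumption; check tfhub.dev for the MobileNetV1 variant you want, and verify its expected input scaling in the model's documentation):

```javascript
// Assumes TensorFlow.js is loaded in the page, e.g.:
// <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>

// Example TF Hub handle for a MobileNetV1 classification model.
const MODEL_URL =
  'https://tfhub.dev/google/tfjs-model/imagenet/mobilenet_v1_100_224/classification/1/default/1';

// Index of the largest value in a plain array (applied to the model's output).
function argMax(values) {
  let best = 0;
  for (let i = 1; i < values.length; i++) {
    if (values[i] > values[best]) best = i;
  }
  return best;
}

async function classify(imgElement) {
  const model = await tf.loadGraphModel(MODEL_URL, { fromTFHub: true });
  // MobileNetV1 takes 224x224 RGB inputs; scaling to [0, 1] is assumed here.
  const input = tf.tidy(() =>
    tf.browser.fromPixels(imgElement)
      .resizeBilinear([224, 224])
      .toFloat()
      .div(255)
      .expandDims(0)); // add batch dimension -> [1, 224, 224, 3]
  const scores = model.predict(input);
  const classIndex = argMax(await scores.data());
  console.log('Predicted ImageNet class index:', classIndex);
}
```

The `fromTFHub: true` option tells `tf.loadGraphModel` to resolve the TF Hub handle to the hosted model files.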

As a quick comparison, the size of the full VGG16 network on disk is about 553 megabytes. One of the largest MobileNet networks currently is around 17 megabytes in size, so that’s a huge difference, especially when you’re thinking of deploying a model in a mobile app or running it in a browser.

╔═══════════╦════════╦═════════════╗
║ Model     ║ Size   ║ Parameters  ║
╠═══════════╬════════╬═════════════╣
║ VGG16     ║ 553 MB ║ 138,000,000 ║
║ MobileNet ║ 17 MB  ║ 4,200,000   ║
╚═══════════╩════════╩═════════════╝

This huge difference in size is due to the number of parameters within these networks. For example, VGG16 has 138 million parameters, while the 17MB MobileNet mentioned above has only 4.2 million.
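A quick back-of-the-envelope check shows why: with 32-bit floating-point weights, each parameter costs 4 bytes, so model size on disk is roughly the parameter count times four (plus a little metadata):

```javascript
// Approximate on-disk size of a model with 32-bit (4-byte) float weights,
// in decimal megabytes.
function approxSizeMB(parameterCount, bytesPerParam = 4) {
  return (parameterCount * bytesPerParam) / 1e6;
}

console.log(approxSizeMB(138_000_000)); // VGG16: 552 MB, close to the 553 MB above
console.log(approxSizeMB(4_200_000));   // MobileNet: 16.8 MB, close to 17 MB
```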

Now, while MobileNets are faster and smaller than other large networks like VGG16, there is a tradeoff: accuracy. But don't let that disappoint you.

Yes, MobileNets are usually not as accurate as these larger, resource-demanding models, but they still perform very well, with only a relatively small reduction in accuracy. The original MobileNets paper elaborates on this tradeoff if you're interested in studying it further.

#tensorflowjs #deep-learning #data-science #image-classification

Number Hand Gestures Recognition using TensorFlow.js