Intro

I recently had to convert a deep learning model (a MobileNetV2 variant) from PyTorch to TensorFlow Lite. It was a long, complicated journey that involved jumping through a lot of hoops to make it work. I found myself collecting pieces of information from Stack Overflow posts and GitHub issues. My goal is to share my experience in the hope of helping someone else who is lost like I was.

DISCLAIMER: This is not a guide on how to properly do this conversion. I only wish to share my experience. I might have done it wrong (especially because I have no experience with TensorFlow). If you notice something that I could have done better or differently, please comment and I'll update the post accordingly.

The Mission

Convert a deep learning model (a MobileNetV2 variant) from PyTorch to TensorFlow Lite. The conversion process should be:

PyTorch → ONNX → TensorFlow → TFLite
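For orientation, here is a minimal sketch of that chain, assuming a standard MobileNetV2-style input of shape (1, 3, 224, 224) and placeholder file names; it uses torch.onnx.export, the onnx-tf `prepare` backend, and tf.lite.TFLiteConverter. Depending on your onnx-tf version, `export_graph` may produce a SavedModel directory or a frozen graph, so treat this as a starting point rather than a fixed recipe.

```python
import torch
import onnx
from onnx_tf.backend import prepare
import tensorflow as tf

# PyTorch → ONNX: export the trained model with a fixed-size dummy input.
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)  # batch dimension included
torch.onnx.export(
    model, dummy_input, "model.onnx",
    opset_version=11, input_names=["input"], output_names=["output"],
)

# ONNX → TensorFlow: load the ONNX graph and export it as a SavedModel.
onnx_model = onnx.load("model.onnx")
tf_rep = prepare(onnx_model)
tf_rep.export_graph("model_tf")

# TensorFlow → TFLite: convert the SavedModel to a .tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("model_tf")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```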

Tests

To test the converted models, a set of roughly 1,000 input tensors was generated, and the PyTorch model's output was calculated for each. That set was then used to test each of the converted models by comparing their outputs against the original outputs via a mean error metric over the entire set. The mean error reflects how different the converted model's outputs are from the original PyTorch model's outputs on the same input.

I decided to treat a model with a mean error smaller than 1e-6 as a successfully converted model.
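As a rough illustration of that check (the input shape and model names are assumptions on my part, and onnx-tf typically preserves the NCHW layout, so the same tensors can be fed to both models), the comparison boiled down to something like:

```python
import numpy as np
import torch
import tensorflow as tf

# ~1,000 random inputs, each with an explicit batch dimension of 1.
inputs = [torch.randn(1, 3, 224, 224) for _ in range(1000)]

# Reference outputs from the original PyTorch model.
model.eval()
with torch.no_grad():
    reference = [model(x).numpy() for x in inputs]

# Outputs from the converted model via the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

errors = []
for x, ref in zip(inputs, reference):
    interpreter.set_tensor(input_detail["index"], x.numpy())
    interpreter.invoke()
    out = interpreter.get_tensor(output_detail["index"])
    errors.append(np.mean(np.abs(out - ref)))

mean_error = np.mean(errors)
print(f"mean error: {mean_error:.2e}")  # conversion passes if < 1e-6
```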

It might also be important to note that I added the batch dimension to the tensors, even though it was 1. I had no reason for doing so other than a hunch from my previous experience converting PyTorch to DLC models.
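Concretely, that just means keeping a leading dimension of 1 on every test tensor (again assuming a 3×224×224 input):

```python
import torch

sample = torch.randn(3, 224, 224)  # single image, CHW layout
batched = sample.unsqueeze(0)      # add batch dimension → shape (1, 3, 224, 224)
```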

#mlops #tensorflow #onnx #pytorch #tflite
