With TorchServe, Facebook and AWS continue to narrow the gap between Machine Learning research and production.

In recent years, PyTorch has largely overtaken TensorFlow as the preferred model training framework for research-leaning data scientists. There are a few reasons for this, but chiefly that PyTorch treats Python as its first-class language, whereas TensorFlow's architecture stays much closer to its C/C++ core. Although both frameworks have C/C++ cores, PyTorch does far more to make its interface "pythonic". For those unfamiliar with the term, it basically means the code is easy to understand and doesn't feel like an adversarial teacher wrote it for an exam question.

PyTorch's cleaner interface has resulted in mass adoption among folks whose main priority is to quickly turn their planned analyses into actionable results. We don't have time to wrestle with a static computation graph API that makes us feel like criminals for wanting to place a breakpoint and debug a tensor's value. We need to prototype something and get moving onto the next experiment.
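As a minimal sketch of what that looks like in practice: because PyTorch executes eagerly, every line runs immediately, so ordinary Python debugging tools work on intermediate tensors (the tensor shapes here are arbitrary examples).

```python
import torch

# Eager execution: each line runs as soon as it's hit, so intermediate
# tensors can be inspected with plain print() or pdb.
x = torch.randn(4, 8)
w = torch.randn(8, 3, requires_grad=True)

h = x @ w            # pause right here with pdb.set_trace() and poke at h
y = torch.relu(h)    # relu clamps every negative entry of h to zero

print(h.shape, y.min().item())
```

No graph compilation step, no session to run; the breakpoint lands exactly where you put it.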

It’s Not All Sunshine & Rainbows

However, this advantage comes at a cost. With PyTorch, we can easily churn through experiment after experiment, tossing results over the fence to be put into production at a similar speed. Sadly, getting these models serving in production has been slower than the experimentation throughput, due to a lack of production-ready frameworks that encapsulate away API complexity. At least, such frameworks have been missing for PyTorch.

TensorFlow has long had a truly impressive model serving framework, TFX. Truthfully, there's not much missing from it, provided that you are knee-deep in the TensorFlow ecosystem. If you have a TF model and are using Google Cloud, use TFX until it breathes its last breath. If you are not in that camp, combining TFX and PyTorch has been anything but plug and play.

A New Hope

Fear no more! PyTorch's 1.5 release brings the initial version of TorchServe, as well as experimental support for TorchElastic with Kubernetes for large-scale model training. Software powerhouses Facebook and AWS continue to supercharge PyTorch's capabilities and provide a competitive alternative to Google's TensorFlow-based software pipelines.

TorchServe is a flexible and easy-to-use library for serving PyTorch models in production performantly at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration.

- PyTorch Docs

Let's unpack those qualities, because they're worth a closer look:

  • Serve PyTorch models in production performantly at scale
  • Cloud and environment agnostic
  • Multi-model serving
  • Logging
  • Metrics
  • RESTful endpoints

Any ML model in production that has these characteristics is bound to make your fellow engineers very happy.
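To give a feel for the workflow, here is a rough sketch of serving a trained model with TorchServe: package it with `torch-model-archiver`, point the server at the resulting archive, and hit the RESTful inference endpoint. The model name and file paths below are placeholders, not anything from a real deployment.

```shell
# Package a trained model into a .mar archive (names and paths illustrative).
torch-model-archiver --model-name my_model \
    --version 1.0 \
    --serialized-file my_model.pt \
    --handler image_classifier \
    --export-path model_store

# Start the server against the model store; by default the inference API
# listens on port 8080 and the management API on 8081.
torchserve --start --model-store model_store --models my_model=my_model.mar

# Query the RESTful inference endpoint with a sample input.
curl http://127.0.0.1:8080/predictions/my_model -T example_input.jpg
```

That `--models` flag is also where multi-model serving comes in: register several `name=archive.mar` pairs and each model gets its own prediction endpoint.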

#machine-learning #artificial-intelligence #data-science #pytorch #python

PyTorch Levels Up Its Serving Game with TorchServe