All the code used in this article is available here

Recently, PyTorch introduced torchserve, its new framework for properly serving models in production. So, without further ado, let’s go through today’s roadmap:

  1. Installation with Docker
  2. Export your model
  3. Define a handler
  4. Serve our model

To showcase torchserve, we will serve a fully trained ResNet34 to perform image classification.
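
As a quick preview of the export step, here is a minimal sketch of how such a model could be exported to TorchScript (assuming torchvision; the file name resnet34.pt is an arbitrary choice):

import torch
from torchvision.models import resnet34

# Load a ResNet34 pretrained on ImageNet and export it to TorchScript;
# the resulting .pt file is what we will later package for torchserve.
model = resnet34(pretrained=True).eval()
traced = torch.jit.trace(model, torch.randn(1, 3, 224, 224))
traced.save("resnet34.pt")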

Installation with Docker

Official doc here

The best way to install torchserve is with Docker: you just need to pull the image.

You can use the following command to pull the latest image.

docker pull pytorch/torchserve:latest

All the tags are available here

More about Docker and torchserve here
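
Once the image is pulled, you can start a container; 8080 and 8081 below are torchserve’s default inference and management ports:

docker run --rm -it -p 8080:8080 -p 8081:8081 pytorch/torchserve:latest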

Handlers

Official doc here

Handlers are responsible for making predictions with your model from one or more HTTP requests.

Default handlers

Torchserve supports the following default handlers:

  1. image_classifier
  2. object_detector
  3. text_classifier
  4. image_segmenter

But keep in mind that none of them supports batching requests!
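
A default handler is selected by name when packaging the model with torch-model-archiver. A sketch, assuming the TorchScript file resnet34.pt exported earlier and an existing model_store directory:

torch-model-archiver --model-name resnet34 --version 1.0 --serialized-file resnet34.pt --handler image_classifier --export-path model_store

Since the model is TorchScript, no extra model definition file is needed here.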

Custom handlers

torchserve exposes a rich interface that lets you do almost anything you want. A handler is just a class that must implement three methods:

  • preprocess
  • inference
  • postprocess

You can create your own class or just subclass BaseHandler. The main advantage of subclassing BaseHandler is having the loaded model accessible at self.model. The following snippet shows how to subclass BaseHandler.

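A minimal sketch of such a subclass (the class name and the placeholder bodies are illustrative):

import torch
from ts.torch_handler.base_handler import BaseHandler

class MyHandler(BaseHandler):
    def preprocess(self, data):
        # turn the raw HTTP payloads into a model-ready tensor
        ...

    def inference(self, x):
        # self.model has already been loaded for us by BaseHandler
        with torch.no_grad():
            return self.model(x)

    def postprocess(self, preds):
        # torchserve expects a list with one entry per request
        ...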

Subclassing BaseHandler to create your own handler

Going back to our image classification example, we need to do three things, sketched in the full handler after this list:

  • get the images from each request and preprocess them
  • get the prediction from the model
  • send back a response
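
Putting the three steps together, a sketch of the full handler might look like this (the class name ResNet34Handler and the ImageNet transform values are assumptions; the "data"/"body" keys are where torchserve places each request’s payload):

import io
import torch
from PIL import Image
from torchvision import transforms
from ts.torch_handler.base_handler import BaseHandler

class ResNet34Handler(BaseHandler):
    # standard ImageNet preprocessing for ResNet-style models
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def preprocess(self, data):
        # decode the image bytes from each request and stack them into a batch
        images = []
        for row in data:
            payload = row.get("data") or row.get("body")
            image = Image.open(io.BytesIO(payload)).convert("RGB")
            images.append(self.transform(image))
        return torch.stack(images).to(self.device)

    def inference(self, batch):
        # self.model was loaded for us by BaseHandler.initialize
        with torch.no_grad():
            return self.model(batch)

    def postprocess(self, outputs):
        # one predicted class index per request in the batch
        return outputs.argmax(dim=1).tolist()

Each method maps directly to one bullet in the list above.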
