What is Tensorflow Serving?

One of the Tensorflow features that I personally think is undervalued is the ability to serve Tensorflow models. At the time of writing this post, the tool that helps you do that is called Tensorflow Serving, and it is part of the Tensorflow Extended ecosystem, or TFX for short.

During the first releases of Tensorflow Serving, I found the documentation somewhat daunting, and it was full of concepts that a data scientist is not used to working with: servables, sources, loaders, managers… All of these elements are part of the Tensorflow Serving architecture.

So, as a gentle introduction, I will show you how to build a REST API with Tensorflow Serving, from saving a Tensorflow object (this is what we call a servable) to testing the API endpoint. This first part explains how to create and save Tensorflow objects that are ready to be put into production. To set the goal, the sketch below previews the kind of request we will be able to send by the end.
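A minimal sketch, assuming a model already deployed behind Tensorflow Serving's default REST port (8501); the model name `my_model` and the four-feature input are hypothetical placeholders:

```python
import json

import requests  # third-party HTTP client, used here only for illustration

# Tensorflow Serving exposes deployed models over REST at
# /v1/models/<model_name>:predict (port 8501 by default).
# "my_model" and the input values below are placeholders.
url = "http://localhost:8501/v1/models/my_model:predict"
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}

response = requests.post(url, data=json.dumps(payload))
print(response.json())  # a JSON body of the form {"predictions": [...]}
```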

The meaning of servables

Functions, embeddings or saved models are some of the objects that can be used as servables. But how do we define those servables in Tensorflow?

Well, that is up to you, but they must be saveable in what is called the SavedModel format. This format keeps all the components of a Tensorflow object in the same state, so the object can be restored exactly as it was when loaded in a new environment. What are these components? The relevant ones are the weights, the computation graph, additional assets (such as vocabulary files), and so on.
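For reference, a SavedModel is just a directory on disk. A sketch of its layout, assuming a hypothetical export path of `my_model/1/` (the numbered subdirectory is the version folder that Tensorflow Serving expects):

```
my_model/1/
├── saved_model.pb        # the serialized graph and its signatures
├── variables/
│   ├── variables.data-00000-of-00001   # the weight values
│   └── variables.index
└── assets/               # extra files the graph depends on, e.g. vocabularies
```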

The module used to save Tensorflow objects is tf.saved_model, and as we'll see shortly, it is simple to use. For now, let's see how we can generate two types of servables (a minimal sketch follows the list):

  • Tensorflow functions
  • Keras models
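A minimal sketch of both cases using tf.saved_model.save in Tensorflow 2.x; the module, layer sizes, and export paths are hypothetical placeholders:

```python
import tensorflow as tf

# Servable 1: a plain Tensorflow function. Wrapping it in a tf.Module
# and giving it an input_signature yields a concrete, servable signature.
class Adder(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
    def __call__(self, x):
        return x + 2.0

tf.saved_model.save(Adder(), "servables/adder/1")

# Servable 2: a Keras model, exported with the very same API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
tf.saved_model.save(model, "servables/regressor/1")
```

Note the numbered version folder at the end of each export path: Tensorflow Serving watches these folders and picks up new model versions as they appear.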
