SageMaker is a fully managed machine learning service that lets you build models using built-in algorithms, with native support for bring-your-own algorithms and ML frameworks such as Apache MXNet, PyTorch, SparkML, TensorFlow, and scikit-learn.

In this post, I’ll walk through how to deploy a machine learning model that is built and trained locally with a custom algorithm as a REST API, using SageMaker, Lambda, and Docker.

I’ll break the process into five steps:

  • Step 1: Building the model and saving the artifacts.
  • Step 2: Defining the server and inference code.
  • Step 3: Building a SageMaker container.
  • Step 4: Creating the Model, Endpoint Configuration, and Endpoint.
  • Step 5: Invoking the model using a Lambda function with an API Gateway trigger.

Step 1: Building the model and saving the artifacts.

Build the model and serialize the object that will be used for prediction. In this post, I’m using a simple linear regression with one independent variable.
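For instance, here is a minimal sketch of that training step, assuming scikit-learn and some illustrative toy data:

```python
# train_local.py — build a one-variable linear regression and pickle it.
import pickle

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: one independent variable (values are illustrative).
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([2.0, 4.1, 6.0, 8.2, 10.1])

model = LinearRegression()
model.fit(X, y)

# Serialize the fitted model so the inference container can load it later.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```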

Once you serialize the Python object to a pickle file, package that artifact (the pickle file) as a tar.gz archive and upload it to an S3 bucket.
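A sketch of the packaging step, using the standard library’s tarfile module and boto3 (the bucket and key names below are placeholders):

```python
# package_and_upload.py — wrap the pickle in a tar.gz and push it to S3.
import tarfile

import boto3

# SageMaker expects the model artifact as a gzipped tarball.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model.pkl")

# Upload to S3; replace the bucket and key with your own.
s3 = boto3.client("s3")
s3.upload_file(
    "model.tar.gz",
    "my-sagemaker-artifacts-bucket",
    "linear-regression/model.tar.gz",
)
```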


Step 2: Defining the server and inference code.

When an endpoint is invoked, SageMaker interacts with the Docker container, which runs the inference code for hosting services, processes the request, and returns the response. Containers need to implement a web server that responds to /invocations and /ping on port 8080.

The inference code in the container receives GET requests on /ping from the SageMaker infrastructure. It should respond with an HTTP 200 status code and an empty body, which indicates that the container is ready to accept inference requests at the /invocations endpoint.

The /invocations endpoint receives POST requests and responds in the format specified by the algorithm.
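Putting the two routes together, here is a minimal sketch of the inference code. The module name predictor.py and the CSV request format are assumptions for illustration; the artifact path reflects the fact that SageMaker extracts the model artifact under /opt/ml/model:

```python
# predictor.py — a minimal sketch of the inference server.
# Assumes the pickled model is at /opt/ml/model/model.pkl and that
# requests carry one numeric feature per line as text/csv.
import pickle

import flask

app = flask.Flask(__name__)

with open("/opt/ml/model/model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/ping", methods=["GET"])
def ping():
    # Health check: HTTP 200 with an empty body tells SageMaker
    # the container is ready to serve /invocations.
    return flask.Response(response="", status=200, mimetype="application/json")

@app.route("/invocations", methods=["POST"])
def invocations():
    # Parse one feature value per line, predict, and return CSV.
    data = flask.request.data.decode("utf-8")
    rows = [[float(v)] for v in data.strip().split("\n")]
    predictions = model.predict(rows)
    result = "\n".join(str(p) for p in predictions)
    return flask.Response(response=result, status=200, mimetype="text/csv")
```

Responding to /ping quickly matters: SageMaker uses it for health checks before routing traffic to the container.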

To expose the model as a REST API, you need Flask, a WSGI (Web Server Gateway Interface) application framework; Gunicorn, a WSGI server; and nginx, the reverse proxy and load balancer.
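The container’s serve entry point just has to start that stack. A stripped-down sketch that runs Gunicorn directly on port 8080 (the full setup in the repo linked below also puts nginx in front as the reverse proxy):

```python
#!/usr/bin/env python
# serve — executed when SageMaker starts the container for hosting.
# Minimal sketch: run Gunicorn on port 8080 serving the Flask app
# defined in predictor.py (module name follows the sketch above;
# Gunicorn must be installed in the container image).
import subprocess

subprocess.check_call(
    ["gunicorn", "--bind", "0.0.0.0:8080", "--workers", "2", "predictor:app"]
)
```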

Code: https://github.com/NareshReddyy/Sagemaker_deploy_own_model.git

Step 3: Building a SageMaker container.

SageMaker uses Docker containers extensively. You package your scripts, algorithms, and inference code into containers, along with the runtime, system tools, libraries, and any other code needed to deploy your model. This gives you the flexibility to run your own model.

You create Docker containers from images that are saved in a repository. You build the images from scripted instructions provided in a Dockerfile.
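Keeping with Python, here is a hedged sketch of building the image and pushing it to Amazon ECR using the docker SDK and boto3; the repository name is a placeholder, and the same step is often done with a short shell script instead:

```python
# build_and_push.py — build the inference image and push it to ECR.
# Assumes a Dockerfile in the current directory and configured AWS
# credentials; the repository name is a placeholder.
import base64

import boto3
import docker

REPO_NAME = "sagemaker-linear-regression"

docker_client = docker.from_env()
image, _ = docker_client.images.build(path=".", tag=f"{REPO_NAME}:latest")

ecr = boto3.client("ecr")
try:
    repo = ecr.create_repository(repositoryName=REPO_NAME)["repository"]
except ecr.exceptions.RepositoryAlreadyExistsException:
    repo = ecr.describe_repositories(repositoryNames=[REPO_NAME])["repositories"][0]
repo_uri = repo["repositoryUri"]

# Log Docker in to ECR using a temporary authorization token.
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
docker_client.login(username=user, password=password, registry=auth["proxyEndpoint"])

image.tag(repo_uri, tag="latest")
docker_client.images.push(repo_uri, tag="latest")
```

The resulting image URI is what you reference when creating the SageMaker Model in step 4.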
