The easiest way to let people interact with your model is to use AWS Lambda and API Gateway to set up an API that forwards POST requests to the model.
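As a rough sketch, the Lambda function behind API Gateway can simply forward the request body to the SageMaker endpoint with boto3. The endpoint name below is hypothetical, and the payload helper is split out so it can be tested without AWS credentials:

```python
import json


def build_payload(event):
    """Extract the inference payload from an API Gateway proxy event."""
    body = event.get("body") or "{}"
    if isinstance(body, str):
        body = json.loads(body)
    return json.dumps(body)


def lambda_handler(event, context):
    # boto3 ships with the Lambda runtime; imported lazily here so the
    # pure helper above can be exercised locally without AWS access.
    import boto3

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName="my-custom-model-endpoint",  # hypothetical endpoint name
        ContentType="application/json",
        Body=build_payload(event),
    )
    result = response["Body"].read().decode("utf-8")
    return {
        "statusCode": 200,
        # CORS header so a browser-based front-end can call this API
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": result,
    }
```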

Finally, having an API for sending requests to and receiving inference responses from the model is great, but a simple, nice-looking web app for users to interact with the data is even better. This is where hosting a static website with S3 and accessing the endpoint with AJAX comes into play.
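Before wiring up the AJAX call from the static site, the same POST request can be exercised from Python. A minimal sketch, assuming a hypothetical API Gateway invoke URL:

```python
import json
import urllib.request

# Hypothetical invoke URL -- substitute your deployed API Gateway stage's URL.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/predict"


def build_request(features):
    """Package a feature dict as a JSON POST request for the API."""
    data = json.dumps(features).encode("utf-8")
    return urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )


def predict(features):
    """Send the request and parse the model's JSON response."""
    with urllib.request.urlopen(build_request(features)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The browser-side AJAX call is the same idea: a JSON POST to the invoke URL, which is why the Lambda response needs a CORS header.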

All of these AWS components working in unison make for a sleek development and inference-serving framework. This tutorial also covers using AWS CloudWatch to understand ambiguous errors, such as 500 Internal Server Errors, from the custom model.
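For example, a SageMaker endpoint's container logs land in a predictably named CloudWatch log group, which can be searched with boto3 when chasing down a 500. A sketch, with a hypothetical `recent_errors` helper:

```python
import time


def endpoint_log_group(endpoint_name):
    """SageMaker writes endpoint container logs under this log-group prefix."""
    return f"/aws/sagemaker/Endpoints/{endpoint_name}"


def recent_errors(endpoint_name, minutes=30):
    """Pull log lines matching 'Error' from the last N minutes."""
    import boto3  # lazy import: the helper above needs no AWS access

    logs = boto3.client("logs")
    start = int((time.time() - minutes * 60) * 1000)  # epoch milliseconds
    events = logs.filter_log_events(
        logGroupName=endpoint_log_group(endpoint_name),
        startTime=start,
        filterPattern="Error",
    )
    return [e["message"] for e in events.get("events", [])]
```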

Check out the repo with some of the code mentioned below here.

Docker Model

To deploy a custom model with SageMaker, it must be wrapped by SageMaker’s Estimator class. This can be done by creating a Docker image that interfaces with this class.
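A rough sketch of how the custom image plugs into the Estimator, using the SageMaker Python SDK; the image URI, role ARN, and S3 path below are placeholders, and the container is assumed to implement SageMaker's `train` and `serve` entry points:

```python
def ecr_image_uri(account_id, region, repo, tag="latest"):
    """Build the ECR URI where the custom Docker image is pushed."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"


def train_and_deploy(image_uri, role_arn, train_s3_uri):
    """Train with the custom container, then stand up an HTTPS endpoint."""
    from sagemaker.estimator import Estimator  # lazy import: needs the SDK

    estimator = Estimator(
        image_uri=image_uri,  # e.g. ecr_image_uri("123456789012", "us-east-1", "my-model")
        role=role_arn,
        instance_count=1,
        instance_type="ml.m5.large",
    )
    estimator.fit({"train": train_s3_uri})  # container runs its `train` entry point
    return estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```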

Also, check out this post if you don’t know what Docker is and why it’s so important nowadays.

Install Docker

Install Docker for your respective OS with these links: Mac, Ubuntu, and Windows. Windows Home edition is a bit more difficult, so I will cover some of the steps here, since Windows is the OS I use. Follow these steps first and see how far you get, and refer to the steps below if you get stuck.

WSL 2 is required in Windows Home edition to run Docker Desktop.


Deploying a Custom Docker Model with SageMaker to a Serverless Front-end with S3