Deploying TensorFlow Models at the Edge with NVIDIA Jetson Nano

In this tutorial, we will explore the idea of running TensorFlow models as microservices at the edge. Jetson Nano, a powerful edge computing device, will run the K3s distribution from Rancher Labs. It can form a single-node K3s cluster on its own or join an existing K3s cluster as an agent.

For background, refer to my previous article on Jetson Nano and configuring it as an AI testbed.

For completeness, we will run a single-node K3s server on the Jetson Nano, as sketched below. If you want to turn it into an agent instead, follow the steps covered in one of the previous articles in the K3s series.
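If you want to try this on a fresh device, a minimal single-node setup looks roughly like the following sketch. The --docker flag is worth calling out: it tells K3s to use the Docker runtime we configure in Step 1 instead of its bundled containerd, which is what lets pods reach the GPU.

# Install K3s as a single-node server backed by Docker
curl -sfL https://get.k3s.io | sh -s - --docker

# Verify that the node has registered and is Ready
sudo k3s kubectl get nodes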

Step 1: Configure Docker Runtime

The Jetson platform from NVIDIA runs L4T (Linux for Tegra), a Linux distribution based on Ubuntu 18.04. The OS, along with the CUDA-X drivers and SDKs, is packaged into JetPack, a comprehensive software stack for the Jetson family of products such as the Jetson Nano and Jetson Xavier.

Starting with JetPack 4.2, NVIDIA introduced a container runtime with Docker integration. This custom runtime enables Docker containers to access the underlying GPU on Jetson devices.
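To let every container use the GPU by default (including pods launched by K3s, which cannot easily pass per-container runtime flags), you can register the NVIDIA runtime as Docker's default. The snippet below is a sketch based on NVIDIA's documented Jetson setup and assumes a stock JetPack install:

# Make nvidia-container-runtime the default Docker runtime
sudo tee /etc/docker/daemon.json <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF

# Restart Docker so the new configuration takes effect
sudo systemctl restart docker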

Start by downloading the most recent version of JetPack and flashing your Jetson Nano with it.

Check the version of the NVIDIA Docker runtime with the following command:

nvidia-docker version
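To double-check that Docker has picked up the NVIDIA runtime, you can inspect the daemon and, optionally, start a test container. The l4t-base image tag below is an assumption; use the tag that matches the L4T release bundled with your JetPack version:

# "nvidia" should appear in the list of runtimes
sudo docker info | grep -i runtime

# The NVIDIA runtime mounts CUDA from the host into l4t-base containers
sudo docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.4.3 ls /usr/local/cuda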


