In this tutorial, we will explore the idea of running TensorFlow models as microservices at the edge. Jetson Nano, a powerful edge computing device, will run the K3s distribution from Rancher Labs. It can run as a single-node K3s cluster or join an existing K3s cluster as an agent.

For background, refer to my previous article on Jetson Nano and configuring it as an AI testbed.

For the completeness of the tutorial, we will run a single-node K3s cluster on Jetson Nano. If you want to turn that node into an agent, follow the steps covered in one of the previous articles in the K3s series.
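As a preview of the setup, here is a minimal sketch of the K3s installation using Rancher's install script. One assumption worth flagging: K3s uses containerd by default, and the --docker flag switches it to the Docker runtime, which is what lets pods reach the GPU through the NVIDIA runtime configured in Step 1. The server URL and token in the agent variant are placeholders.

# Single-node K3s server backed by the Docker runtime
curl -sfL https://get.k3s.io | sh -s - --docker

# Or join an existing cluster as an agent (replace the placeholders)
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -s - --docker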

Step 1: Configure Docker Runtime

The Jetson platform from NVIDIA runs L4T (Linux for Tegra), a Linux distribution based on Ubuntu 18.04. The OS, along with the CUDA-X drivers and SDKs, is packaged into JetPack, a comprehensive software stack for the Jetson family of products such as Jetson Nano and Jetson Xavier.

Starting with JetPack 4.2, NVIDIA has introduced a container runtime with Docker integration. This custom runtime enables Docker containers to access the underlying GPUs available in the Jetson family.
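On a JetPack-flashed device, the runtime is registered with Docker through /etc/docker/daemon.json. A typical configuration looks like the snippet below; setting "default-runtime" to "nvidia" is optional but convenient for Kubernetes workloads, since it removes the need to pass --runtime nvidia on every container:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

Restart Docker after editing the file:

sudo systemctl restart docker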

Start by downloading the most recent version of JetPack and flashing your Jetson Nano device with it.
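Once the device boots, you can confirm which L4T release JetPack installed; the /etc/nv_tegra_release file is standard on L4T images:

head -n 1 /etc/nv_tegra_release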

Check the version of the Docker runtime with the command below:

nvidia-docker version
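The output should report both the Docker version and the NVIDIA container runtime components. As an additional sanity check (not part of the original steps), docker info lists the runtimes registered with the daemon, and nvidia should appear alongside the default runc:

sudo docker info | grep -i runtime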
