Deploy and Manage TensorFlow Workloads at the Edge via Kubernetes K3s on a Jetson Nano

With the nvidia-docker runtime, TensorFlow workloads can be deployed and managed as K3s pods on the Jetson Nano.

NVIDIA's Jetson Nano is a low-power, all-in-one device built for running compute-hungry AI projects at the edge, and The New Stack's Janakiram MSV has published a guide that makes that process easier by deploying TensorFlow models through the lightweight K3s Kubernetes distribution.

"Jetson Nano, a powerful edge computing device, will run the K3s distribution from Rancher Labs," MSV explains. "It can be a single node K3s cluster or join an existing K3s cluster just as an agent."

The process of getting K3s up and running to take advantage of the compute performance available in the Jetson Nano isn't, however, entirely straightforward. "The default container runtime in K3s is containerd, an industry-standard container runtime," MSV notes. "This means that Docker CE and K3s will not share the same configuration and images. For the AI workloads running in K3s, we need access to the GPU which is available only through the nvidia-docker runtime."
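As a rough sketch of what that involves, Docker on a stock JetPack image can be told to use the NVIDIA runtime by default through /etc/docker/daemon.json. The snippet below assumes nvidia-container-runtime is already installed, which is the case on JetPack:

    {
        "default-runtime": "nvidia",
        "runtimes": {
            "nvidia": {
                "path": "nvidia-container-runtime",
                "runtimeArgs": []
            }
        }
    }

Restarting the Docker daemon (sudo systemctl restart docker) then picks up the change, after which every container started by Docker has access to the GPU.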

The tutorial walks through configuring Docker with the nvidia-docker runtime, installing K3s with the configuration set to use the existing Docker runtime rather than containerd, and then deploying TensorFlow as a pod in the single-node cluster. Naturally, it's also possible to expand the cluster — providing a low-cost and low-energy way to play around with GPU-accelerated cluster computing via Kubernetes.
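As an illustrative sketch of those steps rather than the tutorial's exact commands, K3s can be installed against the existing Docker daemon with its --docker flag:

    # Install K3s, reusing the local Docker runtime instead of containerd
    curl -sfL https://get.k3s.io | sh -s - --docker

A TensorFlow pod can then be scheduled from a manifest along these lines; the image tag is a placeholder, and in practice you would pick the l4t-tensorflow image matching your JetPack/L4T release:

    # tensorflow-pod.yaml: a minimal pod running an L4T TensorFlow image
    apiVersion: v1
    kind: Pod
    metadata:
      name: tensorflow
    spec:
      containers:
      - name: tensorflow
        # Illustrative tag; choose the one matching your JetPack version
        image: nvcr.io/nvidia/l4t-tensorflow:r32.4.3-tf2.2-py3
        command: ["sleep", "infinity"]

Applying it with the kubectl bundled in K3s (sudo k3s kubectl apply -f tensorflow-pod.yaml) schedules the pod on the node, where the nvidia-docker runtime gives it access to the GPU.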

The full tutorial is now available on The New Stack.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.