AI Server And Hosted Jetson Use Cases

Let’s continue our look at Edge AI Servers and why it makes sense to use Nvidia Jetson Nanos and Xaviers as small, cost-effective machine learning endpoints.

As many startups and developers already know, AI Servers from the major cloud vendors are very expensive. Here is a quick sample of prices, at the time of this writing (July 2021), for a few different EC2 instance types:

| Type | Hourly Price | vCPU | RAM | Storage |
|------|--------------|------|-----|---------|
| p3.2xlarge | $3.06 | 8 | 61 GiB | EBS Only |
| g4dn.xlarge | $0.53 | 4 | 16 GiB | 125 GB NVMe SSD |
| g4ad.8xlarge | $1.73 | 32 | 128 GiB | 1200 GB NVMe SSD |
| p3.8xlarge | $12.24 | 32 | 244 GiB | EBS Only |
| g3s.xlarge | $0.75 | 4 | 30.5 GiB | EBS Only |

P3 instances include V100 Tensor Core GPUs, G4dn instances include T4 GPUs, and G4ad instances include Radeon Pro V520 GPUs.

Over on the Azure site, here is a pair of the current options and prices:

| Size | Hourly Price | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | GPUs | GPU memory (GiB) |
|------|--------------|------|--------------|-------------------------|------|------------------|
| Standard_NC4as_T4_v3 | $0.63 | 4 | 28 | 180 | 1 | 16 |
| Standard_NC6s_v3 | $3.98 | 6 | 112 | 736 | 1 | 16 |

You can save some money if you commit to a 1-year or 3-year term, but the regular “On-Demand” prices start at $374.40 per month on AWS or $460 per month on Azure for the smallest server, and go up dramatically from there. The p3.8xlarge, for example, is currently nearly $9,000 per month. Many others are in the $3,000 to $5,000 per month range.
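As a quick sanity check on those monthly figures, here is a short Python sketch that converts the hourly rates from the tables above into monthly estimates, assuming a 720-hour month; actual cloud billing and the vendors’ published monthly rates round slightly differently, so treat the output as approximate.

```python
# Back-of-the-envelope conversion of the hourly rates above into monthly
# estimates, assuming a 720-hour month (30 days x 24 hours). Published
# "On-Demand" prices round slightly differently, so these are approximate.
HOURS_PER_MONTH = 30 * 24  # 720

hourly_rates = {
    "g4dn.xlarge": 0.53,           # cheapest EC2 option in the table above
    "p3.2xlarge": 3.06,
    "p3.8xlarge": 12.24,
    "Standard_NC4as_T4_v3": 0.63,  # cheapest Azure option in the table above
}

for name, rate in hourly_rates.items():
    print(f"{name}: ${rate * HOURS_PER_MONTH:,.2f} per month")
```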

If your use case involves training a model or building an algorithm and then deploying it to Edge AI devices, the horsepower of one of those machines could certainly make sense as the first step in the Edge AI deployment process. Using the Nvidia Transfer Learning Toolkit (TLT) as one example, the workflow consists of leveraging existing publicly available models, adding your own custom classes and dataset, training on big hardware such as V100 and T4 GPUs, and then deploying the resulting model onto smaller devices such as Jetsons. Testing your TLT output on a hosted Jetson, prior to deploying to all the devices out in the field, would be wise. Thus, a hosted Nano or Xavier built into your overall CI/CD pipeline, to QA and ensure functionality prior to deployment to the whole fleet, is a great use case.
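To make that QA-gate idea concrete, here is a minimal sketch of what such a pipeline step might look like. The hostname, user, model filename, and smoke-test script are all hypothetical placeholders, not part of any TLT tooling; the point is simply that a hosted Jetson can sit between training and fleet rollout as an automated pass/fail check.

```python
# Hypothetical CI/CD smoke-test step: push a newly trained model to a
# hosted Jetson and run a quick inference check before fleet rollout.
# Hostname, paths, and the test script name are illustrative placeholders.
import subprocess
import sys

JETSON_HOST = "jetson-qa.example.com"  # hosted Jetson used as the QA gate
MODEL_FILE = "resnet18_detector.etlt"  # exported TLT model artifact
REMOTE_DIR = "/home/qa/models"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # Copy the exported model to the hosted Jetson.
    run(["scp", MODEL_FILE, f"qa@{JETSON_HOST}:{REMOTE_DIR}/"])
    # Run a smoke test on the device; a nonzero exit fails the pipeline.
    run(["ssh", f"qa@{JETSON_HOST}",
         f"python3 ~/smoke_test.py --model {REMOTE_DIR}/{MODEL_FILE}"])
    print("Smoke test passed; safe to roll out to the fleet.")

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as err:
        sys.exit(f"QA gate failed: {err}")
```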

A second very important use case for hosted Jetson devices is learning, experimentation, and development. As we can see above, the cheapest EC2 instance is about $375 per month. If you are just getting started learning how to build and develop AI applications, exploring the DeepStream or CUDA ecosystem, or looking to work through various online learning courses on the topic, $375 probably doesn’t make much sense…but a smaller AI Server could be just right for the task. Hosted Jetson Nanos can absolutely be used to learn the fundamentals of computer vision, deep learning, neural networks, and other machine learning topics. They are also good for learning about containers and pulling down images from the Nvidia NGC Catalog, such as PyTorch, TensorFlow, RAPIDS, and more. Those types of smaller tasks, Hello AI World projects, and basic Getting Started projects are far more cost-effective to run on Jetsons than on the big GPU servers costing at least 10x as much.
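As an example of the kind of small first experiment that fits a hosted Nano, here is a minimal PyTorch sketch along the lines of what you might run inside a container such as NGC’s l4t-pytorch image: it just verifies that the GPU is visible to CUDA and runs a small matrix multiply. The tensor sizes are arbitrary, chosen only for illustration.

```python
# A minimal first experiment on a hosted Jetson Nano, e.g. inside a
# PyTorch container pulled from the NGC Catalog. Verifies GPU visibility
# and exercises CUDA with a small, arbitrary matrix multiply.
import torch

def main():
    # Confirm the Jetson's integrated GPU is visible to PyTorch via CUDA.
    if not torch.cuda.is_available():
        raise SystemExit("CUDA not available -- check the container/JetPack setup")
    device = torch.device("cuda")
    print("Running on:", torch.cuda.get_device_name(device))

    # Run a small matrix multiply on the GPU as a sanity check.
    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)
    c = a @ b
    torch.cuda.synchronize()  # wait for the GPU kernel to finish
    print("Result tensor shape:", tuple(c.shape))

if __name__ == "__main__":
    main()
```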
