
AI Server And Hosted Jetson Use Cases

Let’s continue once again with our look at Edge AI Servers and why it makes sense to use Nvidia Jetson Nanos and Xaviers as small, cost-effective machine learning endpoints.

As many startups and developers already know, AI Servers from the major cloud vendors are very expensive. Here is a quick sample of prices, at the time of this writing (July 2021), for a few different EC2 instance types:

Type         | Hourly Price | vCPU | RAM      | Storage
p3.2xlarge   | $3.06        | 8    | 61 GiB   | EBS Only
g4dn.xlarge  | $0.53        | 4    | 16 GiB   | 125 GB NVMe SSD
g4ad.8xlarge | $1.73        | 32   | 128 GiB  | 1200 GB NVMe SSD
p3.8xlarge   | $12.24       | 32   | 244 GiB  | EBS Only
g3s.xlarge   | $0.75        | 4    | 30.5 GiB | EBS Only

P3 Instances include V100 Tensor Core GPUs, G4dn include T4 GPUs, and G4ad include Radeon Pro V520 GPUs.

Over on the Azure site, here are a pair of the current options and their pricing:

Size                 | Hourly Price | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | GPUs | GPU memory (GiB)
Standard_NC4as_T4_v3 | $0.63        | 4    | 28           | 180                     | 1    | 16
Standard_NC6s_v3     | $3.98        | 6    | 112          | 736                     | 1    | 16

You can save some money if you commit to a 1-year or 3-year term, but the regular “On-Demand” prices start at $374.40 per month on AWS or $460 per month on Azure for the smallest server, and go up dramatically from there. The p3.8xlarge, for example, is currently nearly $9,000 per month. Many others are in the $3,000 to $5,000 per month range.

If your use case involves training a model or building an algorithm and then deploying it to Edge AI devices, the large amount of power provided by one of those machines could certainly make sense as the first step in the Edge AI deployment process.  Using the Nvidia Transfer Learning Toolkit as one example, the workflow consists of leveraging existing publicly available models, adding your own custom classes and dataset, training it on the big hardware such as V100 and T4 GPUs, and then deploying the output and resulting model onto smaller devices such as Jetsons.  Testing your TLT output on a hosted Jetson, prior to deploying to all the devices out in the field, would be wise.  Thus, a hosted Nano or Xavier that is built into your overall CI/CD pipeline, to QA and ensure functionality prior to deployment to the whole fleet, is a great use case.
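As one hedged sketch of what that QA step could look like, the hosted Jetson could simply try to build and time a TensorRT engine from the candidate model before anything ships to the fleet. The path and file name below are hypothetical, and assume the TLT output has already been converted to ONNX or another deployable format:

#!/bin/bash
# Minimal smoke-test sketch for a hosted Jetson QA stage.  Assumes an ONNX
# export of the candidate model has already been copied to the device; the
# /models path is hypothetical.
set -e

# trtexec ships with JetPack under /usr/src/tensorrt/bin; building an engine
# and running its timed inference passes is a quick sanity check.
/usr/src/tensorrt/bin/trtexec \
    --onnx=/models/candidate.onnx \
    --fp16 \
    --saveEngine=/models/candidate.engine

echo "Candidate model built and ran on the Jetson -- OK to promote."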

A second very important use-case for hosted Jetson devices is for learning, experimentation, and development purposes.  As we can see above, the cheapest EC2 instance is about $375 per month.  If you are just getting started learning how to build and develop AI applications, exploring the DeepStream or CUDA ecosystem, or looking to work through various online learning courses on the topic, $375 probably doesn’t make much sense…but a smaller AI Server could be just right for the task.  Hosted Jetson Nanos can absolutely be used to learn the fundamentals of computer vision, deep learning, neural networks, and other machine learning topics.  They are also good for learning about containers and pulling down images from the Nvidia NGC Catalog, such as PyTorch, TensorFlow, Rapids, and more.  Those types of smaller tasks, Hello AI World projects, and basic Getting Started projects are far more cost effective to run on Jetsons than on the big GPU servers costing 10x more (at a minimum).
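For example, pulling an NGC container onto a Jetson and dropping into it is a one-liner. The tag below is only illustrative; you would pick the image that matches your JetPack / L4T release:

# Pull and run the l4t-pytorch container from the NGC Catalog on a Jetson.
# The tag is an example only -- choose the one matching your L4T version.
sudo docker run -it --rm --runtime nvidia \
    nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.9-py3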


AI Server Workflow, Nvidia Transfer Learning Toolkit Example

Continuing our series on Edge AI Servers and the rapid transition underway as developers migrate their workloads from traditional datacenters / clouds to smaller, distributed workloads running closer to users, let’s investigate a specific Nvidia AI Server workflow where a hosted Jetson Nano or Jetson Xavier NX could make sense.

At GTC in May 2021, Nvidia launched the Transfer Learning Toolkit (TLT) 3.0, which is designed to help users build customized AI models quickly and easily.  The process is rather straightforward: instead of creating and training a model from scratch, which is very time consuming, you take a pre-trained model such as PeopleNet, FaceDetectIR, ResNet, or MobileNet, add your custom object (an image for vision applications, a sound for audio or language models), re-train with the added content, and then leverage the resulting model for inferencing on smaller, edge devices.
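At a high level, that workflow maps onto a handful of TLT launcher commands. The sketch below uses DetectNet_v2 purely as an example task; the spec files, directories, and key are placeholders rather than a copy-paste recipe, so check the flags against the current TLT documentation:

# Rough sketch of a TLT 3.0 transfer-learning pass, with DetectNet_v2 as an
# example task.  $SPECS, $RESULTS, and $KEY are placeholders for your own
# spec file, output directory, and NGC encryption key.
tlt detectnet_v2 train  -e $SPECS/detectnet_v2_train.txt -r $RESULTS/train -k $KEY
# (in practice you would typically prune and retrain here before exporting)
tlt detectnet_v2 export -m $RESULTS/train/weights/model.tlt \
                        -o $RESULTS/export/model.etlt -k $KEY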

The Transfer Learning Toolkit is part of the larger TAO (Train, Adapt, Optimize) platform, and is intended to be run on big hardware.  Their requirements state:

  • 32 GB system RAM
  • 32 GB of GPU RAM
  • 8 core CPU
  • 1 NVIDIA GPU
  • 100 GB of SSD space
  • TLT is supported on A100, V100, and RTX 30-series GPUs.

However, the end result is a model that can run on a much smaller device like a Jetson Nano, TX2, or Xavier NX.

Looking closer at the TLT Quick Start documentation, installation begins with setting up your workstation, or launching a cloud VM such as an AWS EC2 P3 or G4 instance, which have Volta or Turing GPUs.  There are a series of prerequisites to install, a TLT container that is downloaded from the Nvidia Container Registry, and once your Python environment is set up, you launch a Jupyter notebook that guides you through the rest.
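Roughly, the setup looks like the sketch below on a Volta or Turing machine. This is only a summary of the Quick Start flow under the assumption that drivers, Docker, and the NVIDIA container runtime are already in place, so verify the exact steps against the current docs:

# Sketch of TLT 3.0 setup on a workstation or cloud GPU instance.
# Assumes NVIDIA drivers, Docker, and the NVIDIA container runtime are
# already installed, and that you have an NGC account and API key.
docker login nvcr.io              # authenticate to the Nvidia Container Registry
pip3 install nvidia-pyindex       # adds Nvidia's Python package index
pip3 install nvidia-tlt           # installs the TLT launcher CLI
pip3 install jupyter
jupyter notebook --ip 0.0.0.0     # then open the TLT sample notebooks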

There is also a sample Computer Vision Jupyter notebook that can get you up and running quickly, located here:  https://ngc.nvidia.com/catalog/resources/nvidia:tlt_cv_samples

Once you have the Transfer Learning Toolkit workflow established, you can begin testing the output and resulting models on Jetson devices.  This is where a hosted Jetson Nano functioning as an AI Server might make sense, as you could build some automation or a CI/CD workflow for training with TLT and testing the results on the Jetson.  Then, if everything passes, the results are highly accurate, and detection, segmentation, etc. are all performing well, you could deploy to your production devices in the field.
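Strung together from a CI runner, the gate could be as simple as copying the freshly exported model to the hosted Jetson, running a test script there, and only promoting on success. The hostname, paths, and test script below are all hypothetical placeholders:

#!/bin/bash
# Hypothetical CI gate: push the new model to a hosted Jetson, test it there,
# and only deploy to the fleet if the tests pass.
set -e

JETSON=user@hosted-jetson.example.com        # placeholder hostname

scp results/export/model.etlt "$JETSON":/models/candidate.etlt
ssh "$JETSON" '/opt/qa/run_inference_tests.sh /models/candidate.etlt'

# If the remote tests exited non-zero, set -e stops the script here.
./deploy_to_fleet.sh results/export/model.etlt   # placeholder deploy step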

This is of course only one example of the value of hosted AI servers, and we’ll continue looking at more use cases in the near future! 


The Edge AI Server Revolution (Driven by Arm, Of Course)

The past 2 years have seen rapid growth in experimentation and ultimately deployment and adoption of AI/ML at the Edge. This has been fueled by dramatic increases in on-device AI processing capability, and equally dramatic reductions in the size and power requirements of devices. The Nvidia Jetson Nano with its GPU and CUDA cores, the Google Coral Dev Board containing a TPU for TensorFlow acceleration, and even microcontrollers running TinyML have quickly gained widespread adoption among developers. These devices are cheap, accurate, and readily available, allowing developers to deploy AI/ML workloads to places that were not practical just a short time ago.


This allows developers to re-think their applications, and begin to migrate AI workloads out of the datacenter, which was previously the only place to run their AI/ML tasks, potentially saving money or improving performance by moving processing closer to where it is needed. This also allows for net-new capability, adding computer vision, object detection, pose estimation, and more, in places that previously were not possible.

In order to help prepare developers and allow them to experiment and build their skills, miniNodes is making available some Edge AI inspired Arm Servers, starting with the Nvidia Jetson Nano. These nodes are intended to be used by engineers and teams just getting started on their Edge AI journey, who are testing their applications and deep learning algorithms. Another use for an Edge AI Arm Server is light-duty AI processing, where it doesn’t make financial sense to rent big AI servers from the likes of AWS or Azure, and a smaller device will work just fine. Finally, developers and teams whose AI training is not time-sensitive, or is relatively small, can achieve significant savings by using a hosted Jetson Nano for their model training, instead of local GPUs or AWS resources.

Whether you are just getting started and beginning to explore Edge AI, or have been following the trend and already have Edge AI projects underway, a miniNodes hosted Jetson Nano is a great way to gain hosted AI processing capability or reduce AWS and Azure cloud AI costs.


Arm Server Update, Spring/Summer 2021

As usual, we are overdue for an update on all things Arm Servers! Today’s announcement of the Arm v9 specification is a great time to review the state of Arm Servers, and what has changed since our last update.

First, let’s review our last update. Marvell canceled the ThunderX3 product, Ampere had announced the Altra but it wasn’t shipping, AWS Graviton was available, and Nuvia was designing a processor.

Fast forward to today, and Ampere Altra systems are now becoming available, with limited stock via the Works on Arm program at Equinix Metal, and some designs shown off by Avantek, a channel supplier. Mt. Snow and Mt. Jade, as they are known, are also formally designated as “ServerReady” parts, having passed the standards compliance tests.

Nuvia, the startup that was designing a new Arm Server SoC from the ground up, was purchased by Qualcomm, in an apparent re-entry into the Arm Server market (or for use in Windows on Arm laptops?). Don’t forget, Qualcomm previously had an Arm Server part, the Centriq, though they scrapped it a few years ago. So, it now remains to be seen whether Nuvia will launch a server-grade SoC, or pivot to a smaller target device.

The other emerging trend to cover is the role of Arm in the Edge Server ecosystem, where the push to move small servers out of the datacenter and closer to customers and users is rapidly gaining momentum. In this scenario, non-traditional, smaller devices take on the role of a server, and the energy efficiency, small form factor, and varied capabilities of Arm-powered single board computers are taking on workloads previously handled by typical 1U and 2U rackmount servers in a datacenter. Small devices like the Nvidia Jetson AGX, Raspberry Pi Compute Module 4, and NXP Freeway boxes are able to perform Edge AI, data caching, or local workloads, and only send what is necessary up to the cloud. This trend has been accelerating over the past 12 to 18 months, so we may see more niche devices or SoCs start to fill this market.


How to Run Rosetta@Home on Arm-Powered Devices

This week, after an amazing Arm community effort, the Rosetta@Home project released support for sending work units to 64-bit Arm devices, such as the Raspberry Pi 4, Nvidia Jetson Nano, Rockchip RK3399-based single board computers, and other SBCs that have 2 GB of memory or more.

Sahaj Sarup from Linaro, the Neocortix team, Arm, and the Baker Lab at the University of Washington all played a role in helping us port the Rosetta software to aarch64, get it tested in the Ralph (Rosetta ALPHa) staging environment, validate the scientific results, and eventually push it to Rosetta@Home.

Now, anyone with spare compute capacity on their Arm-powered SBCs running a 64-bit OS can help contribute to the project by running BOINC, crunching data and performing protein folding calculations that help doctors target the COVID-19 spike proteins (among other medical and scientific workloads).

Here is a quick tutorial on how to get started, using a native operating system on your devices.  This is not the only way to run Rosetta@Home, but it is intended for technical users who want to run their own OS and manage the system themselves.

Raspberry Pi 4

To fight Covid-19 using a Raspberry Pi 4, you need a Raspberry Pi 4 with 2 GB or 4 GB of RAM.  The Rosetta work units are large scientific calculations, and they require 1.9 GB of memory to run.  You will also need a 64-bit OS, so Raspbian will not work, as it is a 32-bit OS.  Instead, download and flash Ubuntu Server from the official images, located here:  https://ubuntu.com/download/raspberry-pi.

Once the SD Card is written and your Pi 4 has booted up, connect an ethernet cable, and be sure to run ‘sudo apt-get update && sudo apt-get upgrade’ to make sure the system is up to date.  At this point a reboot may be necessary, and once the system comes back up, we can start to install BOINC and Rosetta.  Run ‘sudo apt-get install boinc-client boinctui’ to bring in the BOINC packages.

If you are using the 2 GB RAM version of the Pi 4, we need to override one setting to cross that 1.9 GB threshold mentioned earlier.  If you have the 4 GB RAM version, you can skip this next step.  2 GB users, you will need to type ‘sudo nano /var/lib/boinc-client/global_prefs_override.xml’ and enter the following to increase the default memory available to Rosetta to the maximum amount of memory on the board:

<global_preferences>
   <ram_max_used_busy_pct>100.000000</ram_max_used_busy_pct>
   <ram_max_used_idle_pct>100.000000</ram_max_used_idle_pct>
   <cpu_usage_limit>100.000000</cpu_usage_limit>
</global_preferences>

 Press “Control-o” on the keyboard to save the file, and then press Enter to keep the file name the same.  Next, press “Control-x” to quit nano.
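If the BOINC client was already running when you made this change, you may need to restart it so the new limits take effect. Assuming the standard Ubuntu packaging of boinc-client, a quick way is:

# Restart the BOINC client so it re-reads the preferences override.
sudo systemctl restart boinc-client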

Next, using your desktop or laptop PC, head to http://boinc.bakerlab.org and create an account, and while there, be sure to join the “crunch-on-arm” team!  

Back on the Raspberry Pi, we can now run ‘boinctui’ from the command prompt, and a terminal GUI will load.  Press F9 on the keyboard to bring down the menu choices.  Navigate to the right, to Projects.  Make sure Add Project is highlighted, and press Enter.  You will see the list of available projects to choose from; choose Rosetta, select “Existing User”, and enter the credentials you created on the website a moment ago.

It will take a moment, but, Rosetta will begin downloading the necessary files and then download some work units, and begin crunching data on your Raspberry Pi 4!

You can press ‘Q’ to quit boinctui and it will continue crunching in the background.
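If you prefer to attach the project without the boinctui menus (for example, on a headless fleet of Pis), the stock boinccmd tool can do the same thing. Depending on the packaging, you may need to run it from /var/lib/boinc-client so it can find the RPC password file, and the e-mail, password, and account key below are placeholders for your own credentials:

# Headless alternative to boinctui for attaching Rosetta@Home.
cd /var/lib/boinc-client
# Look up your account key using the credentials created on the website:
sudo boinccmd --lookup_account http://boinc.bakerlab.org/rosetta/ you@example.com 'your-password'
# Attach the project using the key returned by the previous command:
sudo boinccmd --project_attach http://boinc.bakerlab.org/rosetta/ YOUR_ACCOUNT_KEY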

 

Nvidia Jetson

If you have an Nvidia Jetson Nano, you can actually follow the same directions outlined above directly on the Nvidia-provided version of Ubuntu.  To recap, these are the steps:

  • Open a Terminal, and run ‘sudo apt-get update && sudo apt-get upgrade’.  After that is complete, reboot.
  • Using your desktop or laptop PC, head to http://boinc.bakerlab.org and create an account, and join the “crunch-on-arm” team
  • Back on the Jetson Nano, run ‘sudo apt-get install boinc-client boinctui’
  • Run ‘boinctui’, press F9, navigate to Projects, Add Project, and choose Rosetta@Home.  Choose an Existing Account, enter your credentials, and wait for some work units to arrive!

 

Other Boards

If you have other single board computers that are 64-bit, have 2 GB of RAM, and run Armbian, the process is the same for those devices as well!  Examples of boards that could work include Rockchip RK3399 boards like the NanoPi M4 or T4, OrangePi 4, or RockPro64; Allwinner H5 boards like the Libre Computer Tritium H5 or NanoPi K1 Plus; or AmLogic boards like the Odroid C2, Odroid N2, or Libre Computer Le Potato.  Additionally, 96Boards offers high performance boards such as the HiKey960 and HiKey970, Qualcomm RB3, or Rock960, which all have excellent 64-bit Debian-based operating systems available.

For any of those, simply install the ‘boinc-client’ and ‘boinctui’ packages, and add the Rosetta project!

Of course, if you just so happen to have a spare Ampere eMAG, Marvell ThunderX or ThunderX2 laying around, those would work quite nicely as well.