Posted on

Recap: Building an Arm-Powered IoT, Edge, and Cloud Infrastructure

Intro

At Arm’s annual TechCon event in San Jose, Arm CEO Simon Segars presented a vision of the future in which a trillion connected devices interact seamlessly with each other and pass data between the Cloud, the Edge, and the Internet of Things, at a scale unimaginable even just a few years ago. Self-driving cars will generate massive amounts of sensor data, 5G wireless will enable faster connections and lower latency, and artificial intelligence will deliver breakthroughs in materials, technologies, medicines, and energy. This vision of the connected world is something we have heard about for several years now, in countless articles, interviews, social media posts, conference talks, and other forms of media.

However, when seeking out real-world examples of this architecture in practice, to help learn and understand how the bits and pieces work together, we came up empty. There were no purpose-built sample projects, pre-written code examples, or other working prototypes of these principles available. Surely there are internal, private teams building out this type of infrastructure for specific use cases and organizational needs, but there were no public, open projects to learn from.

Thus, it was time to take action, and build a prototype infrastructure ourselves! With the help of the Arm Innovator Program, we set out on a journey to develop a proof-of-concept that encapsulates as many of these concepts as possible, leveraging currently available technologies and showcasing Arm’s diverse portfolio of products and ecosystems. With help from the Works on Arm program via Packet.com, we began brainstorming.  Our goal was to deploy IoT endpoints to a handful of locations around the world, and capture environmental data via sensors on those devices. From there, we wanted to feed that data to a local Edge Server, which would be responsible for translating the data to a usable format and sending it further upstream, to a Cloud Server functioning as a data warehouse and visualization platform.

In this article we’ll take an in-depth look at the project, and detail the key technologies to give a better idea of what this kind of system entails. I’ll also provide a summary of our lessons learned, which will hopefully help you build and iterate faster, and avoid some potential pitfalls along the way.

Design

When thinking about the design of this project, we wanted to keep things simple: the purpose of this exercise is to demonstrate capability and build a proof-of-concept, not an actual product shipped to real, paying customers.  Thus, we made hardware and software selections based on cost and availability, as opposed to what would be “most appropriate” for the intended use.  We also knew we would have relatively small data sets, and reliable power and internet connectivity for all of our devices.  Real-world IoT deployments may not have these luxuries; many must tolerate lost network connectivity, unreliable power delivery, or harsh environmental conditions, so your hardware and software selections may not be as straightforward as ours were.  Let’s go through our inventory of Arm-powered hardware and software, keeping in mind the rather ideal conditions we’ve got:

1. IoT Endpoints

Hardware

  • Raspberry Pi 3B+
  • Sparkfun Qwiic HAT
  • Sparkfun Lightning Detector
  • Sparkfun Environmental Combo Sensor
  • Sparkfun GPS Sensor

Software

  • Arm Mbed Linux OS
  • Arm Pelion Device Management

 

2. Edge Nodes

Hardware

  • Linaro / 96Boards HiKey, and HiKey 960

Software

  • Linaro Debian Linux

 

3. Cloud Server

Hardware

  • Ampere eMAG, hosted by Packet.com

Software

  • Debian Linux
  • InfluxDB
  • Grafana

 

As you can see, we made selections that fit our small project well but, as mentioned, may not be suitable for all IoT use cases, depending on your project’s environmental conditions.  Let’s start detailing the items, beginning with the IoT Endpoint.  We’re using a Raspberry Pi 3B+, a Sparkfun Qwiic HAT, and Sparkfun sensors to capture temperature, humidity, barometric pressure, CO2, and TVOC (total volatile organic compounds).  We also have lightning detection capability (currently not being used, but available), and GPS so that we can determine precisely where each Endpoint is located.  As for software, because these devices are out in the wild, scattered literally across the globe, we needed a framework for remote monitoring, updating, and application deployment.  Arm Mbed Linux OS is a lightweight, secure, container-based operating system that meets these requirements.  It is still in Technical Preview, but is far enough along in development to meet our project needs, and it is working great.  A total of 10 Raspberry Pi Endpoints were built and sent around the globe: several across the United States, plus Cambridge, Budapest, Delhi, and Southern India, with one spare unit left over for local testing.

Turning to our Edge Nodes, these are the simplest components in our project’s infrastructure. They are 96Boards devices, chosen for their support and ease of use.  Linaro and the 96Boards team do an excellent job of building ready-made Debian images with updated kernels, applications, and drivers for their hardware, making for a great out-of-the-box experience. Two of these devices are currently provisioned, one in India and one in the United States, each serving its geographic region. The devices aggregate the IoT Endpoint data stream, convert it to the format needed by the Cloud Server, and publish the data to the Cloud.

Finally, the Arm-powered Cloud Server is an Ampere eMAG server, hosted by Packet.com. It is an enterprise-grade machine, and functions as the data warehouse for all of the IoT data, as well as a visualization platform for charting and viewing the data in a time-series fashion, thanks to InfluxDB and Grafana. Packet.com has datacenters around the world, and their native tooling and user interface make deploying Arm Servers quick and easy.

Now that the system architecture has been described, let’s take a look at the application architecture, and start to dissect how data flows from the IoT Endpoints, to the Edge, to the Cloud. As mentioned, Mbed Linux OS is a container-based OS: a minimal underlying operating system built with Yocto, providing a small, lightweight, secure foundation to which the Open Container Initiative (OCI) runc container runtime is added.  runc can launch OCI-compliant containers built locally on your laptop and pushed to the Endpoint via the Mbed Linux tooling, no matter where the device is located.  In our particular case, we chose a small Alpine Linux container, added Python and the Sparkfun libraries for the sensors, and created a small startup script that begins reading data from the sensors when the container starts.  The container also includes an MQTT client, which takes that sensor data, turns it into a small JSON snippet, and publishes it to a specific known location (the Edge Server).
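As a rough illustration, the container’s publishing loop might look something like the sketch below. The node names, topic layout, and `read_sensors` callback are hypothetical stand-ins (the real script uses the Sparkfun Qwiic libraries to read the hardware), and it assumes the widely used paho-mqtt client library for the publish step:

```python
import json
import time

def build_payload(node_id, readings):
    """Wrap one set of sensor readings in the small JSON snippet
    that gets published upstream to the Edge Server."""
    payload = {"node": node_id, "ts": int(time.time())}
    payload.update(readings)
    return json.dumps(payload)

def publish_loop(node_id, edge_host, read_sensors, interval=10):
    # paho-mqtt is an assumption here; Mbed Linux does not mandate a client.
    import paho.mqtt.publish as publish
    while True:
        publish.single("sensors/" + node_id,
                       build_payload(node_id, read_sensors()),
                       hostname=edge_host)
        time.sleep(interval)

if __name__ == "__main__":
    # Fake readings standing in for the Qwiic environmental combo sensor.
    fake = {"temp_c": 21.4, "humidity": 40.2, "pressure_hpa": 1013.2,
            "co2_ppm": 412, "tvoc_ppb": 9}
    print(build_payload("pi-cambridge", fake))
```

Keeping the payload as a flat JSON object makes the Edge Server’s job easy: every key other than the node ID and timestamp can be treated as a sensor field.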

The Edge Servers run a more traditional Debian operating system, with Python installed as well.  A Python script running as a daemon captures and parses the incoming MQTT messages from the IoT Endpoints, converts them to InfluxDB’s write format, and publishes them to the specified Influx database, which is running on the Ampere eMAG Cloud Server.
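To make that translation step concrete, here is a hedged sketch of the daemon’s core conversion. The measurement name, tag layout, and field names are illustrative choices rather than the project’s actual schema; in practice the resulting line would be handed to an InfluxDB client for writing:

```python
import json

def to_influx_line(payload):
    """Convert one JSON MQTT message into an InfluxDB line-protocol string:
    measurement,tag_set field_set timestamp"""
    data = json.loads(payload)
    node = data.pop("node")    # the node ID becomes a tag
    ts = data.pop("ts", None)  # unix seconds, if the endpoint sent one
    fields = ",".join("{}={}".format(k, v) for k, v in sorted(data.items()))
    line = "environment,node={} {}".format(node, fields)
    if ts is not None:
        line += " {}".format(ts * 1_000_000_000)  # Influx expects nanoseconds
    return line

if __name__ == "__main__":
    msg = '{"node": "pi-delhi", "ts": 1565000000, "temp_c": 30.1, "humidity": 65.0}'
    print(to_influx_line(msg))
```

A real daemon would wrap this in an MQTT subscriber callback and batch the resulting lines into periodic writes, but the translation itself is just this one small function.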

Finally, the Cloud Server is an enterprise-grade Ampere eMAG Arm Server.  It is graciously hosted by the Works on Arm project at Packet.com, in their New Jersey datacenter. This server is also running Debian, and has InfluxDB and Grafana installed for storage and visualization of the data being sent to it from the Edge Nodes.  Thus, our IoT, Edge, and Cloud layers are all Arm-powered!

Construction Challenges

Building a container to hold our application proved more challenging than anticipated, because some functionality we needed is not provided by the ready-made Mbed Linux downloads. Normally, this could be easily solved by adding the desired packages to the Yocto build scripts and rebuilding from source…however, there is one additional and very unique quirk to this project: we decided to exclusively use Arm-powered Windows on Snapdragon laptops to build the project!  These laptops are highly efficient, with all-day battery life and far better performance than previous generations offered.  One limitation, however, is that they currently cannot run Docker, which we would need in order to rebuild Mbed Linux from source.  Thus, instead of adding the necessary packages to Yocto and recompiling, we had to manually port Device Tree functionality, gain access to the GPIO pins on the Pi, enable the I2C bus and tooling, and finally expose that functionality from the host OS to the container, all by manually lifting pieces from Raspbian.  Obviously, we placed this limitation upon ourselves, but it does demonstrate that there are still a few shortcomings in the developer experience on Arm.

A second valuable lesson came from the native Mbed tooling for initially deploying devices.  Provisioning and updating devices with Pelion Device Management is a straightforward process, except for one small but critical hiccup we experienced.  It is worth noting again that Mbed Linux OS is in Technical Preview status, and the feedback we were able to give the Mbed team as a result of this process has been incorporated and will make the final product even better!  When following the documentation to provision devices for the first time, a Developer Certificate is issued. That certificate is only valid for 90 days, after which you can no longer push containers to a device in the field. The certificate can certainly be renewed via the re-provisioning process, but you must be on the same network as the device to perform that action. Our devices are already out in the field, so that is no longer possible.  Thus, we have a fleet of devices that cannot receive their intended application.  On the plus side, this exercise proved its worth by highlighting this point of failure, and resulted in a valuable documentation update so that your project can be a success!

Conclusion

In the end, we were able to successfully provision the few devices we still had local access to, proving that the theory was sound and demonstrating a functional prototype at Arm TechCon!

Using a pair of freshly provisioned Raspberry Pis, the containerized application was pushed Over The Air to them via the Mbed CLI.  Pelion showed the devices as Online, and the device and application logs in the Dashboard reported that the container had started successfully.  Sure enough, data began streaming in to the Edge Node, where the Python daemon took those MQTT transmissions, translated them for InfluxDB, and sent them upstream to the Cloud Server.  Logged into Grafana running on the Cloud Server, that data could then be inspected and visualized.

Thus, while it wasn’t quite as geographically diverse as hoped, we did accomplish what we set out to do: build an end-to-end IoT, Edge, and Cloud infrastructure running entirely on Arm!  The data flowing through it is certainly just a minimal example, but as a proof-of-concept we can truthfully say that the design is valid and the system works!  Now, we’re excited to see what you can build to bring Simon Segars’ vision of the connected future to life!


Arm Server Update, Summer 2019

It has been a while since our last Arm Server update, and as usual there have been a lot of changes, forward progress, and new developments in the ecosystem!

The enterprise Arm Server hardware has now mostly consolidated around the Cavium ThunderX2 and Ampere eMAG products, available from Gigabyte, Avantek, and Phoenics Electronics. Each can be purchased in 1U, 2U, and 4U configurations ready for the datacenter, and high-performance developer workstations based on the same hardware are available as well. Both of these solutions can be customized with additional RAM, storage, and networking to best fit the intended workload.

Another option that exists, but is difficult to obtain in the United States, is the Huawei Hi1620, also known as the Kunpeng 920. These are also enterprise-grade servers ready to be installed in a datacenter environment, typically in a 2U chassis with configurable memory and storage options. However, availability outside of Asia is limited, and new regulations may make importing them difficult.

While the Cavium, Ampere, and (potentially) Huawei servers are available as bare-metal options shipped directly to you for installation in your own datacenter, Amazon has also made significant progress over the past few months and is rapidly becoming the most popular Arm Server provider. Amazon uses its own Arm Server CPU, called Graviton, in its AWS EC2 A1 instances. This is quickly becoming the easiest way to deploy Arm Servers, as the entire system is in the Cloud and no hardware has to be purchased. The instances come in various sizes and price ranges, and experienced developers and organizations who are familiar with the AWS system can simply pay by the hour for temporary workloads. For users who are less familiar with the EC2 dashboard, less comfortable with the fluctuating billing model, or who prefer a fixed rate, we at miniNodes offer pre-configured Arm VPS servers in a range of sizes and prices, hosted atop the AWS platform.

Finally, the Edge of the network continues to be where a lot of innovation is occurring, and Arm Servers are a perfect fit for deployment as Edge Servers, due to their low power consumption, cost-effectiveness, and wide range of sizes and formats. The MACCHIATObin has been demonstrated running workloads in the base of windmills, the new SolidRun ClearFog ITX promises to be a flexible solution, and the new Odroid N2 is an interesting device with “enough” performance to satisfy a wide range of workloads that don’t always need to rely on the Cloud, and can instead deliver services and data to end users (or other devices) faster by being located closer to where compute is needed.

As always, check back regularly for updates and Arm Server news, or follow us on Twitter where we share Arm related news on a daily basis!


ArmTechCon Recap

As you may have seen here and here, miniNodes recently got invited to participate at ArmTechCon, inside Arm’s own “Innovation Pavilion” in the Expo Hall.  Because our core business of hosting tiny Arm Servers isn’t that exciting to show off, especially at the biggest Arm ecosystem event of the year, we partnered with Robert Wolff and the awesome team at 96Boards to come up with something a bit more intriguing.   🙂

After some back and forth, we landed on a solar-powered, connected, mobile developer and edge computing platform. The idea was to build a self-contained and self-powered box that could be taken out and used in geographically isolated areas, while still having connectivity back to a central cloud provider. The actual use cases could vary dramatically, but the common theme is a lack of infrastructure, electricity, or wifi in the targeted region. The box would be powered by solar panels for this iteration, but could also accept other renewable sources such as wind, hydroelectric via a waterwheel or impeller, geothermal, and more.

So, as one potential use case, we envisioned using the box in remote villages or locales that don’t have the typical infrastructure needed to teach development, AI, machine learning, edge computing, remote code or container deployment, or other advanced computer science topics.

The end goal is to provide everything as open source, with a Bill of Materials and instructions for anyone to replicate the build using readily available, off-the-shelf parts with no customization necessary. For the demo unit, though, the project hasn’t made it quite that far yet.  For this prototype, the box consisted of a foldable solar panel array hooked up to a charge controller, which then fed a battery pack. The battery pack ran to an inverter, so that we could power multiple standard devices. The first device to be powered was a 96Boards Dragonboard, which had a small LCD attached for graphical output and a 4G LTE cellular mezzanine providing data connectivity.  Thus, as long as there is cell service, the Dragonboard has connectivity to the internet!  At that point, we had effectively built a solar-powered, self-sustaining compute workstation that could connect to the internet nearly anywhere!

However, because we were just doing a proof of concept, we thought it would be fun to go one step further!  Next, we set up sharing on the Dragonboard’s cellular connection, and ran an ethernet cable from the Dragonboard over to a Raspberry Pi 3 Compute Module.  This Pi was running a service from Microsoft called Azure IoT Edge, a product that allows you to remotely push containers and code to an IoT device, or receive data and telemetry back from a device out in the wild.  Thus, as long as there is adequate sunlight (or another renewable source of power) and cell coverage, the box can be remotely monitored and even updated from anywhere.  Or, thanks to its LCD and USB keyboard, it can be used as a workstation in places where infrastructure is lacking.

Another potential use case for the platform could be as an environmental monitoring solution. When equipped with a gyroscope, the box could detect movement from events such as a rock slide, avalanche, mud slide, or volcanic activity. Any anomaly can be reported back to the central servers immediately for analysis.
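A minimal sketch of what that detection logic might look like is shown below. The threshold value and the 3-axis sample format are entirely hypothetical; a real deployment would calibrate against the sensor’s actual noise floor:

```python
import math

# Hypothetical threshold: a resting gyroscope reads near zero on all axes, so
# a combined angular rate above this (in degrees/second) is treated as ground
# movement worth reporting back to the central servers.
ANOMALY_THRESHOLD_DPS = 50.0

def is_anomaly(x, y, z, threshold=ANOMALY_THRESHOLD_DPS):
    """Flag a single 3-axis gyroscope sample whose overall magnitude
    exceeds the movement threshold."""
    return math.sqrt(x * x + y * y + z * z) > threshold

if __name__ == "__main__":
    print(is_anomaly(0.5, -0.2, 0.1))    # quiet ground
    print(is_anomaly(40.0, 35.0, 10.0))  # sudden shaking
```

In practice you would also want to require several consecutive samples over the threshold before reporting, to avoid flagging one-off sensor noise as a landslide.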

When equipped with a camera, the box could also visually monitor the environment and detect changes in imagery, such as a smoke plume for early forest fire detection, wildlife movement, vehicles approaching locations where there should not be any, and more.

Finally, because of the device’s Raspberry Pi Compute Module carrier board, the box has the ability to run targeted workloads of its own, for extreme edge computing. The workloads can be updated, changed, and monitored remotely, again due to the Dragonboard’s cellular connectivity to the Microsoft Azure IoT Edge platform.

ArmTechCon was a big success, and it’s incredible what can be built using Arm technology.  Be sure to check back for status updates as the solar compute box undergoes future development and iterations!