
Installing AlmaLinux 8 on Arm, using a Raspberry Pi 4

A few weeks ago, we took a look at how to install the new Rocky Linux 8 on Arm, using a Raspberry Pi, as a replacement for CentOS.  This is due to Red Hat altering the release strategy for CentOS, transitioning it from a stable, downstream rebuild of RHEL to the faster-moving, upstream CentOS Stream.  However, there is also a second community build aiming to fill the gap left by Red Hat, so today we will look at the process of installing the new AlmaLinux 8, again on the Raspberry Pi 4 with community-built UEFI firmware.

Like Rocky Linux, AlmaLinux is a Linux distribution put together by the community in order to preserve the stable, predictable manner in which packages are updated.  AlmaLinux comes in both x86 and aarch64 builds, and of course we’ll be using the aarch64 build for our Pi.

We’re going to replicate the previous how-to for the most part, so let’s recap the hardware we’ll use:

  • Raspberry Pi 4B
  • SD Card
  • USB stick for install media
  • USB Stick or USB-to-SSD adapter for destination (permanent storage) media

To get started, we are going to download and flash the community-built UEFI firmware for the Raspberry Pi to an SD Card.  This UEFI implementation is closer in nature to a “normal” PC UEFI BIOS, and allows the Pi to boot in a more standard fashion than the Raspberry Pi OS boot method.  The UEFI firmware is placed directly on an SD Card, and when the Pi is powered on it reads the firmware and can then boot from a USB stick or over the network.  To install the UEFI firmware, download the latest release .zip file (RPi4_UEFI_Firmware_v1.28.zip at the time of this writing) from https://github.com/pftf/RPi4/releases

Next, unzip this .zip file you just downloaded, and copy the contents to an SD Card.  The card needs to be formatted as FAT32, so if you are re-purposing an SD Card that had Linux on it previously you might need to delete partitions and re-create a FAT32 partition on the SD Card.  Once the files are copied, the SD Card is ready to use.

With the SD Card complete, we can now proceed to download AlmaLinux.  Browse to https://almalinux.org and click on Download.  You will have the option of x86 or aarch64 downloads; obviously we’ll want the aarch64 version, so click on that link, and then choose a mirror close to you.  Once you are taken to the mirror’s repository, you’ll see you have Boot, Minimal, or DVD .iso files to choose from.  For this tutorial, we’ll go with Minimal, so click on that one and your download will begin.  Once the download is complete, flash the file to a USB stick using Rufus, Etcher, Win32 Disk Imager, or any other method you prefer.
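Before flashing, it is worth verifying the download against the checksum published on the mirror.  Here is a generic sketch of how to do that in Python; the .iso file name in the comment is an example, and the exact checksum file name varies by mirror:

```python
# Sketch: verify a downloaded .iso against its published SHA256 checksum
# before flashing it to the USB stick.
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash a large file in chunks so the whole .iso never sits in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the value listed in the mirror's CHECKSUM file:
# print(sha256_of("AlmaLinux-8.4-aarch64-minimal.iso"))
```

If the hash does not match the mirror's published value, re-download the file before flashing.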

Now that we have our SD Card for booting and USB stick for installing, we just need to determine what to use for destination storage.  As the Pi doesn’t have any onboard eMMC, and the SD Card slot is occupied by our firmware, we could use another, separate USB drive or network attached storage; for this tutorial we’ll go with a USB-to-SSD adapter, which allows us to hook up a 2.5 inch SATA SSD as our permanent storage.

Plug the SSD into the adapter, and then connect the USB plug into one of the USB 3.0 (blue) ports on the Pi.  Attach a keyboard, mouse, and monitor, insert the SD Card, and the USB Stick with AlmaLinux on it, then plug in power.  After a moment you will see a Raspberry Pi logo, and the Pi will boot from the USB stick.  The AlmaLinux installation process will begin, and if you are familiar with the CentOS installation process you will notice it’s nearly identical, since the upstream sources are the same.

[Screenshot: AlmaLinux 8 installation wizard]

The Raspberry Pi is not as fast as a PC or a large Arm Server, so you’ll need to be patient while the installation wizard loads, and navigating the menus can be a bit slow.  However, you will be able to set up a user account, choose your timezone, and select the destination drive to install to (the SSD).  Once satisfied, you can begin the installation, and again you’ll need to be patient while the files are copied to the SSD.  Make some coffee or tea.

[Screenshot: AlmaLinux installation complete]

Once the process does complete, you can reboot the Pi, remove the USB stick so you don’t start the whole process over, and eventually boot into your new AlmaLinux 8.4 for Arm distro!

[Screenshot: AlmaLinux 8 login prompt]


How to Run Rosetta@Home on Arm-Powered Devices

This week, after an amazing Arm community effort, the Rosetta@Home project released support for sending work units to 64-bit Arm devices, such as the Raspberry Pi 4, Nvidia Jetson Nano, Rockchip RK3399-based single board computers, and other SBCs that have 2 GB of memory or more.

Sahaj Sarup from Linaro, the Neocortix team, Arm, and the Baker Lab at the University of Washington all played a role in helping port the Rosetta software to aarch64, test it in the Ralph (Rosetta ALPHa) staging environment, validate the scientific results, and eventually push it to Rosetta@Home.

Now, anyone with spare compute capacity on an Arm-powered SBC running a 64-bit OS can contribute to the project by running BOINC, crunching data and performing protein folding calculations that help doctors target the COVID-19 spike proteins (among other medical and scientific workloads).

Here is a quick tutorial on how to get started, using a native operating system for your devices.  This methodology is not the only way to run Rosetta@Home, but it is intended for technical users who want to run their own OS and manage the system themselves.

Raspberry Pi 4

To fight COVID-19 using a Raspberry Pi 4, you need a board with 2 GB or 4 GB of RAM.  The Rosetta work units are large scientific calculations, and they require 1.9 GB of memory to run.  You will also need a 64-bit OS, so Raspbian will not work, as it is 32-bit.  Instead, download and flash Ubuntu Server from the official source, located here:  https://ubuntu.com/download/raspberry-pi.

Once the SD Card is written and your Pi 4 has booted, connect an ethernet cable and run ‘sudo apt-get update && sudo apt-get upgrade’ to make sure the system is up to date.  A reboot may be necessary at this point.  Once the system comes back up, we can install BOINC and Rosetta: run ‘sudo apt-get install boinc-client boinctui’ to bring in the BOINC packages.

If you have a 4 GB version of the Pi 4, you can skip this next item.  But if you are using a 2 GB version, we need to override one setting to cross the 1.9 GB threshold mentioned earlier: type ‘sudo nano /var/lib/boinc-client/global_prefs_override.xml’ and enter the following to increase the default memory available to Rosetta to the maximum amount of memory on the board:

<global_preferences>
   <ram_max_used_busy_pct>100.000000</ram_max_used_busy_pct>
   <ram_max_used_idle_pct>100.000000</ram_max_used_idle_pct>
   <cpu_usage_limit>100.000000</cpu_usage_limit>
</global_preferences>

Press “Control-o” on the keyboard to save the file, and then press Enter to keep the file name the same.  Next, press “Control-x” to quit nano.
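For 2 GB boards, the arithmetic behind this override can be sketched in a few lines of Python.  This is purely illustrative (the helper function is ours, not part of BOINC, and it assumes BOINC's stock busy-time memory limit of roughly 50% of RAM):

```python
# Illustrative sketch (not part of BOINC): decide whether the memory
# override is needed, given the contents of /proc/meminfo.
def needs_override(meminfo_text, required_kb=1_900_000):
    """Rosetta work units need ~1.9 GB.  With BOINC's default busy-time
    limit only about half of RAM is usable, so boards where half the RAM
    falls short of the requirement must raise the limit to 100%."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            total_kb = int(line.split()[1])
            return total_kb // 2 < required_kb
    raise ValueError("MemTotal not found")

print(needs_override("MemTotal:        2000000 kB"))  # 2 GB board -> True
print(needs_override("MemTotal:        4000000 kB"))  # 4 GB board -> False
```

On a real system you would feed in the text of /proc/meminfo; a 2 GB Pi 4 reports roughly 2,000,000 kB of MemTotal.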

Next, using your desktop or laptop PC, head to http://boinc.bakerlab.org and create an account, and while there, be sure to join the “crunch-on-arm” team!  

Back on the Raspberry Pi, we can now run ‘boinctui’ from the command prompt, and a terminal GUI will load.  Press F9 on the keyboard, to bring down the menu choices.  Navigate to the right, to Projects.  Make sure Add Project is highlighted, and press Enter.  You will see the list of available projects to choose from, choose Rosetta, select “Existing User” and enter the credentials you created on the website a moment ago.  

It will take a moment, but, Rosetta will begin downloading the necessary files and then download some work units, and begin crunching data on your Raspberry Pi 4!

You can press ‘Q’ to quit boinctui and it will continue crunching in the background.

 

Nvidia Jetson

If you have an Nvidia Jetson Nano, you can actually follow the same directions outlined above directly on the Nvidia-provided version of Ubuntu.  To recap, these are the steps:

  • Open a Terminal, and run ‘sudo apt-get update && sudo apt-get upgrade’.  After that is complete, reboot.
  • Using your desktop or laptop PC, head to http://boinc.bakerlab.org and create an account, and join the “crunch-on-arm” team
  • Back on the Jetson Nano, run ‘sudo apt-get install boinc-client boinctui’
  • Run ‘boinctui’, press F9, navigate to Projects, Add Project, and choose Rosetta@Home.  Choose an Existing Account, enter your credentials, and wait for some work units to arrive!

 

Other Boards

If you have other single board computers that are 64-bit, have 2 GB of RAM, and run Armbian, the process is the same for those devices as well!  Examples of boards that could work include Rockchip RK3399 boards like the NanoPi M4 or T4, OrangePi 4, or RockPro64; Allwinner H5 boards like the Libre Computer Tritium H5 or NanoPi K1 Plus; or Amlogic boards like the Odroid C2, Odroid N2, or Libre Computer Le Potato.  Additionally, 96Boards offers high performance boards such as the HiKey960 and HiKey970, Qualcomm RB3, or Rock960 that all have excellent 64-bit Debian-based operating systems available.

For any of those, simply install the ‘boinc-client’ and ‘boinctui’ packages, and add the Rosetta project!

Of course, if you just so happen to have a spare Ampere eMAG, Marvell ThunderX, or ThunderX2 lying around, those would work quite nicely as well.


Running AI Workloads on Arm Servers

Arm’s Role in Processing AI Workloads

The past several years have seen enormous gains in Artificial Intelligence, Machine Learning, Deep Learning, Autonomous Decision Making, and more.  The availability of powerful GPUs and FPGAs both on-premise and in the cloud has certainly helped, but more and more of this AI processing is actually being done at the Edge, in small devices.  The popularity of Amazon Alexa, Google Home, and AI-enabled features in smartphones such as Apple’s Siri has skyrocketed over the past few years.  Frameworks and models such as TensorFlow, PyTorch, Caffe, and others have matured, and newer, lightweight versions such as TensorFlow Lite have come along, alongside the broader TinyML movement, designed to allow machine learning on the smallest devices possible.  Local processing of audio and detecting specific sounds via waveform pattern matching, object recognition in a camera’s frame, motions and gestures being monitored and observed, and vehicle safety systems that detect and respond immediately to changing conditions with no human intervention are some of the most common applications.

The work that it takes to develop these AI models is very specialized, but ultimately algorithms are created, a large sample of training data is fed into the system, and a model is developed with a confidence factor and accuracy value.  Once the model is deployed, real-time stream processing occurs, and actions can be taken based upon the results of data flowing through the application.  In the case of a computer vision application, for example, identifying certain objects can result in alerts (hospital staff notified), corrective actions (apply the brakes immediately), or data stored for later use.

As mentioned, more and more AI/ML is actually being processed at the Edge, on small form factor devices.  And small form factor devices tend to be powered by Arm SoCs, as opposed to the more power hungry x86 designs commonplace in laptops and desktops.  Home devices like Alexa, Google Home, and nearly all smartphones are based on Arm SoCs.  Thus, AI models need to be created, tested, and verified for compatibility on Arm-powered devices.  Even if an algorithm is developed and trained on a big GPU or FPGA, the resulting model should still be tested on Arm SoCs to ensure proper functionality.  In order to help speed the testing process, miniNodes now offers hosted Arm microservers with dedicated AI accelerators that can offload AI tasks from the CPU and offer excellent machine learning performance.  Workloads such as self-driving vehicle object detection, navigation, guidance, and behavior models; image classification and object recognition from cameras and video streams; convolutional neural networks and matrix multiplication; robotics; weather simulation; and many other types of deep learning activity can be quickly and easily processed.

Arm Lowers the Cost of AI Processing

AI training and inference in the cloud running on Arm microservers at miniNodes also offers a distinct cost advantage over Amazon AWS, Microsoft Azure, or Google GCE.  Those services can very quickly cost thousands or tens of thousands of dollars per month, but many AI workloads can get by just fine with more modest hardware when paired with a dedicated AI accelerator like a Google Coral TPU, Intel Movidius NPU, or Gyrfalcon Matrix Processing Engine.  AWS, Azure, and GCE provide great AI performance, sure, but you also pay heavily for the processor, memory, storage, and other components of the overall system.  If you are ready to make use of those immense resources, wonderful.  But if you are just starting out, are just learning AI/ML, are only beginning to test your AI modeling on Arm, or just have a lightweight use case, then going with a smaller underlying platform while retaining the dedicated AI processing capability can make more sense.

miniNodes is still in the process of building out the full product lineup, but in the meantime Gyrfalcon 2801 and 2803 nodes are online and ready, with up to 16.8 TOPS of processing for ResNet, MobileNet, or VGG models.  They are an easy, cost-effective way to get started with AI processing on Arm!

Check them out here:  https://www.mininodes.com/product/raspberrypi-4-ai-server/


Recap: Building an Arm-Powered IoT, Edge, and Cloud Infrastructure

Intro

At Arm’s annual TechCon event in San Jose, Arm CEO Simon Segars presented a vision of the future where a trillion connected devices interact seamlessly with each other and pass data between the Cloud, the Edge, and the Internet of Things, at a scale unimaginable even just a few years ago. Self-driving cars will generate massive amounts of sensor information and data, 5G wireless will enable increased connection speeds and reduced latency, and artificial intelligence will provide scientific breakthroughs in materials, technologies, medicines, and energy. This vision of the future state of the connected world is something we have heard about for several years now, with countless written articles, interviews, social media posts, conference talks, and various other forms of media addressing the topic.

However, when seeking out real-world examples of this architecture in practice to help learn and understand how the bits and pieces work together, we came up empty. There were no purpose-built sample projects, pre-written code examples, or other working prototypes of these principles available. Surely there are some internal, private teams building out this type of infrastructure for specific use-cases and organizational needs, but there were no public / open projects to learn from.

Thus, it was time to take action, and build a prototype infrastructure ourselves! With the help of the Arm Innovator Program, we set out on a journey to develop a proof-of-concept that encapsulates as many of these concepts as possible, leveraging currently available technologies and showcasing Arm’s diverse portfolio of products and ecosystems. With help from the Works on Arm program via Packet.com, we began brainstorming.  Our goal was to deploy IoT endpoints to a handful of locations around the world, and capture environmental data via sensors on those devices. From there, we wanted to feed that data to a local Edge Server, which would be responsible for translating the data to a usable format and sending it further upstream, to a Cloud Server functioning as a data warehouse and visualization platform.

In this article we’ll take an in-depth look at the project, and detail the key technologies to give a better idea of what this kind of system entails. I’ll also provide a summary of our lessons learned, which hopefully help you to build and iterate faster, and avoid some potential pitfalls along the way.

Design

When thinking about the design of this project, we wanted to keep things simple, as the purpose of this exercise is to demonstrate capability and build a proof-of-concept, not an actual product shipped to real, paying customers.  Thus, we made hardware and software selections based on cost and availability, as opposed to what would be “most appropriate” for the intended use.  We also knew we would have relatively small data-sets, and reliable power and internet connectivity for all of our devices.  Real-world IoT deployments may not have these luxuries: many projects have to tolerate lost network connectivity, unreliable power delivery, or harsh environmental conditions, so your hardware and software selections may not be as straightforward as ours were.  Let’s go through our inventory of Arm-powered hardware and software, keeping in mind the rather ideal conditions we’ve got:

1. IoT Endpoints

Hardware

  • Raspberry Pi 3B+
  • Sparkfun Qwiic HAT
  • Sparkfun Lightning Detector
  • Sparkfun Environmental Combo Sensor
  • Sparkfun GPS Sensor

Software

  • Arm Mbed Linux OS
  • Arm Pelion Device Management

 

2. Edge Nodes

Hardware

  • Linaro / 96Boards HiKey, and HiKey 960

Software

  • Linaro Debian Linux

 

3. Cloud Server

Hardware

  • Ampere eMAG, hosted by Packet.com

Software

  • Debian Linux
  • InfluxDB
  • Grafana

 

As you can see, we have made some selections that fit our small project well, but as mentioned may not be suitable for all IoT use cases depending on your project’s environmental conditions.  However, let’s start detailing the items, beginning with the IoT Endpoint.  We’re using a Raspberry Pi 3B+, a Sparkfun Qwiic HAT, and Sparkfun sensors to capture Temperature, Humidity, Barometric Pressure, CO2, and tVOC (total volatile organic compounds).  We have lightning detection capability (currently not being used, but available) as well, and GPS so that we can determine precisely where the Endpoint is located.  As for software, because these devices are out in the wild, scattered literally across the globe, we needed a framework to allow remote monitoring, updating, and application deployment.  Arm Mbed Linux OS is a lightweight, secure, container-based operating system that meets these requirements.  It is currently still in Technical Preview, but is far enough along in development that it meets our project needs and is working great.  A total of 10 Raspberry Pi Endpoints were built and sent around the globe, with several across the United States, as well as Cambridge, Budapest, Delhi, Southern India, and one spare unit left over for local testing.

Turning to our Edge Nodes, these are the simplest component in our project’s infrastructure. These are 96Boards devices, chosen for their support and ease-of-use.  Linaro and the 96Boards team do an excellent job of building ready-made Debian images with updated kernels, applications, and drivers for their hardware, making for a great out-of-the-box experience. Two of these devices are currently provisioned, one in India and one in the United States, each serving their geographic region. The devices aggregate the IoT Endpoint data stream, convert it to the format needed by the Cloud Server, and publish the data to the Cloud.

Finally, the Arm-powered Cloud Server is an Ampere eMAG server, hosted by Packet.com. It is an enterprise-grade machine, and functions as the data warehouse for all of the IoT data, as well as a visual platform for charting and viewing the data in a time-series fashion thanks to InfluxDB and Grafana. Packet.com has datacenters around the world, and their native tooling and user interface make deploying Arm Servers quick and easy.

Now that the system architecture has been described, let’s take a look at the application architecture, and start to dissect how data flows from the IoT Endpoints, to the Edge, to the Cloud. As mentioned, Mbed Linux OS is a container-based OS, which is to say that it is a minimal underlying operating system based on Yocto, providing a small, lightweight, secure foundation to which the Open Container Initiative (OCI) “RunC” container engine is added.  RunC can launch OCI compliant containers built locally on your laptop, then pushed to the Endpoint via the Mbed Linux tooling, no matter where the device is located.  In our particular case, we chose a small Alpine Linux container, added Python, added the Sparkfun libraries for the sensors, and created a small startup script to begin reading data from the sensors when the container starts.  The container also has an MQTT client in it, which is responsible for taking that sensor data, turning it into a small JSON snippet, and publishing it to a specific known location (the Edge Server).
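As a rough illustration of the kind of JSON snippet such a container could publish, here is a minimal sketch; the field names, node ID, and topic are invented for this example, as the project's actual schema isn't shown here:

```python
import json
import time

def make_payload(node_id, readings, lat, lon):
    """Assemble one sensor sample into a compact JSON string for MQTT.
    'readings' is a dict of sensor name -> value."""
    message = {
        "node": node_id,
        "ts": int(time.time()),  # epoch seconds
        "lat": lat,
        "lon": lon,
    }
    message.update(readings)
    return json.dumps(message)

payload = make_payload(
    "pi-budapest-01",
    {"temperature_c": 21.4, "humidity_pct": 38.0, "pressure_hpa": 1013.2,
     "co2_ppm": 412, "tvoc_ppb": 19},
    47.4979, 19.0402,
)
# A real client would then publish it, e.g. with paho-mqtt:
# client.publish("sensors/pi-budapest-01", payload)
```

Keeping the payload a flat JSON object like this makes the downstream parsing on the Edge Node trivial.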

The Edge Servers run a more traditional Debian operating system, with Python installed as well.  A Python script running as a daemon captures and parses the incoming MQTT messages from the IoT Endpoints, converts them to InfluxDB formatted queries, and publishes them to the specified Influx database, which is running on the Ampere eMAG Cloud Server.
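The conversion the daemon performs can be sketched as a small pure function that reshapes one JSON message into InfluxDB's line protocol; the measurement, tag, and field names here are invented for illustration, not taken from the actual project:

```python
import json

def to_influx_line(measurement, message_json):
    """Convert one JSON sensor message into an InfluxDB line-protocol
    string: measurement,tag_set field_set timestamp_ns."""
    msg = json.loads(message_json)
    tags = f"node={msg.pop('node')}"
    ts_ns = msg.pop("ts") * 1_000_000_000  # epoch seconds -> nanoseconds
    fields = ",".join(f"{key}={value}" for key, value in sorted(msg.items()))
    return f"{measurement},{tags} {fields} {ts_ns}"

line = to_influx_line(
    "environment",
    '{"node": "pi-delhi-01", "ts": 1572000000, '
    '"temperature_c": 29.1, "co2_ppm": 455}',
)
print(line)
# environment,node=pi-delhi-01 co2_ppm=455,temperature_c=29.1 1572000000000000000
```

A real daemon would then POST each line to the Influx HTTP write endpoint on the Cloud Server.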

Finally, the Cloud Server is an enterprise-grade Ampere eMAG Arm Server.  It is graciously hosted by the Works on Arm project at Packet.com, in their New Jersey datacenter. This server is also running Debian, and has InfluxDB and Grafana installed for storage and visualization of the data being sent to it from the Edge Nodes.  Thus, our IoT, Edge, and Server are all Arm-powered!

Construction Challenges

Building a container to hold our application did prove more challenging than anticipated, as a result of some needed functionality not being provided by the ready-made Mbed Linux downloads. Normally, this could be easily solved by adding the desired packages to the Yocto build scripts and rebuilding from source…however, there is one additional and very unique quirk to this project: we decided to exclusively use Arm-powered Windows on Snapdragon laptops to build the project!  These laptops are highly efficient, with all-day battery life and far better performance than previous generations offered.  One limitation, however, is that they are currently unable to run Docker, which we would need in order to re-build Mbed Linux from source.  Thus, instead of adding the necessary packages to Yocto and recompiling, we had to manually port Device Tree functionality, gain access to the GPIO pins on the Pi, enable the I2C bus and tooling, and finally expose that functionality from the host OS to the container, all by way of manually lifting from Raspbian.  Obviously, we placed this limitation upon ourselves, but it does demonstrate that there are still a few shortcomings in the developer experience on Arm.

A second valuable lesson learned concerns the native Mbed tooling for initially deploying devices.  Provisioning and updating devices with Pelion Device Management is a straightforward process, except for one small but critical hiccup we experienced.  It is worth noting here again that Mbed Linux OS is in a Technical Preview status, and the feedback we were able to give to the Mbed team as a result of this process has been incorporated and will make the final product even better!  However, when following the documentation to provision devices for the first time, a Developer Certificate is issued. That certificate is only valid for 90 days, and after that time you can no longer push containers to a device in the field. The certificate can certainly be renewed via the re-provisioning process, but you must be on the same network as the device in order to perform that action. Our devices are already out in the field, so that is not possible at this point.  Thus, we have a fleet of devices that cannot receive their intended application.  On the plus side, this exercise proved its worth by highlighting this point of failure, and resulted in a valuable documentation update so that your project can be a success!

Conclusion

In the end, we were able to successfully provision just a few devices that we still had local access to, and prove that the theory was sound and demonstrate a functional prototype at Arm TechCon!

Using a pair of freshly provisioned Raspberry Pi’s, the containerized application was pushed Over The Air to them via the Mbed CLI.  Pelion showed the devices as Online, and the device and application logs in the Dashboard reported the container had started successfully.  Sure enough, on the Edge Node, data began streaming in, and the Python daemon began taking those transmissions, translating them to Influx, and sending them upstream to the Cloud Server.  Logging into Grafana running on the Cloud Server, that data could then be inspected and visualized.

Thus, while it wasn’t quite as geographically diverse as hoped, we did actually accomplish what we set out to do, which was build an end-to-end IoT, Edge, and Cloud infrastructure running entirely on Arm!  The data that is flowing is certainly just a minimal example, but as a proof-of-concept we can truthfully say that the design is valid and the system works!  Now, we’re excited to see what you can build to bring Simon Segars’ vision of the connected future to life!


Raspberry Pi 4 AI Server Now Available

As pioneers in the Arm micro server ecosystem, miniNodes has been an innovator and leading expert in the use of small devices to provide compute capacity at the Edge, has watched as IoT has matured and impacted all industries, and is now witnessing AI and Machine Learning depart the Cloud to instead be performed on-device or at the Edge of the network. More and more phones, home assistant devices (such as the Amazon Echo), and even laptops now include custom AI accelerator chips designed to handle voice recognition, gesture and motion control, object detection, video and camera feed analysis, and many other deep learning tasks.

The AI models and algorithms that make this happen have to be trained and tested on specialized hardware accelerators as well, and historically that has been very expensive to perform in the cloud. miniNodes is taking a different approach however, and pairing custom hardware AI Accelerators with cost effective Raspberry Pi 4 servers, to lower the cost of testing and training these models while still maintaining high levels of performance. Deep learning, neural network, and matrix multiplication activities can be offloaded to the AI hardware, rapidly accelerating the model training.

The first product to launch in the new miniNodes AI Server lineup is a Raspberry Pi 4 server combined with a Gyrfalcon 2801 NPU, for a maximum of 5.6 TOPS of dedicated AI processing power. In the future, we will expand the lineup to include Google Coral and Intel Movidius hardware as well.

The Raspberry Pi has always been one of the most popular hosted Arm Servers at miniNodes, even as far back as the original Raspberry Pi (1) Model B, some of which are still running! Over the years, we upgraded to Raspberry Pi 2’s, 3’s, and the 3+. So, it was only a matter of time until we deployed new, faster Raspberry Pi 4 servers.

However, with the launch of the Pi 4 and its increased capabilities, we decided it was time to upgrade our infrastructure, management, and backend systems to match. That work is actually still ongoing, but in order to start testing the ability to run AI workloads, we have made a few units available for early adopters to begin testing their ML models. If you are looking for a cost effective way to get started with AI processing or are interested in testing AI/ML on Arm cores, this is a great way to get started. Check out the new miniNodes AI Arm Server here: https://www.mininodes.com/product/raspberrypi-4-ai-server/

And if you have any questions, just drop us a note at info@mininodes.com!


ArmTechCon Recap

As you may have seen here and here, miniNodes recently got invited to participate at ArmTechCon, inside Arm’s own “Innovation Pavilion” in the Expo Hall.  Because our core business of hosting tiny Arm Servers isn’t that exciting to show off, especially at the biggest Arm ecosystem event of the year, we partnered with Robert Wolff and the awesome team at 96Boards to come up with something a bit more intriguing.   🙂

After some back and forth, we landed on a solar powered, connected, mobile developer and edge computing platform. The idea was to build a self-contained and self-powered box that could be taken out and used in geographically isolated areas, that could still have connectivity back to a central cloud provider. The actual use cases could vary dramatically, but the common theme is that there is a lack of infrastructure, electricity, or wifi in the targeted region. The box would be powered by solar panels for this iteration, but could also accept other renewable sources such as wind, hydroelectric via a waterwheel or impeller, geothermal, or more.

So, as one potential use case, we envisioned using the box in remote villages or locales that don’t have the typical infrastructure needed to teach development, AI, machine learning, edge computing, remote code or container deployment, or other advanced computer science topics.

The end goal is to provide everything as open source, with a Bill of Materials and instructions for anyone to replicate the build, using readily available, off-the-shelf parts with no customization necessary. For the demo unit though, the project hasn’t made it quite that far yet.  For this prototype, the box consisted of a foldable solar panel array, hooked up to a charge controller, which then fed a battery pack. The battery pack was run over to an inverter, so that we could power multiple standard devices. The first device to be powered was a 96Boards Dragonboard, which had a small LCD attached for graphical output, and a 4G LTE cellular mezzanine which provided data connectivity to the Dragonboard.  Thus, as long as there is cell service, the Dragonboard has connectivity to the internet!  At that point, we had effectively built a solar powered, self sustaining compute workstation that could connect to the internet nearly anywhere!

However, because we were just doing a proof of concept, we thought it would be fun to go even one step further!  Next, we set up sharing on the Dragonboard’s cellular connection, and ran an ethernet cable from the Dragonboard over to a Raspberry Pi 3 Compute Module.  This Pi was running a service from Microsoft called Azure IoT Edge, a product that allows you to remotely push containers and code to an IoT device, or receive data and telemetry back from a device out in the wild.  Thus, as long as there is adequate sunlight (or another renewable source of power) and cell coverage, the box can be remotely monitored and even updated from anywhere.  Or, thanks to its LCD and USB keyboard, it can be used as a workstation in places where infrastructure is lacking.

Another potential use case for the platform could be as an environmental monitoring solution. When equipped with a gyroscope, the box could detect movements from events such as a rock slide, avalanche, mud slide, volcanic activity, etc. Any anomaly can be reported back to the central servers immediately for analysis.
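A simple threshold check is enough to flag that kind of movement. The sketch below is hypothetical (the function, baseline, and threshold are ours, treating the motion sensor as an accelerometer reporting g-forces), but it shows the shape of the logic such a box could run:

```python
# Hypothetical sketch of an on-box anomaly check: flag any motion sample
# whose magnitude deviates too far from the resting baseline.
import math

def is_anomaly(sample, baseline=(0.0, 0.0, 1.0), threshold=0.5):
    """Return True when a 3-axis reading (in g) deviates from the resting
    baseline (gravity on the z-axis) by more than the threshold."""
    deviation = math.sqrt(sum((s - b) ** 2 for s, b in zip(sample, baseline)))
    return deviation > threshold

print(is_anomaly((0.02, -0.01, 0.98)))  # at rest -> False
print(is_anomaly((1.40, 0.60, 0.30)))   # sudden jolt -> True
```

Only the flagged samples would need to be sent over the cellular link, keeping bandwidth use minimal.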

When equipped with a camera, the box could also visually monitor the environment, and detect changes in imagery such as a smoke plume for early forest fire detection, wildlife movement, vehicles approaching locations where there should not be any, or more.

Finally, because of the device’s Raspberry Pi Compute Module carrier board, the box has the ability to run targeted workloads of its own, for extreme edge computing. The workloads can be updated, changed, and monitored remotely, again due to the Dragonboard’s cellular connectivity to the Microsoft Azure IoT Edge platform.

ArmTechCon was a big success, and it’s incredible what can be built using Arm technology.  Be sure to check back for status updates as the solar compute box undergoes future development and iterations!


ARM Server Update, Summer 2018

Continuing our quarterly ARM Server update series: it is now Summer 2018, so it is time to review the ARM Server news and ecosystem updates from the past few months!  This blog series only covers the ARM Server highlights, but for more in-depth ARM Server news be sure to check out the Works on Arm Newsletter, delivered every Friday by Ed Vielmetti!

Looking at our recent blog posts, the most important headline seems to be the rumored exit from the business by Qualcomm.  Although this has not yet been confirmed, if true it would be a major setback for ARM Servers in the datacenter.  The Qualcomm Centriq had been shown by Cloudflare to be very effective for their distributed caching workload, and had been shown by Microsoft to be running a portion of the Azure workload as well.

However, just as Qualcomm is rumored to be exiting, Cavium has released the new ThunderX2 to general availability, and several new designs have now been shown and are listed for sale.  The ThunderX2 processor is a 32-core design that can directly compete with Xeons, and provides all of the platform features that a hyperscaler would expect.

Finally, in software news, Ubuntu has released its latest 18.04 Bionic Beaver release, which is an LTS version and thus offers 5 years of support.  As in the past, there is an ARM64 version of Ubuntu, which should technically work on any UEFI-standard ARM Server.  Examples include Ampere X-Gene servers, Cavium ThunderX servers, Qualcomm, Huawei, HP Moonshot, and AMD Seattle servers.

As always, make sure to check back for more ARM Server and Datacenter industry news, or follow us on Twitter for daily updates on all things ARM, IoT, single board computers, edge computing, and more!



Prototype Raspberry Pi Cluster Board

The first samples of the miniNodes Raspberry Pi Cluster Board have arrived, and testing can now begin!

Thanks to the very gracious Arm Innovator Program, miniNodes was able to design and build this board with the help of Gumstix!  The design includes 5 Raspberry Pi Compute Module slots, an integrated Ethernet switch, and power delivered to each node via the PCB.  All that is required are the Raspberry Pi 3 CoMs and a single power supply to run the whole cluster.

The second revision of the board is now complete (added a power LED, Serial Port header, and individual on/off switches), and pre-orders are underway here:  https://www.mininodes.com/product/5-node-raspberry-pi-3-com-carrier-board/



miniNodes ARM Innovators Program Interview

The full Arm Innovators Program interview is now posted, and we are proud to be highlighted by Arm for our innovations in the Arm Server ecosystem!

As you can see, we are currently prototyping a Raspberry Pi Cluster PCB that will hold 5 Raspberry Pi Computer-on-Module (CoM) boards, with a power input and Ethernet switch built in.

This Raspberry Pi Cluster Board will allow the Docker, Kubernetes, OpenFaaS, MinIO, and other cluster projects to easily develop, test, and build their software in a cheap and convenient way, with no cabling mess.  Home automation, IoT, and hardware hacking are other potential uses for the board.

We’re still a few weeks away from launching, but keep watching this space as we will be sure to make an announcement as soon as it is ready!



Fedora IoT Edition Approved

The Fedora Council has authorized a new Fedora Edition (as opposed to a Spin), dedicated to IoT devices and functionality!  Fedora ARM developer Peter Robinson is heading up the effort, congratulations to him!  He has information available on his blog located here:  https://nullr0ute.com/2018/03/fedora-iot-edition-is-go/, and there is also an official Ticket capturing the Approval located here:  https://pagure.io/Fedora-Council/tickets/issue/193

The Wiki is just getting built out now, so there is not a whole lot of information on it quite yet, but keep checking back as it takes shape:  https://fedoraproject.org/wiki/Objectives/Fedora_IoT