Posted on 17 Comments

How to Run Rosetta@Home on Arm-Powered Devices

This week, after an amazing Arm community effort, the Rosetta@Home project released support for sending work units to 64-bit Arm devices, such as the Raspberry Pi 4, Nvidia Jetson Nano, Rockchip RK3399-based single board computers, and other SBCs with 2 GB of memory or more.

Sahaj Sarup from Linaro, the Neocortix team, Arm, and the Baker Lab at the University of Washington all played a role in helping port the Rosetta software to aarch64, test it in the Ralph (Rosetta ALPHa) staging environment, validate the scientific results, and eventually push it to Rosetta@Home.

Now, anyone with spare compute capacity on an Arm-powered SBC running a 64-bit OS can contribute to the project by running BOINC, crunching data and performing protein-folding calculations that help researchers target the COVID-19 spike protein (among other medical and scientific workloads).

Here is a quick tutorial on how to get started, using a native operating system on your device.  This is not the only way to run Rosetta@Home, but it is intended for technical users who want to run their own OS and manage the system themselves.

Raspberry Pi 4

To fight COVID-19 using a Raspberry Pi 4, you need a board with 2 GB or 4 GB of RAM.  The Rosetta work units are large scientific calculations, and they require 1.9 GB of memory to run.  You will also need a 64-bit OS, so Raspbian will not work, as it is 32-bit.  Instead, download and flash Ubuntu Server from the official source, located here:  https://ubuntu.com/download/raspberry-pi.

Once the SD Card is written and your Pi 4 has booted up, connect an Ethernet cable, and be sure to run ‘sudo apt-get update && sudo apt-get upgrade’ to make sure the system is up to date.  A reboot may be necessary at this point.  Once the system comes back up, run ‘sudo apt-get install boinc-client boinctui’ to bring in the BOINC packages.

If you have the 4 GB RAM version of the Pi 4, you can skip this next step.  If you are using the 2 GB RAM version, however, you need to override one setting to cross that 1.9 GB threshold mentioned earlier: type ‘sudo nano /var/lib/boinc-client/global_prefs_override.xml’ and enter the following to raise the default memory available to Rosetta to the maximum amount of memory on the board:

<global_preferences>
   <ram_max_used_busy_pct>100.000000</ram_max_used_busy_pct>
   <ram_max_used_idle_pct>100.000000</ram_max_used_idle_pct>
   <cpu_usage_limit>100.000000</cpu_usage_limit>
</global_preferences>

Press “Control-o” on the keyboard to save the file, and then press Enter to keep the file name the same.  Next, press “Control-x” to quit nano.
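To make sure the client actually picks up the new override, the simplest approach is to restart the BOINC service.  This is a minimal sketch, assuming the standard Debian/Ubuntu packaging that runs BOINC as a systemd service named boinc-client:

# Restart the BOINC client so it re-reads global_prefs_override.xml
sudo systemctl restart boinc-client

# Optional: confirm the service came back up cleanly
systemctl status boinc-client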

Next, using your desktop or laptop PC, head to http://boinc.bakerlab.org and create an account; while there, be sure to join the “crunch-on-arm” team!

Back on the Raspberry Pi, run ‘boinctui’ from the command prompt, and a terminal GUI will load.  Press F9 on the keyboard to bring down the menu choices, and navigate right to Projects.  Make sure Add Project is highlighted, and press Enter.  From the list of available projects, choose Rosetta, select “Existing User”, and enter the credentials you created on the website a moment ago.
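If you prefer to skip the boinctui menus entirely, the ‘boinccmd’ tool included in the boinc-client package can attach the project from the command line.  A minimal sketch, assuming the standard Rosetta@Home project URL, with your own account email and password substituted in:

# Look up your account key using the credentials created on the website
boinccmd --lookup_account http://boinc.bakerlab.org/rosetta/ you@example.com yourpassword

# Attach to Rosetta@Home using the account key returned above
boinccmd --project_attach http://boinc.bakerlab.org/rosetta/ YOUR_ACCOUNT_KEY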

It will take a moment, but Rosetta will begin downloading the necessary files, fetch some work units, and begin crunching data on your Raspberry Pi 4!

You can press ‘Q’ to quit boinctui and it will continue crunching in the background.
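If you want to check on progress later without reopening boinctui, boinccmd can report status as well (a quick sketch, run on the device against the local client):

# Show the state and progress of the current work units
boinccmd --get_tasks

# Show the status of attached projects
boinccmd --get_project_status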

 

Nvidia Jetson

If you have an Nvidia Jetson Nano, you can actually follow the same directions outlined above directly on the Nvidia-provided version of Ubuntu.  To recap, these are the steps:

  • Open a Terminal, and run ‘sudo apt-get update && sudo apt-get upgrade’.  After that is complete, reboot.
  • Using your desktop or laptop PC, head to http://boinc.bakerlab.org and create an account, and join the “crunch-on-arm” team
  • Back on the Jetson Nano, run ‘sudo apt-get install boinc-client boinctui’
  • Run ‘boinctui’, press F9, navigate to Projects, Add Project, and choose Rosetta@Home.  Choose an Existing Account, enter your credentials, and wait for some work units to arrive!
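One quick sanity check worth running first: Rosetta only sends work to 64-bit clients, so confirm the OS reports an aarch64 kernel, which should be the case on the stock Nvidia-provided Ubuntu:

# Should print "aarch64" on a 64-bit Arm OS
uname -m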

 

Other Boards

If you have another 64-bit single board computer with 2 GB of RAM that runs Armbian, the process is the same for that device as well!  Examples of boards that could work include Rockchip RK3399 boards like the NanoPi M4 or T4, OrangePi 4, or RockPro64; Allwinner H5 boards like the Libre Computer Tritium H5 or NanoPi K1 Plus; and Amlogic boards like the Odroid C2, Odroid N2, or Libre Computer Le Potato.  Additionally, 96Boards offers high-performance boards such as the HiKey960 and HiKey970, Qualcomm RB3, and Rock960, all of which have excellent 64-bit Debian-based operating systems available.

For any of those, simply install the ‘boinc-client’ and ‘boinctui’ packages, and add the Rosetta project!
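Since many of these boards sit right at the 2 GB minimum, it is worth a quick check that enough memory is actually available before attaching the project (a sketch using standard tools):

# Check total and available memory; Rosetta work units need about 1.9 GB
free -h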

Of course, if you just so happen to have a spare Ampere eMAG, Marvell ThunderX, or ThunderX2 lying around, those would work quite nicely as well.

Posted on Leave a comment

Running AI Workloads on Arm Servers

Arm’s Role in Processing AI Workloads

The past several years have seen enormous gains in Artificial Intelligence, Machine Learning, Deep Learning, Autonomous Decision Making, and more.  The availability of powerful GPUs and FPGAs both on-premise and in the cloud has certainly helped, but more and more of this AI processing is actually being done at the Edge, in small devices.  The popularity of Amazon Alexa, Google Home, and AI-enabled smartphone features such as Apple’s Siri has skyrocketed over the past few years.  Frameworks and models such as TensorFlow, PyTorch, and Caffe have matured, and newer, lightweight versions have come along, such as TinyML, TensorFlow Lite, and other libraries designed to allow machine learning on the smallest devices possible.  Common applications include local processing of audio and detection of specific sounds via waveform pattern matching, object recognition in a camera’s frame, monitoring of motions and gestures, and vehicle safety systems that detect and respond immediately to changing conditions with no human intervention.

The work it takes to develop these AI models is very specialized, but ultimately algorithms are created, a large sample of training data is fed into the system, and a model is developed with a confidence factor and accuracy value.  Once the model is deployed, real-time stream processing occurs, and actions can be taken based upon the results of data flowing through the application.  In a computer vision application, for example, identifying certain objects can result in alerts (hospital staff notified), corrective actions (apply the brakes immediately), or data stored for later use.
As mentioned, more and more AI/ML is actually being processed at the Edge, on small form factor devices.  And small form factor devices tend to be powered by Arm SoCs, as opposed to the more power-hungry x86 designs commonplace in laptops and desktops.  Home devices like Alexa, Google Home, and nearly all smartphones are based on Arm SoCs.  Thus, AI models need to be created, tested, and verified for compatibility on Arm-powered devices.  Even if an algorithm is developed and trained on a big GPU or FPGA, the resulting model should still be tested on Arm SoCs to ensure proper functionality.  To help speed the testing process, miniNodes now offers hosted Arm microservers with dedicated AI accelerators that offload AI tasks from the CPU and offer excellent machine learning performance.  These can quickly and easily process workloads such as self-driving vehicle object detection, navigation and guidance, and behavior models; image classification and object recognition from cameras and video streams; convolutional neural networks and matrix multiplication workloads; robotics; weather simulation; and many other types of deep learning activities.

Arm Lowers the Cost of AI Processing

AI training and inference in the cloud running on Arm microservers at miniNodes also offers a distinct cost advantage over Amazon AWS, Microsoft Azure, or Google GCE.  Those services can very quickly cost thousands or tens of thousands of dollars per month, but many AI workloads can get by just fine with more modest hardware when paired with a dedicated AI accelerator like a Google Coral TPU, Intel Movidius NPU, or Gyrfalcon Matrix Processing Engine.  AWS, Azure, and GCE provide great AI performance, sure, but you also pay heavily for the processor, memory, storage, and other components of the overall system.  If you are ready to make use of those immense resources, wonderful.  But if you are just starting out, are just learning AI/ML, are only beginning to test your AI modeling on Arm, or just have a lightweight use case, then going with a smaller underlying platform while retaining the dedicated AI processing capability can make more sense.

miniNodes is still in the process of building out the full product lineup, but in the meantime Gyrfalcon 2801 and 2803 nodes are online and ready, with up to 16.8 TOPS of processing for ResNet, MobileNet, or VGG models.  They are an easy, cost-effective way to get started with AI processing on Arm!

Check them out here:  https://www.mininodes.com/product/raspberrypi-4-ai-server/

Posted on Leave a comment

Recap: Building an Arm-Powered IoT, Edge, and Cloud Infrastructure

Intro

At Arm’s annual TechCon event in San Jose, Arm CEO Simon Segars presented a vision of the future where a trillion connected devices interact seamlessly with each other and pass data between the Cloud, the Edge, and the Internet of Things, at a scale unimaginable even just a few years ago. Self-driving cars will generate massive amounts of sensor information and data, 5G wireless will enable increased connection speeds and reduced latency, and artificial intelligence will provide scientific breakthroughs in materials, technologies, medicines, and energy. This vision of the future state of the connected world is something we have heard about for several years now, with countless written articles, interviews, social media posts, conference talks, and various other forms of media addressing the topic.

However, when seeking out real-world examples of this architecture in practice to help learn and understand how the bits and pieces work together, we came up empty. There were no purpose-built sample projects, pre-written code examples, or other working prototypes of these principles available. Surely there are some internal, private teams building out this type of infrastructure for specific use-cases and organizational needs, but there were no public / open projects to learn from.

Thus, it was time to take action, and build a prototype infrastructure ourselves! With the help of the Arm Innovator Program, we set out on a journey to develop a proof-of-concept that encapsulates as many of these concepts as possible, leveraging currently available technologies and showcasing Arm’s diverse portfolio of products and ecosystems. With help from the Works on Arm program via Packet.com, we began brainstorming.  Our goal was to deploy IoT endpoints to a handful of locations around the world, and capture environmental data via sensors on those devices. From there, we wanted to feed that data to a local Edge Server, which would be responsible for translating the data to a usable format and sending it further upstream, to a Cloud Server functioning as a data warehouse and visualization platform.

In this article we’ll take an in-depth look at the project, and detail the key technologies to give a better idea of what this kind of system entails. I’ll also provide a summary of our lessons learned, which hopefully help you to build and iterate faster, and avoid some potential pitfalls along the way.

Design

When thinking about the design of this project, we wanted to keep things simple, as the purpose of this exercise is to demonstrate capability and build a proof-of-concept, not an actual product shipped to real, paying customers.  Thus, we made hardware and software selections based on cost and availability, as opposed to what would be “most appropriate” for the intended use.  We also knew we would have relatively small data sets, and reliable power and internet connectivity for all of our devices.  Your real-world IoT deployments may not have these luxuries, so your hardware and software selections may not be as straightforward as ours were.  Many IoT projects have to be tolerant of lost network connectivity, unreliable power delivery, or harsh environmental conditions.  But we were fortunate to have consistent power and internet.  Let’s go through our inventory of Arm-powered hardware and software, keeping in mind the rather ideal conditions we’ve got:

1. IoT Endpoints

Hardware

  • Raspberry Pi 3B+
  • Sparkfun Qwiic HAT
  • Sparkfun Lightning Detector
  • Sparkfun Environmental Combo Sensor
  • Sparkfun GPS Sensor

Software

  • Arm Mbed Linux OS
  • Arm Pelion Device Management

 

2. Edge Nodes

Hardware

  • Linaro / 96Boards HiKey, and HiKey 960

Software

  • Linaro Debian Linux

 

3. Cloud Server

Hardware

  • Ampere eMAG, hosted by Packet.com

Software

  • Debian Linux
  • InfluxDB
  • Grafana

 

As you can see, we made selections that fit our small project well, but as mentioned they may not be suitable for all IoT use cases, depending on your project’s environmental conditions.  Let’s start detailing the items, beginning with the IoT Endpoint.  We’re using a Raspberry Pi 3B+, a Sparkfun Qwiic HAT, and Sparkfun sensors to capture Temperature, Humidity, Barometric Pressure, CO2, and tVOC (total volatile organic compounds).  We also have lightning detection capability (currently unused, but available), and GPS so that we can determine precisely where the Endpoint is located.  As for software, because these devices are out in the wild, scattered literally across the globe, we needed a framework to allow remote monitoring, updating, and application deployment.  Arm Mbed Linux OS is a lightweight, secure, container-based operating system that meets these requirements.  It is currently still in Technical Preview, but it is far enough along in development to meet our project needs, and it is working great.  A total of 10 Raspberry Pi Endpoints were built and sent around the globe, with several across the United States, as well as Cambridge, Budapest, Delhi, and Southern India, plus one spare unit left over for local testing.
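As an aside, when assembling an endpoint it is handy to confirm the Qwiic-attached sensors actually show up on the Pi’s I2C bus before shipping the device out.  A quick sketch using the standard i2c-tools package (bus 1 is the default I2C bus on the Raspberry Pi; the addresses reported will vary by sensor):

# Install the I2C utilities and scan bus 1 for attached sensor addresses
sudo apt-get install i2c-tools
sudo i2cdetect -y 1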

Turning to our Edge Nodes, these are the simplest component in our project’s infrastructure. These are 96Boards devices, chosen for their support and ease-of-use.  Linaro and the 96Boards team do an excellent job of building ready-made Debian images with updated kernels, applications, and drivers for their hardware, making for a great out-of-the-box experience. Two of these devices are currently provisioned, one in India and one in the United States, each serving their geographic region. The devices aggregate the IoT Endpoint data stream, convert it to the format needed by the Cloud Server, and publish the data to the Cloud.

Finally, the Arm-powered Cloud Server is an Ampere eMAG server, hosted by Packet.com. It is an enterprise-grade machine, and functions as the data warehouse for all of the IoT data, as well as a visual platform for charting and viewing the data in a time-series fashion thanks to InfluxDB and Grafana. Packet.com has datacenters around the world, and their native tooling and user interface make deploying Arm Servers quick and easy.

Now that the system architecture has been described, let’s take a look at the application architecture, and start to dissect how data flows from the IoT Endpoints, to the Edge, to the Cloud. As mentioned, Mbed Linux OS is a container-based OS, which is to say that it is a minimal underlying operating system based on Yocto, providing a small, lightweight, secure foundation to which the Open Container Initiative (OCI) “RunC” container engine is added.  RunC can launch OCI-compliant containers built locally on your laptop and pushed to the Endpoint via the Mbed Linux tooling, no matter where the device is located.  In our particular case, we chose a small Alpine Linux container, added Python, added the Sparkfun libraries for the sensors, and created a small startup script to begin reading data from the sensors when the container starts.  The container also includes an MQTT client, which is responsible for taking that sensor data, turning it into a small JSON snippet, and publishing it to a specific known location (the Edge Server).
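The actual reads and publishes happen in the container’s Python startup script, but as a rough illustration of the message shape only (not the project’s code; the broker hostname, topic, and JSON fields here are invented for the example), a single reading published from the shell would look something like this:

# Publish one JSON sensor reading to the Edge Server's MQTT broker
mosquitto_pub -h edge.example.com -t 'sensors/pi-01/environment' \
  -m '{"node":"pi-01","temperature":22.4,"humidity":41.0,"pressure":1013.2}'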

The Edge Servers run a more traditional Debian operating system, with Python installed as well.  A Python script running as a daemon captures and parses the incoming MQTT messages from the IoT Endpoints, converts each one to an InfluxDB-formatted query, and publishes it to the specified Influx database, which is running on the Ampere eMAG Cloud Server.
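The daemon itself is written in Python, but the transformation it performs is simple enough to sketch with standard shell tools.  This is only an illustration, with assumed topic names, JSON fields, database name, and cloud hostname, using the InfluxDB 1.x HTTP write API:

# Subscribe to all endpoint topics, convert each JSON message to
# InfluxDB line protocol, and POST it to the Cloud Server
mosquitto_sub -h localhost -t 'sensors/#' | while read -r msg; do
  node=$(echo "$msg" | jq -r '.node')
  temp=$(echo "$msg" | jq -r '.temperature')
  hum=$(echo "$msg" | jq -r '.humidity')
  curl -s -XPOST 'http://cloud.example.com:8086/write?db=iot' \
    --data-binary "environment,node=${node} temperature=${temp},humidity=${hum}"
done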

Finally, the Cloud Server is an enterprise-grade Ampere eMAG Arm Server, graciously hosted by the Works on Arm project at Packet.com in their New Jersey datacenter.  This server also runs Debian, and has InfluxDB and Grafana installed for storage and visualization of the data being sent to it from the Edge Nodes.  Thus, our IoT Endpoints, Edge Nodes, and Cloud Server are all Arm-powered!
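Once data is flowing, it can also be inspected directly on the Cloud Server with the InfluxDB command-line client, before ever opening Grafana.  A small sketch, again assuming the hypothetical database and measurement names from the example above:

# Query the five most recent environmental readings
influx -database 'iot' -execute 'SELECT * FROM environment ORDER BY time DESC LIMIT 5'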

Construction Challenges

Building a container to hold our application proved more challenging than anticipated, as a result of some needed functionality not being provided by the ready-made Mbed Linux downloads. Normally, this could be easily solved by adding the desired packages to the Yocto build scripts and rebuilding from source…however, there is one additional and very unique quirk to this project: we decided to exclusively use Arm-powered Windows on Snapdragon laptops to build the project!  These laptops are highly efficient, with all-day battery life and far better performance than previous generations offered.  One limitation, however, is that they are currently unable to run Docker, which we would need to rebuild Mbed Linux from source.  Thus, instead of adding the necessary packages to Yocto and recompiling, we had to manually port Device Tree functionality, gain access to the GPIO pins on the Pi, enable the I2C bus and tooling, and finally expose that functionality from the host OS to the container, all by way of manually lifting from Raspbian.  Obviously, we placed this limitation upon ourselves, but it does demonstrate that there are still a few shortcomings in the developer experience on Arm.

A second valuable lesson learned involves the native Mbed tooling for initially deploying devices.  Provisioning and updating devices with Pelion Device Management is a straightforward process, except for one small but critical hiccup we experienced.  It is worth noting again that Mbed Linux OS is in Technical Preview, and the feedback we were able to give to the Mbed team as a result of this process has been incorporated and will make the final product even better!  When following the documentation to provision devices for the first time, a Developer Certificate is issued. That certificate is only valid for 90 days, and after that time you can no longer push containers to a device in the field. The certificate can certainly be renewed via the re-provisioning process, but you must be on the same network as the device to perform that action. Our devices are already out in the field, so that is not possible at this point.  Thus, we have a fleet of devices that cannot receive their intended application.  On the plus side, this exercise proved its worth by highlighting this point of failure, and resulted in a valuable documentation update so that your project can be a success!

Conclusion

In the end, we were able to successfully provision just a few devices that we still had local access to, and prove that the theory was sound and demonstrate a functional prototype at Arm TechCon!

Using a pair of freshly provisioned Raspberry Pis, the containerized application was pushed Over The Air to them via the Mbed CLI.  Pelion showed the devices as Online, and the device and application logs in the Dashboard reported that the container had started successfully.  Sure enough, data began streaming in on the Edge Node, where the Python daemon took those transmissions, translated them to Influx, and sent them upstream to the Cloud Server.  Logged into Grafana running on the Cloud Server, that data could then be inspected and visualized.

Thus, while it wasn’t quite as geographically diverse as hoped, we did accomplish what we set out to do: build an end-to-end IoT, Edge, and Cloud infrastructure running entirely on Arm!  The data that is flowing is certainly just a minimal example, but as a proof-of-concept we can truthfully say that the design is valid and the system works!  Now, we’re excited to see what you can build to bring Simon Segars’ vision of the connected future to life!

Posted on 1 Comment

Raspberry Pi 4 AI Server Now Available

As pioneers in the Arm micro server ecosystem, miniNodes has been an innovator and leading expert in the use of small devices to provide compute capacity at the Edge. We have watched IoT mature and impact every industry, and we are now witnessing AI and Machine Learning depart the Cloud and instead be performed on-device or at the Edge of the network. More and more phones, home assistant devices (such as the Echo), and even laptops include custom AI accelerator chips designed to handle voice recognition, gesture and motion control, and object detection, analyze video and camera feeds, and perform many other deep learning tasks.

The AI models and algorithms that make this happen have to be trained and tested on specialized hardware accelerators as well, and historically that has been very expensive to perform in the cloud. miniNodes is taking a different approach however, and pairing custom hardware AI Accelerators with cost effective Raspberry Pi 4 servers, to lower the cost of testing and training these models while still maintaining high levels of performance. Deep learning, neural network, and matrix multiplication activities can be offloaded to the AI hardware, rapidly accelerating the model training.

The first product to launch in the new miniNodes AI Server lineup is a Raspberry Pi 4 server combined with a Gyrfalcon 2801 NPU, for a maximum of 5.6 TOPS of dedicated AI processing power. In the future, we will expand the lineup to include Google Coral and Intel Movidius hardware as well.

The Raspberry Pi has always been one of the most popular hosted Arm Servers at miniNodes, even as far back as the original Raspberry Pi (1) Model B, some of which are still running! Over the years, we upgraded to Raspberry Pi 2s, 3s, and the 3+. So, it was only a matter of time until we deployed new, faster Raspberry Pi 4 servers.

However, with the launch of the Pi 4 and its increased capabilities, we decided it was time to upgrade our infrastructure, management, and backend systems to match. That work is actually still ongoing, but in order to start testing the ability to run AI workloads, we have made a few units available for early adopters to begin testing their ML models. If you are looking for a cost effective way to get started with AI processing or are interested in testing AI/ML on Arm cores, this is a great way to get started. Check out the new miniNodes AI Arm Server here: https://www.mininodes.com/product/raspberrypi-4-ai-server/

And if you have any questions, just drop us a note at info@mininodes.com!

Posted on Leave a comment

Arm Server Update, Summer 2019

It has been a while since our last Arm Server update, and as usual there have been a lot of changes, forward progress, and new developments in the ecosystem!

The enterprise Arm Server hardware is now mostly consolidated around the Cavium ThunderX2 and Ampere eMAG products, available from Gigabyte, Avantek, and Phoenics Electronics. Each can be purchased in 1U, 2U, and 4U configurations ready for the datacenter, and high-performance developer workstations based on the same hardware are available as well. Both of these solutions can be customized with additional RAM, storage, and networking to best fit the intended workload.

Another option that exists, but is difficult to obtain in the United States, is the Huawei Hi1620, also known as the Kunpeng 920. These are also enterprise-grade servers ready to be installed in a datacenter environment, typically in a 2U chassis with configurable memory and storage options. However, availability outside of Asia is limited, and new regulations may make importing them difficult.

While the Cavium, Ampere, and (potentially) Huawei servers are available as bare-metal options shipped directly to you for installation in your own datacenter, Amazon has also made significant progress over the past few months and is rapidly becoming the most popular Arm Server provider. Amazon uses its own Arm Server CPU, called the Graviton, in its proprietary AWS EC2 A1 instances. This is quickly becoming the best way to deploy Arm Servers, as the entire system is in the Cloud and no hardware has to be purchased. The instances come in various sizes and price ranges, and experienced developers and organizations who are familiar with the AWS system can simply pay by the hour for temporary workloads. For users who are less familiar with the EC2 dashboard, less comfortable with the fluctuating billing model, or who prefer a fixed rate, we at miniNodes offer pre-configured Arm VPS servers in a range of sizes and prices, hosted atop the AWS platform.

Finally, the Edge of the network continues to be where a lot of innovation is occurring, and Arm Servers are a perfect fit for deployment as Edge Servers, due to their low power consumption, cost-effectiveness, and wide range of sizes and formats. The MACCHIATObin has been demonstrated running workloads in the base of windmills, the new SolidRun ClearFog ITX promises to be a flexible solution, and the new Odroid N2 is an interesting device with “enough” performance to satisfy a wide range of workloads that don’t need to rely on the Cloud, and can instead deliver services and data to end-users (or other devices) faster by being located closer to where compute is needed.

As always, check back regularly for updates and Arm Server news, or follow us on Twitter where we share Arm related news on a daily basis!

Posted on 4 Comments

64-bit Ubuntu Raspberry Pi 3 Arm Server Image Now Available

This morning there is some great news for fans of the popular Raspberry Pi 3 single board computer who are looking to run 64-bit Ubuntu Arm Server on their boards!

 

The Ubuntu team, with support from Arm, has released a ready-made image that can be written to an SD Card and booted directly on a Raspberry Pi 3B or 3B+, with no configuration necessary.  We were able to give this image a test, and although it is technically considered a beta, most everything seems to be working, and all of the standard functionality one would expect from Ubuntu Server is intact!

 

You can download the image here:  http://cdimage.ubuntu.com/releases/18.04/release/

How to Install Ubuntu on the Raspberry Pi 3

Once the image is downloaded, it needs to be extracted, and can then be written to an SD Card.  Of course, the higher the read and write speed of the SD Card, the better overall system performance will be.
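For those who have not written an image before, the whole process can be done from a Linux machine with xz and dd.  A minimal sketch; the filename below is just an example of the 18.04 arm64 raspi3 image naming, and /dev/sdX must be replaced with your actual SD Card device (double-check it with lsblk first, as dd will overwrite the target):

# Decompress the downloaded image
xz -d ubuntu-18.04.3-preinstalled-server-arm64+raspi3.img.xz

# Write the image to the SD Card
sudo dd if=ubuntu-18.04.3-preinstalled-server-arm64+raspi3.img of=/dev/sdX bs=4M status=progress conv=fsync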

 

After getting the image written and inserted into the Pi, take note that the first boot may take a few minutes while the OS goes through a few setup routines.

 

A quick run through the system showed that the basic console hardware of HDMI, USB, and Ethernet all worked out of the box, as did WiFi.  SSH is enabled and working, and normal software installation and updating via the ‘apt’ package manager works great.  As an added bonus, the image comes with ‘cloud-init’ set up to automatically expand the partition to the maximum capacity of the SD Card, generate SSH keys, configure networking for the LXD container runtime (which is also preinstalled), and finally force a password change upon first login to the system.
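That makes a headless setup easy: find the Pi’s IP address from your router and SSH straight in.  A small sketch, assuming the default ‘ubuntu’ user these preinstalled images ship with (you will be prompted to change the password at first login, as noted above; the IP address is just an example):

# Log in over SSH; replace the address with your Pi's actual IP
ssh ubuntu@192.168.1.50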

 

All said, this means the Ubuntu Arm Server image is ready to use immediately upon writing the SD Card and booting the Pi!

 

In the past, it was technically possible to bootstrap a system using a custom-built kernel and an Ubuntu rootfs, then add Pi-specific firmware and drivers.  After that, you had to add users, manually configure networking, and install even basic system utilities.  That process required in-depth knowledge of system installation and configuration, and was not something most users could tackle on their own.  However, thanks to the efforts of the Ubuntu Arm team in creating this new ready-made image, no advanced knowledge of the Linux build process is required, and even casual Raspberry Pi users can be up and running easily!

 

One final thing to keep in mind is that this image is fully intended to be a 64-bit Ubuntu Arm Server platform!  Use cases such as file or print servers, DNS, MySQL or other database servers, web front-end caching, or other lightweight services all make sense for this platform.  It can also be used for installation and testing of aarch64 software, developing and compiling Arm64 applications, exploring containers, or even production workloads where possible!  Small, distributed compute workloads, IoT services, Industrial Internet of Things, environmental monitoring, remote compute capacity in non-traditional settings, and many other use cases are all possible.  While a desktop *can* be installed, due to the limited memory on the Raspberry Pi only a lightweight desktop like LXDE or XFCE will truly work; both MATE and GNOME quickly run out of memory, move to swap, and slow the system to a crawl.  Even so, desktop performance in this image is not optimized, so sticking with its intended use as a Server OS makes the most sense.

 

In summary, thanks to a collaborative effort from Arm and the Ubuntu teams, the community now has a ready-made Raspberry Pi 3B(+) 64-bit Ubuntu Arm Server image!

Posted on Leave a comment

ARM Server Update, Summer 2018

Continuing our quarterly ARM Server update series: it is now Summer 2018, so it is time to review the ARM Server news and ecosystem updates from the past few months!  This blog series only covers the ARM Server highlights, but for more in-depth ARM Server news, be sure to check out the Works on Arm Newsletter, delivered every Friday by Ed Vielmetti!

Looking at our recent blog posts, the most important headline seems to be Qualcomm’s rumored exit from the business.  Although this has not been confirmed at the moment, if true it would be a major setback for ARM Servers in the datacenter.  The Qualcomm Centriq had been shown by CloudFlare to be very effective for their distributed caching workload, and had been shown by Microsoft to be running a portion of the Azure workload as well.

However, just as Qualcomm is rumored to be exiting, Cavium has released the new ThunderX2 to general availability, and several new designs have now been shown and are listed for sale.  The ThunderX2 processor is a 32-core design that can directly compete with Xeons, and provides all of the platform features that a hyperscaler would expect.

Finally, in software news, Ubuntu has released its latest 18.04 Bionic Beaver release, which is an LTS version, thus offering 5 years of support.  As in the past, there is an ARM64 version of Ubuntu, which should technically work on any UEFI-standard ARM Server.  Examples include Ampere X-Gene servers, Cavium ThunderX servers, Qualcomm, Huawei, HP Moonshot, and AMD Seattle servers.

As always, make sure to check back for more ARM Server and Datacenter industry news, or follow us on Twitter for daily updates on all things ARM, IoT, single board computers, edge computing, and more!

 

Posted on 1 Comment

Prototype Raspberry Pi Cluster Board

The first samples of the miniNodes Raspberry Pi Cluster Board have arrived, and testing can now begin!

Thanks to the very gracious Arm Innovator Program, miniNodes was able to design and build this board with the help of Gumstix!  The design includes 5 Raspberry Pi Compute Module slots, an integrated Ethernet switch, and power delivered to each node via the PCB.  All that is required are the Raspberry Pi 3 Compute Modules and a single power supply to run the whole cluster.

The second revision of the board is now complete (we added a power LED, a Serial Port header, and individual on/off switches), and pre-orders are underway here:  https://www.mininodes.com/product/5-node-raspberry-pi-3-com-carrier-board/


Posted on Leave a comment

Fedora IoT Edition Approved

The Fedora Council has authorized a new Fedora Edition (as opposed to a Spin), dedicated to IoT devices and functionality!  Fedora ARM developer Peter Robinson is heading up the effort, congratulations to him!  He has information available on his blog located here:  https://nullr0ute.com/2018/03/fedora-iot-edition-is-go/, and there is also an official Ticket capturing the Approval located here:  https://pagure.io/Fedora-Council/tickets/issue/193

The Wiki is just getting built out now, so there is not a whole lot of information on it quite yet, but keep checking back as it takes shape:  https://fedoraproject.org/wiki/Objectives/Fedora_IoT

 

Posted on Leave a comment

ARM Server Update, Spring 2018

Continuing on with our quarterly updates on the ARM Server ecosystem, as usual there is quite a bit of news to report!  Let’s dive right into the analysis!

The Qualcomm Centriq continues to make headlines, with its first design win recently announced.  Hatch, a cloud gaming company, has chosen the Centriq 2400 to power its cloud gaming platform.  More information is available here:  https://www.forbes.com/sites/tiriasresearch/2018/02/20/hatch-qdt-cloud-gaming/

Qualcomm is also in the news for another reason.  Broadcom, another chip maker, has launched a hostile bid to take over Qualcomm, although Qualcomm has thus far held off the unwanted pursuit and is attempting to remain independent.  Consolidation in the chip-making space has been picking up in recent years, with NXP’s purchase of Freescale, Intel buying Altera, MACOM purchasing Applied Micro, and many more.

That leads to the next news item in the industry:  MACOM recently and quietly sold off the Applied Micro assets to a secretively named buyer, known only as Project Denver Holdings.  That group has now formed a new organization, called Ampere, which will continue the development and marketing of the X-Gene line of ARM Server processors.  More info on Ampere can be found here:  https://amperecomputing.com/

Finally, Linaro’s 96Boards team has brought to market a development workstation conforming to their Enterprise Edition standards.  The newly launched workstation features a 24-core Socionext SynQuacer SoC, plus a hard drive, memory, and video card to round out the system.  It is currently listed for sale at $1,250, so it is not cheap, but it does fill a niche that has been underserved in the market.  It can be purchased here:  http://www.chip1stop.com/web/USA/en/search.do?dispPartIds=SOCI-0000001