Installing AlmaLinux 8 on Arm, using a Raspberry Pi 4

A few weeks ago, we took a look at how to install the new Rocky Linux 8 on Arm, using a Raspberry Pi, as a replacement for CentOS.  That replacement became necessary when Red Hat altered the release strategy for CentOS, shifting it from a stable downstream rebuild to earlier, more rapid development.  However, there is also a second community build aiming to fill the gap left by Red Hat, so today we will look at the process of installing the new AlmaLinux 8, again on the Raspberry Pi 4 with community-built UEFI firmware.

Like Rocky Linux, AlmaLinux is a Linux distribution put together by the community in order to preserve the stable, predictable manner in which packages are updated.  AlmaLinux comes in both x86_64 and aarch64 builds of the OS, and of course we’ll be using the aarch64 build for our Pi.

We’re going to replicate the previous how-to for the most part, so let’s recap the hardware we’ll use:

  • Raspberry Pi 4B
  • SD Card
  • USB stick for install media
  • USB Stick or USB-to-SSD adapter for destination (permanent storage) media

To get started, we are going to download and flash the community-built UEFI firmware for the Raspberry Pi to an SD Card.  This UEFI implementation is closer in nature to a “normal” PC UEFI BIOS, and lets the Pi boot in a more standard fashion than the Raspberry Pi OS method allows.  The UEFI firmware is placed directly on an SD Card; when the Pi is powered on, it reads the UEFI firmware and can then boot from a USB stick or over the network.  To install the UEFI firmware, download the latest release .zip file (RPi4_UEFI_Firmware_v1.28.zip at the time of this writing) from https://github.com/pftf/RPi4/releases

Next, unzip this .zip file you just downloaded, and copy the contents to an SD Card.  The card needs to be formatted as FAT32, so if you are re-purposing an SD Card that had Linux on it previously you might need to delete partitions and re-create a FAT32 partition on the SD Card.  Once the files are copied to the SD Card, it will look like this:
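On a Linux machine, the re-partitioning and copy steps might look like the following sketch. The device node /dev/sdX is a placeholder; double-check yours with lsblk first, as these commands erase the card:

```shell
# WARNING: destructive. /dev/sdX is a hypothetical placeholder --
# confirm the SD Card's real device node with `lsblk` before running.
SDCARD=/dev/sdX
FIRMWARE_ZIP=RPi4_UEFI_Firmware_v1.28.zip

if [ -b "$SDCARD" ]; then
  # Create and format a single FAT32 partition spanning the card
  sudo parted -s "$SDCARD" mklabel msdos mkpart primary fat32 1MiB 100%
  sudo mkfs.vfat -F 32 "${SDCARD}1"

  # Mount the new partition and copy the UEFI firmware files onto it
  sudo mount "${SDCARD}1" /mnt
  sudo unzip -o "$FIRMWARE_ZIP" -d /mnt
  sudo umount /mnt
fi
```

The `-b` guard simply refuses to run the destructive commands unless the device node actually exists, which is cheap insurance against a typo.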

With the SD Card complete, we can now proceed to download AlmaLinux.  Browse to https://almalinux.org and click on Download.  You will have the option of x86_64 or aarch64 downloads; obviously we’ll want the Arm64 version, so click on that link and then choose a mirror close to you.  Once you are taken to the mirror’s repository, you’ll see Boot, Minimal, and DVD .iso files to choose from.  For this tutorial we’ll go with Minimal, so click on that one and your download will begin.  Once the download is complete, flash the file to a USB stick using Rufus, Etcher, Win32 Disk Imager, or any other method you prefer.
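If you are on Linux or macOS, plain dd works just as well as those tools; a minimal sketch, assuming the Minimal ISO filename below (yours may differ by release) and that the USB stick enumerates as /dev/sdY:

```shell
# Hypothetical filename and device node -- adjust to the ISO you
# downloaded, and verify the stick with `lsblk` (dd overwrites it!)
ISO=AlmaLinux-8.4-aarch64-minimal.iso
USB=/dev/sdY

# Refuse to write unless the target block device actually exists
if [ -b "$USB" ]; then
  sudo dd if="$ISO" of="$USB" bs=4M status=progress conv=fsync
  sync
fi
```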

Now that we have our SD card for booting and USB stick for installing, we just need to determine what to use for destination storage.  As the Pi doesn’t have any onboard eMMC, and the SD Card slot is occupied by our firmware, we could use another separate USB drive or network-attached storage, but for this tutorial we’ll go with a USB-to-SSD adapter, which allows us to hook up a 2.5 inch SATA SSD as our permanent storage.

Plug the SSD into the adapter, and then connect the USB plug into one of the USB 3.0 (blue) ports on the Pi.  Attach a keyboard, mouse, and monitor, insert the SD Card, and the USB Stick with AlmaLinux on it, then plug in power.  After a moment you will see a Raspberry Pi logo, and the Pi will boot from the USB stick.  The AlmaLinux installation process will begin, and if you are familiar with the CentOS installation process you will notice it’s nearly identical, since the upstream sources are the same.

[Image: AlmaLinux-8-Install]

The Raspberry Pi is not as fast as a PC or a large Arm Server, so you’ll need to be patient while the installation wizard loads, and navigating the menus can be a bit slow.  However, you will be able to set up a user account, choose your timezone, and select the destination drive to install to (the SSD).  Once satisfied, you can begin the installation, and again you’ll need to be patient while the files are copied to the SSD.  Make some coffee or tea.

[Image: AlmaLinux-Install-Complete]

Once the process does complete, you can reboot the Pi, remove the USB stick so you don’t start the whole process over, and eventually boot into your new AlmaLinux 8.4 for Arm distro!
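Once logged in, a couple of quick checks confirm the install went as expected:

```shell
# Confirm the CPU architecture and OS release of the new install
ARCH=$(uname -m)
echo "Architecture: $ARCH"   # should report aarch64 on the Pi
if [ -f /etc/os-release ]; then
  grep PRETTY_NAME /etc/os-release
fi
```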

[Image: AlmaLinux-8-Login]

The Edge AI Server Revolution (Driven by Arm, Of Course)

The past 2 years have seen rapid growth in experimentation with, and ultimately deployment and adoption of, AI/ML at the Edge. This has been fueled by dramatic increases in on-device AI processing capability, and equally dramatic reductions in the size and power requirements of devices. The Nvidia Jetson Nano with its GPU and CUDA cores, the Google Coral Dev Board containing a TPU for TensorFlow acceleration, and even microcontrollers running TinyML have quickly gained widespread adoption among developers. These devices are cheap, accurate, and readily available, allowing developers to deploy AI/ML workloads to places that were not practical just a short time ago.

[Image: hosted-nvidia-jetson-nano]

This allows developers to re-think their applications and begin to migrate AI workloads out of the datacenter, previously the only place to run AI/ML tasks, potentially saving money or improving performance by moving processing closer to where it is needed. It also allows for net-new capability, adding computer vision, object detection, pose estimation, and more, in places that previously were not possible.

In order to help prepare developers and allow them to experiment and build their skills, miniNodes is making available some Edge AI inspired Arm Servers, starting with the Nvidia Jetson Nano. These nodes are intended to be used by engineers and teams just getting started on their Edge AI journey, who are testing their applications and deep learning algorithms. Another use for an Edge AI Arm Server is light-duty AI processing, where it doesn’t make financial sense to rent big AI servers from the likes of AWS or Azure, and a smaller device will work just fine. Finally, developers and teams whose AI training is not time-sensitive, or relatively small, can achieve significant savings by using a hosted Jetson Nano for their model training, instead of local GPUs or AWS resources.

Whether you are just getting started and beginning to explore Edge AI, or have been following the trend and already have Edge AI projects underway, a miniNodes hosted Jetson Nano is a great way to gain hosted AI processing capability or reduce AWS and Azure cloud AI costs.

Arm Server Update, Spring/Summer 2021

As usual, we are overdue for an update on all things Arm Servers! Today’s announcement of the Arm v9 specification is a great time to review the state of Arm Servers, and what has changed since our last update.

First, let’s review our last update. Marvell canceled the ThunderX3 product, Ampere had announced the Altra but it wasn’t shipping, AWS Graviton was available, and Nuvia was designing a processor.

Fast forward to today, and the Ampere Altra is now becoming available, with limited stock via the Works on Arm program at Equinix Metal, and some designs shown off by Avantek, a channel supplier. Mt. Snow and Mt. Jade, as the reference platforms are known, are also formally designated as “ServerReady” parts, passing standards compliance tests.

Nuvia, the startup that was designing a new Arm Server SoC from the ground up, was purchased by Qualcomm, in an apparent re-entry into the Arm Server market (or for use in Windows on Arm laptops?). Don’t forget, Qualcomm previously had an Arm Server part, the Centriq, though they scrapped it a few years ago. So, it now remains to be seen whether Nuvia’s design will launch as a server-grade SoC, or pivot to a smaller target device.

The other emerging trend to cover is the role of Arm in the Edge Server ecosystem, where the push to move small servers out of the datacenter and closer to customers and users is rapidly gaining momentum. In this scenario, non-traditional, smaller devices take on the role of a server, and the energy efficiency, small form-factor, and varied capabilities of Arm-powered single board computers are taking on workloads previously handled by typical 1U and 2U rackmount servers in a datacenter. Small devices like the Nvidia Jetson AGX, Raspberry Pi Compute Module 4, and NXP Freeway boxes are able to perform Edge AI, data caching, or local workloads, and only send what is necessary up to the cloud. This trend has been accelerating over the past 12 to 18 months, so we may see some more niche devices or SoCs start to fill this market.

Arm Server Update, Fall 2020

The announcement yesterday of the cancellation of Marvell’s ThunderX3 Arm Server processor was a reminder that we were overdue for an Arm Server update!  So, continuing on in our regular series, here is the latest news in the Arm Server ecosystem.

As mentioned, unfortunately it appears at this time that Marvell has canceled the ThunderX3 Arm Server processor that was shown earlier this year, and would have been the successor to the ThunderX and ThunderX2 parts released previously.  The current rumors indicate that perhaps some specialized version of the SoC may survive and be used for an exclusive contract with a hyperscaler, but that means “regular” customers will not be able to acquire the part.  And with no general purpose, general availability part, the ThunderX3 will effectively be unavailable. 

That leaves AWS providing the Graviton processor in the EC2 cloud, and Ampere with their current generation eMAG Arm Server and forthcoming Ampere Altra SoC, as the only server-class Arm processors left (for now).  The Ampere Altra is brand new, and available from our friends at Packet in an Early Access Program, but no specific General Availability date has been mentioned quite yet.  This processor offers 80 or 128 cores, and is based on Arm Neoverse N1 cores.

There is another processor on the horizon though from Nuvia, a startup formed late last year who is designing an Arm-based server class SoC.  Nuvia has said it will take several years to bring their processor to market, which is a typical timeframe for an all-new custom processor design.  So in the meantime, only Amazon and Ampere are left in the market.

The NXP desktop-class LX2160A as found in the SolidRun HoneyComb could also be considered for some workloads, but it is a 16-core part based on A72 cores.

There is one other Arm Server that exists, but unfortunately it’s not able to be acquired outside of China:  the Huawei TaiShan 2280 based on the HiSilicon Kunpeng 920.  This is a datacenter part that is likely used by the large cloud providers in China, but seems difficult (or impossible) to obtain otherwise.  It is a dual processor server, with 64-cores in each processor, thus totaling 128 cores per server.

As usual, the Arm Server ecosystem moves quickly, and we look forward to seeing what’s new and exciting in our next update!

Where to Buy an Arm Server

Being Arm enthusiasts and deeply embedded in the Arm Server ecosystem, one of the questions we get asked often is,

“Where can I buy an Arm Server?”

In the past, it was difficult to actually find Arm Server hardware available to individual end-users. Not long ago, the only way to gain access to Arm Servers was to have NDAs with major OEMs, or the right connections to get engineering-sample hardware. However, over the course of the past 2 to 3 years, more providers have entered the market and hardware is now readily available to end users and customers. Here are some of the easiest ways to buy an Arm Server, although this list is not exhaustive. These servers all have great performance and are well supported thanks to standards compliance and UEFI.

First up is the Marvell ThunderX, and the newer ThunderX2. These chips are sold in servers from several vendors, in various shapes and sizes. Some examples we’ve found include the Avantek R-series in both 1U and 2U sizes, and the Gigabyte Arm offerings that closely match Avantek’s specs. There are high-density designs, single processor and dual processor options, and 10 GbE as well as SFP options available.  The ThunderX2 has been more popular in HPC environments, but even a first-generation ThunderX is a great choice, and still a very powerful machine.  They can be purchased with up to 48 cores, or in dual-processor configurations containing up to 96 cores.

Another option is the eMAG Arm Server from Ampere Computing, a company formed a few years ago.  They ship a turnkey Arm Server that is sold by Lenovo, the HR330A or the HR350A.  Their first-generation platform has 32 Arm cores running at 3.0 GHz, 42 lanes of PCIe bandwidth, and 1 TB of memory capacity, and their second-generation product, the Ampere Altra, has up to 80 Arm Neoverse N1 cores.  Current models are available for purchase from their website, or through Lenovo.

Finally, although it is marketed as a workstation, the SolidRun HoneyComb LX2 motherboard can quite easily be repurposed as a proper server.  With 16x A72 cores, support for 64 GB of RAM, up to 40 GbE networking, and PCIe expansion, it can definitely handle medium-sized workloads.  It is standards-compliant, making it easy to install your OS of choice, and affordable, so it’s a great option for getting started on Arm.

And of course, if buying physical servers and hosting them yourself, or placing them in a datacenter, is not feasible or cost effective in your situation, then our hosted Arm servers are a great option as well!  Our miniNodes Arm servers are certainly more modest in comparison to those mentioned above, but, they are a great way to get started with Arm development, testing existing code for compatibility, or lighter workloads that don’t require quite so much compute capability.

Be sure to check back often for all things Arm Server related!

Arm Server Update, Winter 2020 Edition

In the months since our last update, as usual, much has changed in the Arm Server ecosystem!  In such an emerging field, things move fast!  Here are a few observations and notes on the second half of 2019, and a look ahead to what is forthcoming in 2020 for Arm Servers.

First, Marvell has continued to focus on the HPC market, and has promoted their ThunderX2 processor in talks, marketing materials, and social media posts focused on their National Laboratory projects and installations.  There is also some preliminary talk about their next generation product, the ThunderX3, though details are limited at the time of this writing.  In a sign of confidence, Arm directly invested a significant sum of money in the Marvell Arm Server processor as well.

Meanwhile, Ampere has had continued success with their eMag processor, including server sales through Lenovo, and a Workstation version of the platform now available as well.  Similar to Marvell, Ampere has begun to discuss their next generation processor as well, stating that it will have 80 cores, and will be based on the Arm Neoverse N1 reference architecture.  Again similar to Marvell, Arm has also invested directly in Ampere to keep momentum and product development strong, and more recently Oracle has invested as well.

Amazon has been competing strongly in the Arm Server market for the past year with their Graviton platform, which powers the AWS A1 Instance Type.  At the re:Invent conference, Amazon announced the Graviton2 processor, which will be available soon, increases the core count to 64, and will “deliver 7x performance, 4x the number of compute cores, 2x larger caches, and 5x faster memory compared to the first-generation Graviton processors” according to Amazon.

The last item to make note of is the SolidRun HoneyComb platform, which is technically marketed as a Developer Workstation, but could quite easily be adapted into a small server.  It offers a 16-core NXP Layerscape SoC, 4x 10 GbE networking, SATA, PCIe, and a standard mini-ITX footprint.

As usual, if you have anything to add to the conversation, simply add your comments below, and we will continue to offer analysis and insight to all things Arm Servers!

Hosted Raspberry Pi 4 Servers, Coming Soon

Can a Raspberry Pi 4 function as a server?

Since its launch in June, many people have been wondering whether a Raspberry Pi 4 can take on the role of a small server. With 2 GB or 4 GB of RAM now available on the boards, and a significantly faster Arm processor than the previous model, a Raspberry Pi 4 Server is absolutely possible!

Longtime followers will know that miniNodes has been hosting Arm Server single board computers for years, and may already realize that our current Raspberry Pi Servers are typically sold out. Demand for them has always exceeded our capacity (sorry about that!), and customers have realized that they function great as lightweight servers for certain use-cases. Testing code and applications on Arm processors for compatibility, running simple services that don’t need a lot of CPU / compute power, early IoT product development work, Edge node and gateway workload testing, and bare-metal (non-shared) access are some of the reasons users have been drawn to our hosted Raspberry Pis. However, the capability of the nodes is certainly modest, with their 1 GB of RAM and Cortex-A53 cores.

Enter the Raspberry Pi 4 Server

The Raspberry Pi Foundation caught the world by surprise when it released the new Raspberry Pi 4 board a few months ago. As mentioned, it came with increased RAM, a faster processor, gigabit networking, and USB 3.0 ports. These upgrades directly addressed the performance and connectivity shortcomings of the previous generation, though “shortcomings” is probably unfair when talking about a computer the size of a credit card costing only $35. With these new components, the Raspberry Pi 4 is a very capable machine; more specifically, with a quad-core Cortex-A72, 4 GB of RAM, and USB 3.0 attached storage it is certainly able to fulfill the role of a small server. Content and data caching, IoT data collection and aggregation, extreme-edge compute environments, small office branches, archival storage, email relaying, (small) in-memory NoSQL datasets, and many other purpose-fit workloads are entirely possible!

[Image: Raspberry-Pi-4-Server]

As also mentioned, miniNodes has been involved in the Arm Server ecosystem for many years, and we have a lot of experience with the challenges of hosting single board computers. The biggest issue for most long-term Raspberry Pi users, and users of similar SBCs without onboard storage, is that SD Cards wear out and fail. SD Cards were never designed for constant, continual reading and writing, such as occurs when a full operating system is installed and running on them. SD Card failure leads to the Pi crashing, with potential for data corruption and data loss. Another major hurdle when deploying Raspberry Pis in a cloud hosting environment is the lack of native management, remote console access, or other typical hardware control mechanisms found on traditional servers. In fact, even placing single board computers in a datacenter is a challenge, with voltage, physical dimensions, port placement, and cabling very different from a standard 1U or 2U server chassis.

But, with some creative engineering and subject matter expertise, miniNodes is going to address these challenges and build an infrastructure to allow the Raspberry Pi 4 and other single board computers to function in a cloud server capacity that is similar to the user experience developers are already familiar with.

(Small) Arm Cloud Servers

It will take a few months, and some trial and error along the way, but our current plan is to tackle each previously identified challenge and come up with a solution that is both scalable and customer-friendly. First and foremost, we need to move storage off of the SD Card and onto a more robust medium. Thus, a new NAS system (also running on Arm!) is being developed, where each node’s OS will live, allowing higher performance, greater reliability, and data replication. Next, the ability to remotely power cycle nodes is being built, via a power distribution and relay system that can change state upon user command. Cabling, cooling, and rack mounting are also under active development, with power, networking, and the physical layout of the boards in a 2U server chassis being optimized.

A lot has been written about the heat produced by the Pi 4, and placing lots of them in a server chassis is not going to help, so we are doing careful analysis of the thermal properties of the boards, and experimenting with heatsinks and fans, as well as the placement, direction, and airflow across the boards in the chassis. Internal temperature monitoring (and alerting) is also being developed for the chassis, to ensure intervention can occur if needed.
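As a rough illustration of the kind of monitoring involved, a simple sketch that reads the SoC temperature from sysfs and flags a hypothetical threshold (the 70 C figure is an assumption, to be tuned per chassis) might look like:

```shell
#!/bin/sh
# Read the SoC temperature from sysfs (reported in millidegrees C);
# fall back to 0 if the node is absent so the script degrades
# gracefully when run off-device.
ZONE=/sys/class/thermal/thermal_zone0/temp
if [ -r "$ZONE" ]; then
  RAW=$(cat "$ZONE")
else
  RAW=0
fi
TEMP_C=$((RAW / 1000))
echo "CPU temperature: ${TEMP_C} C"

# 70 C is a hypothetical alert threshold for this sketch
if [ "$TEMP_C" -ge 70 ]; then
  echo "WARNING: temperature high, check airflow" >&2
fi
```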

Here are some of our first experiments with cooling solutions:

[Images: Raspberry-Pi-4-Server-1, Raspberry-Pi-4-Server-2, Raspberry-Pi-4-Server-3]

Finally, a complete re-architecting of the miniNodes website and management portal is needed to interface with the new components, so a website rebuild and a new user interface will be rolled out as well.

These exciting changes will result in a better customer experience, improved products and services, and higher performing nodes. We’ll keep this blog updated with the latest information and developments (and lessons learned) as we continue to make progress on the project, but early estimates are that we will be ready for launch sometime in Q1 2020. As much as we’d like to offer our Raspberry Pi 4 Servers sooner, we want to make sure we do it right, and are able to give customers an improved service to go along with the improved boards.

Stay tuned, and, if there are other Arm devices you’d like to see included, just let us know. You can find us tweeting here https://twitter.com/mininodes, or, just shoot an email to info at mininodes.com.

Arm Server Update, Summer 2019

It has been a while since our last Arm Server update, and as usual there have been a lot of changes, forward progress, and new developments in the ecosystem!

Enterprise Arm Server hardware has now mostly consolidated around the Cavium ThunderX2 and Ampere eMAG products, available from Gigabyte, Avantek, and Phoenics Electronics. Each can be purchased in 1U, 2U, and 4U configurations ready for the datacenter, and high performance developer workstations based on the same hardware are available as well. Both of these solutions can be customized with additional RAM, storage, and networking, to best fit the intended workload.

Another option that exists, but is difficult to obtain in the United States, is the Huawei 1620, also known as the Kunpeng 920. These servers are also enterprise grade servers ready to be installed in a datacenter environment, typically in a 2U chassis with configurable memory and storage options. However, availability outside of Asia is limited, and new regulations may make importing them difficult.

While the Cavium, Ampere, and (potentially) Huawei servers are available as bare-metal options shipped directly to you for installation in your own datacenter, Amazon has also made significant progress over the past few months and is rapidly becoming the most popular Arm Server provider. They use their own Arm Server CPU called the Graviton, found in their proprietary AWS EC2 A1 instances. This is quickly becoming the best way to deploy Arm Servers, as the entire system is in the Cloud and no hardware has to be purchased. The instances come in various sizes and price ranges, and experienced developers and organizations who are familiar with the AWS system can simply pay by the hour for temporary workloads. For users who are less familiar with the EC2 dashboard, less comfortable with the fluctuating billing model, or who prefer a fixed rate, we at miniNodes offer pre-configured Arm VPS servers in a range of sizes and prices, hosted atop the AWS platform.

Finally, the Edge of the network continues to be where a lot of innovation is occurring, and Arm Servers are a perfect fit for deployment as Edge Servers, due to their low power consumption, cost-effectiveness, and wide range of sizes and formats. The MACCHIATObin has been demonstrated running workloads in the base of windmills, the new SolidRun ClearFog ITX is promising to be a flexible solution, and the new Odroid N2 is an interesting device with “enough” performance to satisfy a wide range of workloads that don’t always need to rely on the Cloud, and can instead deliver services and data to end-users (or other devices) faster by being located closer to where compute is needed.

As always, check back regularly for updates and Arm Server news, or follow us on Twitter where we share Arm related news on a daily basis!

Arm Servers Need an Arm Desktop Unit of Computing

Finally.

We as an industry are finally having an open and honest discussion with ourselves about why Arm Servers have not been as successful as we had all hoped. Not excuses, just an analysis of the facts and the consequences they have had.

To set the stage, let’s rewind 4 years. Arm had very high expectations for the Arm Server ecosystem, and made public statements on a regular basis that they intended to capture 20% of the Server market by 2020. We are now 9 months away from that date, and Arm Servers are closer to a 1% novelty still being explored and tested for workload compatibility by users.

In early 2015, the number was 20%, but by January of 2016 the number actually grew to 25%! Here is the opening statement from an article on the Next Platform at the time:

“So 2016 is the year, or at least it is supposed to be. The year when 64-bit ARM chips finally make their way into servers and perhaps start getting wheeled through the loading bays of actual datacenters to start running real workloads alongside the Xeon processors that by and large dominate in the glass house.”

And here are the 2015 claims:

That sentiment and the high hopes continued through the next several years, with Arm Servers always ‘just around the corner’, ‘ready to make a breakthrough’, or ‘poised to take marketshare away from Intel’. AMD entered and left the ARM Server SoC business. Qualcomm entered and left the Arm Server SoC business. Applied Micro was acquired and their Server SoC was abandoned, but fortunately came back via an acquisition of their IP. Either way, time and knowledge were lost.

That brings us to the present day.

Last week, Arm further revealed the E1 and N1 Neoverse processors based on the Ares architecture, which were previously announced last fall at ArmTechCon 2018. Performance numbers and the processor design surely look great, and the product announcement generated plenty of media attention, as has been the norm. The dialogue has shifted, however: the media has once again seen a new product announcement, but has begun to wonder when these Arm Servers will actually be competitive and ship in any real volumes. This conversation then exploded a few days later, when Linus Torvalds weighed in on the topic, claiming that Arm has already lost, due to the fact that developers and IT teams (for the most part) build and develop on x86 PCs and workstations, then deploy to the organization’s own servers or the cloud providers. The natural tendency, then, is to purchase x86 servers, to match the architecture and eliminate any chance of hardware incompatibility. The x86 PC is, more or less, just a smaller and cheaper version of the exact same server hardware running in the datacenter. Software stacks might vary between local and cloud, but if you can at least minimize differences and just focus on any bugs that arise due to software variances, that is still better than pushing code up to servers and having to troubleshoot both software AND hardware variances. Thus, the ultimate takeaway in Linus’ view is that until there are cheap, standards-compliant Arm desktop PCs that developers can use locally, Arm Servers will not stand a chance in the datacenter.

And he is right.

Arm Server enthusiasts have been complaining about this for years: the product void between the ultra-small, ultra-cheap Arm single board computers (think Raspberry Pi and similar) and the big, enterprise Arm Servers is too vast. The missing piece in the ecosystem has been a “PC-like” system that is standards compliant with SBSA and SBBR, and essentially mimics the entire user experience of the x86 PC platforms. This means a normal UEFI boot up and BIOS / GUI that allows user configuration, and a hardware design that allows end users to add whatever hard drives and SSDs they’d like, choose their brand and size of memory, add in whatever graphics card they prefer, put it all together, and it “just works”. Then, they can install whatever (ARM64) OS they would like, such as Red Hat, CentOS, Fedora, Debian, Ubuntu, SUSE, etc. Again, it “just works”. The end result is a very “PC-like” experience, and a fully operational Arm desktop computer.

No custom kernels, no board-specific software, no vendor-provided code, no bugs, quirks, hacks, or odd shapes and sizes, no fighting with the system, no frustration, and no time spent troubleshooting why the system doesn’t boot or hardware doesn’t work.

Once this is solved, and developers can easily build and test applications and code on an Arm Desktop PC locally, Arm Servers become a more attractive target to deploy those apps to. Without an Arm Desktop PC, Arm Servers are just a novelty that individuals will continue to test, evaluate, and ultimately discard as not worth the switching costs, as has occurred for the past 5 years.

Fortunately, Linus’ timing was excellent, as he spurred this debate just in advance of Linaro Connect, where miniNodes, Packet.net, and Linaro are delivering a joint session titled “Designing a next generation ARM Developer Platform”.  In this talk, we will discuss this problem in detail, hopefully spurring ideas and investment, and starting the process of correcting the problem and getting a small “NUC-like” Arm product engineered and built. The event is streamed live and recorded, so tune in and watch if you can!

Hopefully, this is the tipping point, and industry change occurs as a result!

How to Install Ubuntu Arm Server on the Raspberry Pi Compute Module 3

A few weeks ago, the Ubuntu team released a pre-built 64-bit Ubuntu Arm Server image for the Raspberry Pi that can be downloaded and flashed to an SD Card, compatible with both the Raspberry Pi 3B and Raspberry Pi 3B+ single board computers. As we documented in our original article detailing the new Ubuntu build, in the past you needed to build a kernel, create a root filesystem, and then install the necessary firmware and drivers. Now, with this ready-made image, there is no longer a need for any of those difficult and time consuming tasks. While the image was intended to run on standard Raspberry Pi 3B and 3B+ hardware, with some small modifications it can be installed and run on the Raspberry Pi Compute Module 3 as well.

First and foremost, you will need to start with the new 64-bit Raspberry Pi 3 Ubuntu Arm Server image, which can be downloaded here: http://cdimage.ubuntu.com/releases/18.04/

Once downloaded, you will need to unzip / extract the image file from the compressed archive file.

Next, using a Raspberry Pi Compute Module IO Board or Waveshare Compute Module IO Board Plus, you will need to flash the image file to the Compute Module 3’s onboard eMMC. Instructions for that process can be found here: https://www.raspberrypi.org/documentation/hardware/computemodule/cm-emmc-flashing.md
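In outline, the flashing flow from a Linux host looks something like the sketch below. The extracted image filename and the /dev/sdZ device node are assumptions; confirm both against the official instructions and lsblk before writing:

```shell
# Hypothetical names -- confirm the extracted image filename and the
# eMMC's device node (shown by `lsblk` after rpiboot runs) first
IMG=ubuntu-18.04-preinstalled-server-arm64+raspi3.img
EMMC=/dev/sdZ

# With the IO Board's jumper set to USB boot mode, build and run
# rpiboot so the CM3's eMMC enumerates as a mass-storage device
if [ ! -d usbboot ]; then
  git clone --depth=1 https://github.com/raspberrypi/usbboot || echo "clone failed"
fi
(cd usbboot && make && sudo ./rpiboot) || echo "rpiboot step failed -- see the usbboot README"

# Write the image only if both the image and device actually exist
if [ -f "$IMG" ] && [ -b "$EMMC" ]; then
  sudo dd if="$IMG" of="$EMMC" bs=4M status=progress conv=fsync
fi
```

Building rpiboot needs the libusb development headers installed on the host.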

After the flash process is complete, there should be 2 partitions on the eMMC, ‘boot’ and ‘system’. Mount the ‘boot’ partition of the eMMC so that you can view and edit the files on it.

The first change to be made is to the ‘config.txt’ file. Open it up, change the kernel line, add an initramfs line, add an arm_control line, and comment out the device tree address, as follows:

kernel=vmlinuz
initramfs initrd.img followkernel
arm_control=0x200
#device_tree_address=0x02000000

Save and exit.

While the partition is still mounted, you need to add an additional file to the top level directory of the partition as well. In this ‘boot’ partition, you will notice there are .dtb files for the Raspberry Pi 3B. But since we are adapting this Ubuntu image for the Compute Module 3, we need to add the CM3’s .dtb file here as well. A copy of the Compute Module 3’s .dtb can be extracted from a stock Raspbian image, but for convenience a copy can be downloaded from the Raspberry Pi GitHub here: https://github.com/raspberrypi/firmware/blob/master/boot/bcm2710-rpi-cm3.dtb

Simply download it, then copy it to the mounted ‘boot’ partition.
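Assuming the ‘boot’ partition is mounted at /mnt/boot (a hypothetical mount point; adjust to wherever you mounted it), the download and copy amount to:

```shell
# Hypothetical mount point -- adjust to wherever 'boot' is mounted
BOOT_MNT=/mnt/boot
DTB=bcm2710-rpi-cm3.dtb

# Fetch the CM3 device tree blob from the Raspberry Pi firmware repo
curl -fL -o "$DTB" \
  "https://github.com/raspberrypi/firmware/raw/master/boot/${DTB}" \
  || echo "download failed -- fetch the file manually"

# Copy it alongside the Pi 3B .dtb files on the 'boot' partition
if [ -f "$DTB" ] && [ -d "$BOOT_MNT" ]; then
  sudo cp "$DTB" "$BOOT_MNT/"
fi
```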

At this point, all necessary changes are complete, and it’s time to boot up and check our work! Unmount the ‘boot’ partition, power down the Compute Module, and then change the IO Board to standard boot mode via its jumper setting. Reapply power, and the boot process should begin! The first boot takes a few minutes, as cloud-init runs a series of one-time setup processes to resize the rootFS, set up networking, generate SSH keys, create a container environment, and handle other tasks. But after a few minutes, you should be able to log in to your new 64-bit Ubuntu Arm Server for the Raspberry Pi Compute Module, with a default username and password of ‘ubuntu’, via SSH or a console!