ARM vs x86 References?

I’m a relative n00b when it comes to running linux on the ARM architecture. I’ve run (and supported) linux on x86 workstations and servers for ~20 years.

I know that running linux on ARM is probably around 80 percent (or more) the same as running on x86, but the last 10-20 percent is something I feel I need to become more knowledgeable about.

Does anyone know of any good documentation that speaks specifically to the differences between linux on x86 and linux on ARM? I’m thinking there are a lot of things that would be different, especially at the HAL level, for example…

George

Hi,

Look here

Welcome to the party. It’s essentially like trying to get linux to run on proprietary laptop hardware in the early 2000s. There are a few devices that work 100%, and the rest are kind of a work in progress. I’ve only recently broken into ARM myself and haven’t really found any good guides to the differences anywhere, but I’d say it’s essentially 80% the same, as you said. The differences aren’t really as pronounced as you might think, but you’re right, the HAL is a key one.

The thing that is most striking is the boot process and the concept of overlays.

https://source.android.com/devices/architecture/dto

Imagine, if you will, that the hardware on your device is not plug and play: you can’t just auto-detect things on the PCI or USB bus and go “oh, that’s device aaaa:bbbb, let’s go search for support.” While I’m no expert, essentially you need, as part of your boot process, access to the right overlay files (and any config files that tweak their settings for your specific boot loader) to complete the device tree. That completed tree is handed to the kernel once it loads, so it knows what devices are available and which need drivers.
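
If you want to see what that looks like on a running system, here’s a minimal Python sketch (my own illustration, not from any guide; it assumes a device-tree-based kernel, which exposes the live tree at /proc/device-tree). It walks the tree and prints each node’s compatible strings, which are the static equivalent of a PCI/USB vendor:device ID that the kernel matches drivers against:

```python
#!/usr/bin/env python3
"""Walk the live device tree the kernel was handed at boot.

On a DT-based ARM board, /proc/device-tree mirrors the flattened
device tree: nodes are directories, properties are files. A node's
'compatible' property holds the strings the kernel matches a driver
against -- the static stand-in for a bus probe.
"""
import os

DT_ROOT = "/proc/device-tree"

for dirpath, dirnames, filenames in os.walk(DT_ROOT):
    if "compatible" not in filenames:
        continue
    with open(os.path.join(dirpath, "compatible"), "rb") as f:
        # Property values are NUL-separated strings.
        parts = f.read().split(b"\x00")
    names = [p.decode() for p in parts if p]
    node = os.path.relpath(dirpath, DT_ROOT)
    print(f"{node}: {', '.join(names)}")
```

Run it on the board itself; on my Pi it spits out dozens of nodes that would simply not exist anywhere if the overlay files hadn’t completed the tree at boot.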

That explanation glosses over the boot loader itself, which is usually provided by the device manufacturer, though there are other ways to boot the device, such as projects that provide a UEFI-style firmware similar to x86. Every device is a little different, so I’ll describe the process on my Raspberry Pi 4b (which is honestly one of the best-supported devices, if not the best, along with the Pinebook Pro).

The way the Pi works is that the SoC (the GPU or the CPU, honestly, depending on board version) consults its EEPROM for the lowest-level configuration variables. On the Pi, this includes the boot order and other low-level things like USB tweaks, TFTP settings, etc. It then uses the boot order, much like an x86 UEFI filesystem search, to find its boot code on a FAT filesystem in (I believe) the first partition of each device listed in the boot-order EEPROM variable. This “EFI-like” partition is actually referred to as the firmware partition or boot partition, and it contains the device tree overlay files, the boot loader ELF entry-point files, the boot loader configuration, the linux kernel, and the linux initrd.

For the Pi, there are two main config files: config.txt, which configures the hardware and overlays, and cmdline.txt, which configures the arguments to the linux kernel. Indeed, the RPI boot loader is literally a “linux kernel starter” program, as the Raspberry Pi Foundation designed the device to boot linux. The boot loader takes the place of GRUB; you can think of it as “the BIOS and GRUB combined.” Other devices are definitely different, and there are various projects that are the “de facto standard” way to boot linux on them, but us Pi peeps are definitely lucky because the RPI Foundation decided linux was their thing, so it’s comparatively a snap to support.
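
To make that concrete, here’s roughly what those two files look like on a stock 64-bit Pi image (the values are illustrative, not copied from any particular release):

```
# config.txt -- read by the firmware/boot loader; hardware and overlays
arm_64bit=1
kernel=kernel8.img
dtoverlay=vc4-kms-v3d    # load a device tree overlay (KMS graphics)
dtparam=i2c_arm=on       # tweak a parameter in the base device tree
```

cmdline.txt is a single line of kernel arguments (no comments allowed), something like this, where the PARTUUID is a placeholder:

```
console=serial0,115200 console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 fsck.repair=yes rootwait
```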

Other devices (which I’ll admit I have no experience with, though I’m actively gaining it as I consider buying an Odroid N2+ and others) follow a similar trend, especially when it comes to overlays, but there are differences in the boot procedure.

One thing that seems to be quite common: instead of the x86-style “removable media installer that installs onto target persistent storage,” the trend is that installers generate images, or you download a pre-built image of your “installed OS” that configures itself on first boot. The theme among these is as follows (a rough sketch of such a first-boot hook is shown after the list):

  • Boot up
  • Add a non-root user with sudo capabilities and the right user groups
  • Resize the filesystem to its maximum size
  • Start the user environment
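
A hypothetical Python sketch of that first-boot hook (the tools, growpart from cloud-utils, resize2fs, useradd, are common choices, but distros differ; the device paths and the user name are placeholders, not anyone’s actual script):

```python
#!/usr/bin/env python3
"""Hypothetical first-boot hook, roughly what Pi-style images do:
grow the root filesystem to fill the card, then add a sudo-capable
user and start the user environment. Must run as root."""
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Grow partition 2 of the SD card to fill the device, then grow
#    the ext4 filesystem inside it to match.
run("growpart", "/dev/mmcblk0", "2")
run("resize2fs", "/dev/mmcblk0p2")

# 2. Add a non-root user in the sudo group ("alice" is a placeholder;
#    real images preseed the password or force a change on login).
run("useradd", "--create-home", "--groups", "sudo",
    "--shell", "/bin/bash", "alice")

# 3. Hand off to the normal user environment, e.g. re-enable the
#    display manager the image had disabled for first boot.
run("systemctl", "enable", "--now", "lightdm")
```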

It would be up to you to partition another device, such as eMMC or a USB SSD, transfer the resulting set of labeled (or UUID-identified) partitions to it (including resizing), and set up the boot loader to find and use them. This is how MJ on the RPI worked, and honestly every other linux I used on it. Devices like the Pinebook Pro and others with built-in eMMC module support may offer the ability to deploy onto a target device like that, but it’s hard for me to speak to it with no experience outside my Pi.
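
Concretely, “set up the boot loader to find and use them” mostly means making the identifiers in cmdline.txt and /etc/fstab match the new partitions. Illustrative only (the PARTUUIDs are placeholders): in cmdline.txt, point root= at the new partition,

```
... root=PARTUUID=deadbeef-02 rootfstype=ext4 rootwait
```

and in /etc/fstab, the same idea (LABEL=... works too):

```
PARTUUID=deadbeef-01  /boot  vfat  defaults          0  2
PARTUUID=deadbeef-02  /      ext4  defaults,noatime  0  1
```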

On the Pi, one of the biggest lessons is to get a good SD card, as it can make or break your environment. There are numerous benchmarks available (and I’ve done a lot myself) and the key takeaway is this: the Samsung EVO Plus and Select are incredible, and the SanDisk Extreme cards are pretty good too. Just don’t buy into that A2/A1 rating nonsense; it requires a different device/driver setup that is pointless at this point, and some of the A2 cards are actually slower than their A1 counterparts in our card readers. Also, don’t assume that the USB SSD you’re thinking of getting will just work; devices can be far more picky than you’d think in ARM land.
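
If you want to sanity-check a card yourself, here’s a crude Python sketch of a sequential throughput test (my own throwaway, not one of the published benchmarks; it ignores random I/O, which is what actually hurts on a desktop, so use fio for that, and it needs root to drop the page cache):

```python
#!/usr/bin/env python3
"""Crude sequential write/read test for the card mounted at PATH.
Enough to spot a dud card, not a substitute for fio's random-I/O
numbers. Run as root so the page cache can be dropped."""
import os, time

PATH = "/home/pi/testfile"   # somewhere on the card under test
SIZE_MB = 256
CHUNK = 1024 * 1024
buf = os.urandom(CHUNK)

t0 = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # make sure it actually hit the card
print(f"write: {SIZE_MB / (time.monotonic() - t0):.1f} MB/s")

# Drop the page cache so the read comes from the card, not RAM.
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")

t0 = time.monotonic()
with open(PATH, "rb") as f:
    while f.read(CHUNK):
        pass
print(f"read:  {SIZE_MB / (time.monotonic() - t0):.1f} MB/s")
os.remove(PATH)
```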

As for software: while the ARM platform is decidedly “more closed” than x86 is nowadays, it will also shine a bright light on which software and services are closed, such as video providers that want DRM, or closed-source programs you simply cannot run because there is no ARM binary (at least not an ARM linux one!). The quintessential examples in my case are Widevine, Discord, and Pandora.

Perhaps the most important thing to realize is that while the Apple M1 chip is x86-fast, none of the chips you will have linux support on today are. Just prepare yourself. Linux on ARM has historically been for embedded applications, not desktop applications. Due to the closed nature of ARM device manufacturers, with some clear exceptions, you are years behind what “just works” with a vendor-supplied Android load. This translates into the following truth at the beginning of 2021: your ARM linux experience will be decidedly slower in all respects than anything x86. It’s currently a labor of early-adopter love. ARM linux has been around for a long time, but the new RPI 4b, the Pinebook Pro, and the Apple M1 have, in my opinion, injected some steroids right into the whole linux-on-ARM concept. Apple showed us the way, the RPI 400 “ARM computer in a keyboard” exists, the Pinebook Pro exists, and there are a bunch of existing Chromebook and Windows ARM notebooks. It’s clear that this is the way forward, and the market will never be the same thanks to Apple’s actions.

2020 and previous: So you wanna do some embedded apps?
2021 and onward: Gimme supported desktop now!

I literally have numbered SD cards all over my desk from all the different OS loads I’ve been trying :wink: The RPI isn’t going to be my only ARM device. I’m here for the PANICs :wink:

Wayne

Interesting article. While it’s mostly accurate (I am familiar with RISC architectures), it does seem to be a bit opinionated in a few places. :slight_smile:

Thanks…it’s definitely a good, quick refresher on RISC vs CISC.

I was trying to get more at the differences in how linux is implemented on ARM, as opposed to on x86… It looks like @Razathorn has a more extensive answer on that point.

George

First things first: thanks for this explanation. It’s really quite thorough.

Yes, that is what I was getting at regarding the HAL. I could tell from my experience getting the Wi-Fi & BT drivers to load correctly on this Pi 400 that it was definitely a different style of system. Ugh - not really a fan of the DTB/DTO-style system, but it makes sense from an embedded-implementation standpoint… Using something like modprobe to cycle through a long list of drivers to load what’s needed would be inefficient when the drivers needed are actually pre-determined by the device.
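
If you’re curious how that surfaces in userspace: the device-tree-declared devices still end up with modalias strings, just prefixed of: instead of usb:/pci:, and those are what udev hands to modprobe. A quick Python look (my own sketch; node names vary by board, and this only lists the platform bus):

```python
#!/usr/bin/env python3
"""List device-tree-declared platform devices and their modalias
strings -- the "of:..." aliases udev feeds to modprobe, analogous to
the "usb:..."/"pci:..." aliases produced by probeable buses."""
import glob

for path in sorted(glob.glob("/sys/bus/platform/devices/*/modalias")):
    with open(path) as f:
        alias = f.read().strip()
    if alias.startswith("of:"):
        dev = path.split("/")[-2]   # the device node's directory name
        print(f"{dev}: {alias}")
```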

And even UEFI is very different from the older x86 BIOS standard that just loaded the first sector from a boot device and handed off execution.
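
For contrast, that legacy handoff is simple enough to poke at directly: the BIOS reads 512 bytes, checks the 0x55AA signature, and jumps in. A little Python sketch of the layout (the device path is just an example; needs root):

```python
#!/usr/bin/env python3
"""Peek at a legacy MBR boot sector: 446 bytes of loader code, a
64-byte partition table, and the 0x55AA signature the BIOS checks
before handing off execution. Needs root to read the raw device."""
import sys

DEV = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"  # example path

with open(DEV, "rb") as f:
    sector = f.read(512)

print(f"boot code starts: {sector[:16].hex()}...")
print(f"signature:        {sector[510:512].hex()} (expect 55aa)")
```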

In the data center there has been a growing trend toward using pre-built images for quite some time. It’s been seen as easier to have some base image that gets copied onto boot media, with application-specific configurations then applied as an automated process… And, of course, we also have whole images with pre-configured applications (i.e., Docker images). So this isn’t a surprising trend. :grin:

Yeah, this is also how the Pi CM modules work… Flash an image to the eMMC on the card, and on first boot it configures itself.

My approach to this has been a bit of an evolving process. My primary desktop system went down months ago. I can fix it, but I realized it was out of cycle with current hardware – especially with the new generation of AMD CPUs and Nvidia / AMD GPUs coming out. So I decided to sit back and wait to see what was coming; meanwhile, I bought a laptop to get me through my bigger needs.

But my thinking on things has been evolving quite a bit over the past 6 or so months… I’ve been looking at the power consumption of newer systems: 800 watts to drive a system with the new Nvidia GPUs? Sheesh, that’s a big waste of power to run every day as a work system.

So, given that I tend to follow a more unix-like work environment (use one tool for each individual task, then integrate all the tools into a toolchain), I have more flexibility in my environment. A low-powered front-end system is fine for most of the interactive work, while the heavier parts can be handed off to another system (I’m considering a compute farm), and I can keep a higher-specced system for things that really require it, like video editing.

Anyway, there’s tons more I could explain about my thoughts on all of this, but the point is: speed isn’t a panacea. It’s needed for some things, but it comes at a big price… Taking an approach that mixes and scales devices in a way that fits my needs is where I’m going with this. The Pi 400 is just the first step in exploring these ideas; I could end up someplace completely different in the end.

George

This is exactly where my thoughts go on this; it seems we are of similar mind and background. The world is rapidly changing, and “moar power!” is fading in the light of global concerns. That is what gives ARM its momentum, and people’s attitudes and concerns open the door. I don’t think it will stop: a whole generation is growing up with ARM in their pockets, on their wrists, and now on the desktop… x86’s days are numbered.

I don’t completely agree with this sentiment… What is happening (and has been for quite some time) is that the role of computers as a whole is being collectively rethought. The issue is that there are quite a few different roles to fulfill, and the requirements for those roles vary in ways that are difficult to predict.

Some people are fine with just a phone and/or a tablet for communication and media consumption. Others need something with more horsepower for larger projects (like creators needing to ingest and edit 4K or 8K video) and want a high-end laptop or desktop system. Then there is the business side of things, where a densely populated data center poses a whole different range of challenges: CPU performance, heating / cooling, storage capacity, backups, redundancy (on several levels), and power consumption are all measured far more closely than in the average home environment.

RISC processors are finding their way back into things because of a number of properties the architecture presents (most of which, I believe, involve a level of predictability that is easier to work with). But I don’t think ARM / RISC-based systems are going to replace x86 completely – instead they will make inroads into specific areas (like the data center) while being a complement to x86 in other places.

I’ve been looking at a lot of “micro” computers lately – basically laptops without the keyboard, display, and battery. They tend to have more I/O and more expandable storage than laptops – while still keeping power consumption down to 10-25 watts, all in a 1L or smaller case. They are finding their way into all sorts of roles now, from HTPCs to edge servers (à la Chick-fil-A using Intel NUCs in their shops).

Basically, we’re in a time when the roles and forms of computer implementations are changing. However, that doesn’t mean we’re going to move off of the dominant architecture; it just means we’re going to see more diversity. In fact, I wouldn’t be surprised to see some new architecture evolve in the next 5-10 years as part of this process – I think we are overdue.

George