GPT for RPi images?

The current stable version of the RPi bootloader supports GPT. I have tested and it seems to work fine. Larger SDs and SSDs would benefit from using GPT… is it too radical of a change to consider switching?
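For anyone wanting to try the same thing, a rough sketch (Raspberry Pi OS tooling; the device name is an example, and converting a disk label in place is risky, so back up first):

# Check which EEPROM bootloader is installed (GPT boot needs a recent one):
vcgencmd bootloader_version
sudo rpi-eeprom-update        # reports whether a newer EEPROM release exists
# Convert an existing MBR label to GPT in place with sgdisk (gdisk package).
# Existing partitions are kept, but anything stored in the MBR gap is not:
sudo sgdisk -g /dev/sda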

I just converted my SSD to use GPT. I was using LVM2 to manage space. I only have 4 partitions at the moment, but I will have more thanks to GPT support.

$ fdisk -l /dev/sda

Disk /dev/sda: 465.66 GiB, 500003004416 bytes, 976568368 sectors
Disk model:                 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E8AD7AFC-9586-4D63-B9F7-1FBA0B0FEE12

Device        Start       End  Sectors  Size Type
/dev/sda1      2048    534527   532480  260M Microsoft basic data
/dev/sda2    534528   8931327  8396800    4G Linux swap
/dev/sda3   8931328  70371327 61440000 29.3G Linux filesystem
/dev/sda4  70371328 131811327 61440000 29.3G Linux filesystem
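As a sanity check, the sizes in the listing can be re-derived from the sector counts (plain shell arithmetic, nothing Pi-specific):

# Total bytes = sectors x logical sector size:
echo $((976568368 * 512))      # 500003004416, matching the fdisk header
# Size of /dev/sda3 (61440000 sectors) in GiB:
awk 'BEGIN { printf "%.1f GiB\n", 61440000 * 512 / 2^30 }'    # 29.3 GiB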

I believe this is quite experimental right now. Personally, I had some issues multi-booting with GPT on the Raspberry Pi 4, but I can't say anything about a single-boot environment. For now I've gone back to MBR just because of that.

I remember trying to set up a GPT partition table to work with PINN and Manjaro; unfortunately, it didn't seem to be possible. Since the RPi 4B can indeed boot from a GPT partition scheme, PINN itself booted correctly, but when I tried to switch to Manjaro through PINN, it failed to boot. I'm still unsure whether the problem was with Manjaro itself or whether PINN wasn't suited to doing that.

I believe I tried that with Manjaro 20.06 or 20.08. Since then there have been a lot of changes in Manjaro.

And I wanted to use GPT in the hope of using my HDD both on an x86 UEFI machine and on the Raspberry Pi 4.

I'm sure you can get more of them with MBR by changing one of them to an extended partition holding multiple logical partitions, but personally I've had a bad experience with those, as they have limits that GPT does not. I also liked that with GPT you can give the partitions their own names (MBR supports only filesystem labels, which don't work for "cleared"/unknown partition types).
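To illustrate the GPT naming, here is a minimal sketch using sfdisk from util-linux on a file-backed image, so no real disk or root is needed (the file and partition names are made up):

# Create a small image file and give it a GPT with one named partition:
truncate -s 64M demo.img
sfdisk demo.img <<'EOF'
label: gpt
name=rootfs, size=32M, type=L
EOF
# The name is stored in the GPT entry itself, independent of any filesystem label:
sfdisk --dump demo.img | grep name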

I am sure the move to GPT will happen. Once the UEFI firmware is complete, grub will be a good option.

And GPT support makes ZFS a possibility too.

You're probably right about GPT and UEFI in the long term, because the RPi team is a bit conservative.
My question is: why would you use ZFS instead of other modern filesystems (XFS/Btrfs)?

When other distros consider it to be mature enough and start making the move with their images, then we will probably do so also.

I'm sorry if you felt you were being targeted, but I was talking about the guys from the Raspberry Pi team.

No problem, no offense taken; just making a statement, as we want to make our images as stable as possible and do not venture out until things are stable.

Also, we use the same mechanism to make all our images, not just the Raspberry Pi ones. And not all our supported devices support UEFI/GPT…


I am a Linux guy first and a RPi4 fan second. So I am interested in using RPi4 in my Linux projects and experiments. I have used ZFS in a production environment for many years and it has proven itself to be superior to any Linux native filesystem. However, I do not yet know if it can be a useful filesystem on an RPi4. But I look forward to trying it.

Edit: I am currently using F2FS and I have had zero issues. I will likely stick with it for day to day use.

I guess you tried it on Solaris systems, didn't you?
I agree it's really performant, but it's not for noobs and far from this distro's paradigm, I guess. You could use it anyway, though, because this distro is based on Arch Linux and you can set it up to suit all your needs.
BTW, from what I understand, ZFS is not as well integrated/implemented as on Solaris or BSD, and Linus was not a real fan of it, at least until it clearly goes GPL (as of the beginning of the year): Real World Technologies - Forums - Thread: Nuances related to Spinlock implementation and the Linux Scheduler
BUT, I’ve just seen this ( 6 days old) : Release OpenZFS 2.0.0 · openzfs/zfs · GitHub
This may be the beginning of the change you’ve been waiting for.

Yes, my experience began with ZFS on Solaris and then with the ZOL project.

I do not know if Linus is ignorant of ZFS or if he is speaking in the context of his fear of Oracle's attorneys. Ubuntu is all-in for ZFS on the x86_64 platform. I suspect the Google v. Oracle case over Android's Java APIs, which is currently before the US Supreme Court, may have much to say about the future of ZFS on Linux.

Don't mess with Oracle's attorneys (I'm a former Sun and, later, Oracle employee), and remember the SCO trial against Linux…
Many production servers run Linux with ext4, XFS, or Btrfs. They are not so irrelevant as filesystems… ZFS is not the messiah: A Quick Look At EXT4 vs. ZFS Performance On Ubuntu 19.10 With An NVMe SSD - Phoronix (maybe a little old, but I don't have another benchmark)
and this page : ZFS on Linux with all flash? |

ZFS is a heavy load on a system compared to other filesystems, and if running lean or speed is your main concern, then ZFS is not the best choice. But zvols, snapshots, send/receive, export/import, and its resilience make it more than a basic filesystem. Btrfs comes closest, but currently I will not place it in production or personal use.
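For readers who haven't used them, the features mentioned above look roughly like this (a sketch only; pool and dataset names such as tank and backup are made up, and the commands require root plus the ZFS modules):

zfs snapshot tank/home@pre-upgrade        # cheap point-in-time snapshot
zfs rollback tank/home@pre-upgrade        # instant undo back to the snapshot
# Replication: serialize a snapshot and recreate it in another pool
# (or pipe it over ssh to another machine):
zfs send tank/home@pre-upgrade | zfs receive backup/home
# A zvol: a block device carved out of the pool, usable for swap, VMs, etc.
zfs create -V 8G tank/swapvol
zpool export tank                         # detach the pool so another host can import it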

The main drawback, in my opinion, is that it is not well integrated on Linux:

(…) compiling a kernel module every time the kernel is updated (as with ZFS on Linux), or using annoying Solaris compatibility layers, or to run with an external and un-official and often out-of-sync repository (Arch Linux developers, yes I’m talking to you! We need ZFS in the Arch Linux official repositories ASAP! Even Debian has ZFS in its repositories!).

Take a look at the packages on arch : AUR (en) - zfs-linux
and it’s PKGBUILD : PKGBUILD - aur.git - AUR Package Repositories

This PKGBUILD was generated by the archzfs build scripts located at
The archzfs packages are kernel modules, so these PKGBUILDS will only work with the kernel package they target. In this case, the archzfs-linux packages will only work with the default linux package! To have a single PKGBUILD target many kernels would make for a cluttered PKGBUILD!
If you have a custom kernel, you will need to change things in the PKGBUILDS. If you would like to have AUR or archzfs repo packages for your favorite kernel package built using the archzfs build tools, submit a request in the Issue tracker on the archzfs github page.

The end result would be a real labyrinth for many people.

Yes, it can be painful. I use dkms and live with the rebuilds.

Have you looked at what Ubuntu is doing with ZFS… zsys? It is quite well integrated and I am sure it will continue to improve, if Oracle does not pull the rug. There are many packages in the AUR for better ZFS integration.

I can see the kind of goal you'd like to achieve, but I'm afraid you're chasing chimeras. It's really too homemade and hypothetical with our kind of hardware to go "into production", especially with the Sword of Damocles of a possible Oracle lawsuit that could stop everything altogether.

In my experience, with so many hypotheses to verify, a new challenger often wins the bet: not necessarily the best one technically, but the one that answers the most points and is easiest to integrate.

Have you looked at bcachefs? I haven't followed the plans to integrate new filesystems into Linux in the medium term; maybe it's a mirage too.

Some new filesystem may come along, just as f2fs now handles my RPi4 needs… however, it can’t do lustre.

f2fs could be a good option for our architecture. AFAIK, it's not available yet, is it?

Yes, it is available and compression is supported in the Manjaro RPi kernels thanks to @Darksky. I use it for my root filesystem and I use it with compression for /home (separate volume).

Edit: f2fs also supports compression on a directory basis, although I have not tested this.

Edit 2: My interest in compression is not saving space; it is reducing disk I/O, trying to cut down wear on my SD card and SSD.
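For reference, the per-directory mechanism looks roughly like this (a sketch based on the f2fs kernel documentation; the device name and paths are examples, and as noted above I have not tested the directory flag myself):

# The filesystem must be created with the compression feature enabled:
sudo mkfs.f2fs -O extra_attr,compression /dev/sda4
# Pick a compression algorithm at mount time:
sudo mount -o compress_algorithm=lz4 /dev/sda4 /home
# Compression only applies to files flagged for it; flagging a directory
# makes new files created inside it inherit the flag:
sudo chattr +c /home/user/cache
lsattr -d /home/user/cache    # the 'c' flag indicates compression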