How to install Manjaro on AMD raid (or mdadm)

Hello everybody,

I just setup a new computer in order to do a fresh installation of Manjaro…
The processor is an AMD TR 1950X and the motherboard is an ASRock X399 Taichi.

This motherboard allows setting up a RAID array. I successfully managed to set up the array (I have 3 SSDs of 500 GB each) and it appears in the BIOS as a 1.5 TB RAID0.

However, once I'm in the live installation environment and I want to install Manjaro, it displays each drive individually… So I guess the RAID is not working, right?

What do I need to do in order to make it recognized as a single drive? ASRock only provides a driver to use during the installation process on Windows…

I read a lot of posts where people simply say to give up, or at least to go with mdadm.
But I find this option somewhat sad; how the heck is it not possible to do that?
That said, I'm okay with going the mdadm route, but I'm lost there too…

Can anyone explain to me what to do?

The only resources I find explain how to set up a RAID device, but not from a live USB and not for the installation of Linux itself.

If it is a fakeraid (normally implemented with an onboard chip), then you need to initialize the raid. You will need to use dmraid instead of mdadm.

You will need to wipe the signatures of the SSDs beforehand with wipefs. Then you are able to use them as single SSDs. (Make backups!)
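A minimal sketch of that wipe step. The demo below runs on a scratch image file so nothing real is destroyed; on the actual SSDs you would pass the block devices themselves (device names like /dev/sda are placeholders here, check yours with lsblk first):

```shell
# Demo on a scratch image file. On the real SSDs you would run e.g.:
#   sudo wipefs --all /dev/sda /dev/sdb /dev/sdc
# which erases every filesystem/RAID signature -- make backups first!
truncate -s 512M /tmp/ssd-demo.img        # sparse file standing in for one SSD
mkfs.ext4 -q -F /tmp/ssd-demo.img         # give it a filesystem signature
wipefs --all /tmp/ssd-demo.img            # wipe all signatures it finds
blkid -o value -s TYPE /tmp/ssd-demo.img  # prints nothing: signature is gone
```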

Here is a little guide on how to do it on Arch Linux and Arch-based distros: "fake_RAID"

But anyway: even for fakeraid, you will need to use software raid (mdadm) first.


Thanks for the answer!

The thing I still don't get is: where do I need to "set up" the RAID?

In my case I still have nothing on the SSDs. They're blank. I don't have any OS installed yet.

AMD and ASRock (the motherboard vendor) explain how to do it as follows (on Windows): once the RAID array is set up in the BIOS (which I did), you put the driver (only available for Windows) on a USB stick, and during the Windows installation you load that driver; the installation utility will then see the RAID array…

So in this case it's done during the installation… If I compare, that moment is like when I'm still on the live USB of Manjaro… So should I run those commands there?

Or do I need to install Manjaro on one of them, and then it will "build" the array by "itself"?

Also, one last thing: why is mdadm still needed if there's already the onboard RAID? (Asking out of curiosity… At the end of the day I'll simply do what's necessary.)

Sorry again if these questions are dumb…

And thank you very much.


in short:

  1. Create your Raid in your UEFI/BIOS
  2. Boot up the Manjaro live session and run:
sudo dmraid -l

This will list the supported formats and show whether your mainboard chipset is supported. If nothing is listed, then there is no native support.

This tutorial is in German, but very detailed. Maybe Google Translate could help you there. It is written for Debian, but the tool has not changed since then, so it should work:


Hello, thank you again…
The command sudo dmraid -l displays the following:

asr     : Adaptec HostRAID ASR (0,1,10)
ddf1    : SNIA DDF1 (0,1,4,5,linear)
hpt37x  : Highpoint HPT37X (S,0,1,10,01)
hpt45x  : Highpoint HPT45X (S,0,1,10)
isw     : Intel Software RAID (0,1,5,01)
jmicron : JMicron ATARAID (S,0,1)
lsi     : LSI Logic MegaRAID (0,1,10)
nvidia  : NVidia RAID (S,0,1,10,5)
pdc     : Promise FastTrack (S,0,1,10)
sil     : Silicon Image(tm) Medley(tm) (0,1,10)
via     : VIA Software RAID (S,0,1,10)
dos     : DOS partitions on SW RAIDs

But what's weird is that this is the exact same output as on my current PC (also running Manjaro, but with an Intel E3 Xeon).
So I guess it's some kind of default list displayed by Manjaro itself?

But dmraid -ay displays no block devices found on my new PC that has the RAID enabled (while the old one, which has no RAID, displays no raid disks).

So I guess the RAID is somehow recognized, which might be a good starting point, right?

But then I'm stuck, because I don't know what to do… When I searched for that error on Google, I found different suggestions… Some people even say to disable RAID in the BIOS and go back to AHCI?

I also found the following comment on a forum…

If you’re running linux, you probably would be better off with one of the variety of software raid implementations or ZFS.
“Fake raid” is only really of use on shitty platforms like windows that don’t have decent software raid implementations inside of the OS.

I might go this way and just do a software RAID (I want RAID0)…
But in his first sentence, why does he say "or ZFS"?

So I did some research, and TIL that ZFS is a really powerful FS that supports RAID by itself…
So I might go with simply installing Manjaro with root on ZFS.
It seems the way to go is to use Manjaro Architect…

So I tried running the manjaro-architect TUI that I have on my GNOME Live ISO…
But when I go to 7 - ZFS (optional) and try Automatically configure, I get a message about zpool creation: Operation cancelled
So then I try the Manual configuration, using the first option Create a new zpool…
Nothing happens… The cursor blinks and comes back…
And it's the same for the other options…

But I guess I'm on the right path using Manjaro Architect…
I'll try to find out why it's not working…

@sanjibukai Honestly speaking, yes… fakeraid generally has only Windows support. On Linux there is no official support. So the better way to go is software raid, zfs raid, or btrfs raid.

I myself have never used zfs, but I have really good experiences with btrfs raid0/1. You can add and remove devices on the fly, you don't need an extra module, and it just works. The downside is maybe that it costs a lot of CPU power when adding new devices that are not empty.

I used this to create a btrfs raid:

In general you can create a raid0 with:

mkfs.btrfs -m raid0 -d raid0 /dev/sda1 /dev/sdb1 /dev/sdc1

Make sure you add btrfs to HOOKS= after udev, so that Linux detects the raid on startup.
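For reference, the relevant line in /etc/mkinitcpio.conf would then look roughly like this (the hooks around btrfs are the usual defaults and may differ on your install):

```shell
# /etc/mkinitcpio.conf -- note btrfs placed right after udev
HOOKS="base udev btrfs autodetect modconf block keyboard keymap filesystems fsck"
```

Regenerate the initramfs afterwards with sudo mkinitcpio -P.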

So on Manjaro installation you will need to:

  1. Create partitions on every SSD.
  2. Create the btrfs filesystem with the raid0 option.
  3. Mount it by UUID, like sudo mount -t btrfs -U <UUID> /mnt/, because all 3 devices will share one UUID.
  4. Create subvolumes as you need them, like btrfs subvolume create /mnt/@ for root and btrfs subvolume create /mnt/@home for home.
  5. Then umount it and mount the subvolumes: sudo mount -t btrfs -o subvol=@ -U <UUID> /mnt and sudo mount -t btrfs -o subvol=@home -U <UUID> /mnt/home

Then you should be ready to go for the installation.
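Put together, those five steps can be sketched as follows (the partition names are placeholders for your three SSD partitions, and <UUID> stays whatever mkfs.btrfs reports for the new filesystem):

```shell
# Sketch of steps 1-5; adjust /dev/sdX* to your real partitions.
sudo mkfs.btrfs -f -m raid0 -d raid0 /dev/sda2 /dev/sdb1 /dev/sdc1
sudo mount -t btrfs -U <UUID> /mnt            # all three members share this UUID
sudo btrfs subvolume create /mnt/@            # subvolume for root
sudo btrfs subvolume create /mnt/@home        # subvolume for home
sudo umount /mnt
sudo mount -t btrfs -o subvol=@ -U <UUID> /mnt
sudo mkdir -p /mnt/home
sudo mount -t btrfs -o subvol=@home -U <UUID> /mnt/home
```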


Thank you very much…
In the meantime I tried to install it using zfs (some manual setup, manjaro-architect)… with no success…
Then I tried with mdadm… Same story, I had a lot of trouble… and when I managed to install it, I got a grub error…
Finally I managed to install it the way I wanted, using mdadm with the very same setup explained in this video: Manjaro Architect UEFI+Bootable RAID Tutorial - YouTube
It worked!
However, I don't know why, but my computer freezes at startup… Once the mouse cursor appears, the display freezes without showing me the login UI… I discovered that pressing the power button seems to put it into standby, since the LED blinks; and if I press it again it comes back to the login screen, this time displaying the whole thing, but with the mouse still frozen.
But if I unplug and replug my mouse, then everything works…
This is weird…
I tried with both the nvidia driver (since I have a 1070 Ti) and the free driver…

I’ll restart over trying with btrfs!
But just to be sure…

For simplicity's sake, if I only want one root partition, do I still need to partition each drive with two partitions? One of a few hundred MB for boot, and the rest for the Linux root?

Also, since that's apparently possible: can I install Manjaro on only one drive…
and then, after the installation (without even bothering with the live USB again), add the other two disks to the raid?

Thank you very much for your support BTW!

Yes. If you have a BIOS system and use the MS-DOS partition table, then only one partition is needed. If it is EFI, then a single EFI partition is enough, since it is a raid0 and not a raid1.

So the best way is to create it like this, for example with sudo cfdisk or even GParted:

/dev/sda1 → efi
/dev/sda2 → btrfs partition
/dev/sdb1 → btrfs partition
/dev/sdc1 → btrfs partition

And install Manjaro on subvolumes within this partition.

Sure, this is also possible. Just install it on one drive with btrfs and expand the btrfs partition to the other SSDs. It will then be more of a JBOD, though. At least I would recommend mirroring the metadata (which contains the checksums) and striping only the data, which as far as I know can only be done with the mkfs.btrfs command. But you can convert a raid0 to raid1 really easily on the fly.


Well, I tried many things and I was still unable to do it.
In the meantime I found this very good resource: Manjaro Linux with btrfs-luks full disk encryption including /boot and auto-snapshots with Timeshift (in-progress) | Willi Mutschler
In that one he only does luks encryption and btrfs (I mean, without btrfs RAID)…
But he has another tutorial with luks and btrfs RAID1 (for Pop OS though, not Manjaro, and not RAID0)… I did manage to complete that tutorial using Pop OS, and everything worked…
I tried to reproduce the same steps on Manjaro (adapting them to RAID0 too), but the problem is that I'm not able to choose my partitions correctly.

I am able to successfully create the btrfs volume and subvolumes, since I get:

>~ sudo mkfs.btrfs -f -m raid0 -d raid0 /dev/mapper/data0-lv0 /dev/mapper/data1-lv1 /dev/mapper/data2-lv2
Label:              (null)
UUID:               bb11a60c-cb70-4abcd-b591-1f8695321234
Node size:          16384
Sector size:        4096
Filesystem size:    1.36TiB
Block group profiles:
  Data:             RAID0             3.00GiB
  Metadata:         RAID0          1023.94MiB
  System:           RAID0            15.94MiB
SSD detected:       yes
Incompat features:  extref, skinny-metadata
Runtime features:   
Checksum:           crc32c
Number of devices:  3
   ID        SIZE  PATH
    1   465.26GiB  /dev/mapper/data0-lv0
    2   465.26GiB  /dev/mapper/data1-lv1
    3   465.26GiB  /dev/mapper/data2-lv2


>~ sudo btrfs subvolume list /mnt                                                                                                                                                                     
ID 258 gen 15 top level 5 path @
ID 259 gen 17 top level 5 path @home

The problem where I'm stuck is that with the GUI installer I see each of the LVs as a btrfs volume, but I see three of them (not one big one, as was the case with mdadm).
And with the manjaro-architect TUI it's even worse, because in the Mount Partitions step I don't even see the logical volumes…

Whereas the Pop OS GUI installer allows choosing arbitrary partitions from arbitrary disks (and decrypting them on the fly - that was very neat!)

I concede that bringing encryption into the mix doesn't make it easier… But I also tried without it before, following various tutorials…

Meh… I’m still trying to figure out how everything should work…

To be precise, I don't see the subvolumes appearing…

I’ll try again without the LUKS stuff…

I also missed that part…
Since I was able to install Manjaro with btrfs (and even luks) following that tutorial above…
Can I build the RAID0 btrfs "array" afterwards?

Since it stripes the filesystem, it shouldn't be a problem… but well, NOT TESTED with luks.

Happy testing :wink:

It needs some time to figure out, but it is worth the time. You get a neat system… :slight_smile:

And if you start over again… don’t forget to wipefs the signatures… :stuck_out_tongue:


Well… I left LUKS for another time and managed to install on btrfs with RAID0…
The thing is that I hadn't noticed the Don't format option when mounting the partition from manjaro-architect :man_facepalming: and I was choosing btrfs again, which reset the manual raid0 command I had run before… I feel so dumb…

However, I'm disappointed, because I ran the following tests using kdiskmark (run a couple of times each for sequential R/W, so not scientific measurements):

  • With Manjaro installed on 3 SSDs (RAID0) using mdadm (no encryption): Read ~5900 MB/s | Write ~4500 MB/s
  • With PopOS installed on only one SSD with regular ext4: Read ~2000 MB/s | Write ~1500 MB/s
  • With PopOS installed on only one SSD with an encrypted volume: Read ~1800 MB/s | Write ~1300 MB/s
  • With Manjaro installed on 3 SSDs (RAID0) using btrfs (no encryption): Read ~900 MB/s | Write ~300 MB/s


I wasn’t expecting that at all :confused:

Something must be wrong, but I don't know what… I'm using kernel 5.10, if that matters…

Hi! I just wanted to post an update.
I just discovered that LVM has an option to do data striping.
And the results were astonishing (for me, considering how flexible LVM is).
I got around Read ~4800 MB/s | Write ~3000 MB/s…
Which is way more than what I got with btrfs RAID0…
Though, I still think something was wrong there, because it was too bad…

Anyway, I guess I'll stick with LVM. At least I will be able to change/manage/resize/expand whatever I want…
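For reference, the striped LV was created with commands along these lines (the partition paths and the vg0/root names are just placeholders for my setup; -i 3 stripes across all three PVs, -I 64 sets a 64 KiB stripe size):

```shell
sudo pvcreate /dev/sda2 /dev/sdb1 /dev/sdc1
sudo vgcreate vg0 /dev/sda2 /dev/sdb1 /dev/sdc1
sudo lvcreate -l 100%FREE -i 3 -I 64 -n root vg0
sudo mkfs.ext4 /dev/vg0/root
```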

Thanks again for the direction. I learned a lot…

Just a question… why are you using btrfs on top of LVM?

Also, just good to know… on every write and read, btrfs verifies the checksum… That is done for data integrity. I guess that's where the bottleneck is… If you use an Intel CPU, you could get great improvements by adding crc32c-intel to MODULES= in /etc/mkinitcpio.conf
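As a config fragment, that would be (only worthwhile on Intel CPUs, per the advice above):

```shell
# /etc/mkinitcpio.conf -- hardware-accelerated crc32c, Intel CPUs only
MODULES=(crc32c-intel)
```

followed by rebuilding the initramfs with sudo mkinitcpio -P.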

In that attempt I tried with LUKS encryption.

Well, this might explain why I had such bad performance (even worse than a single disk).
With LVM striping I got 80% of the mdadm RAID0 performance, so in this situation I'm not going to use btrfs anyway. Unless this can be disabled? In any case, I'm sure the performance I got is not normal; otherwise btrfs would not be so popular, right?

No, it's not an Intel. I have an AMD Threadripper 1950X (first generation).
I don't know if there is an equivalent…

In case you haven't found them already, here are the mount options for btrfs:

Maybe nodatasum and nodatacow could speed up the rate… also space_cache
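An example /etc/fstab line with those options might look like this (<UUID> is your filesystem's UUID; note that nodatacow also implies nodatasum, i.e. it disables checksumming for newly written files):

```shell
# /etc/fstab -- btrfs root subvolume without CoW/checksumming overhead
UUID=<UUID>  /  btrfs  subvol=@,nodatacow,space_cache  0  0
```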

Well, btrfs is mainly developed for servers… and encryption is not a big issue there, so it is not well tested and optimized for that use case.

Ok, if you are looking for speed, then btrfs is not the way to go; mdadm or lvm is the better choice. btrfs is made more for data integrity and flexibility.

Btw… there is a command called balance … :

       The primary purpose of the balance feature is to
       spread block groups across all devices so they
       match constraints defined by the respective
       profiles. See mkfs.btrfs(8) section PROFILES for
       more details. The scope of the balancing process
       can be further tuned by use of filters that can
       select the block groups to process. Balance works
       only on a mounted filesystem. Extent sharing is
       preserved and reflinks are not broken. Files are
       not defragmented nor recompressed, file extents
       are preserved but the physical location on
       devices will change.

       The balance operation is cancellable by the user.
       The on-disk state of the filesystem is always
       consistent so an unexpected interruption (eg.
       system crash, reboot) does not corrupt the
       filesystem. The progress of the balance operation
       is temporarily stored as an internal state and
       will be resumed upon mount, unless the mount
       option skip_balance is specified.

It is useful for raid0/1 etc…

btrfs balance start /mnt

Thanks for the details! Very helpful for understanding things better.
Since I'm experiencing some other trouble, I might redo the installation, and maybe I'll retry btrfs with those options…
I'll update here if so…