Compiling a Linux driver for my HBA versus SATA RAID under Manjaro

Okay, after 1-2 weeks of focusing :100: on evaluating Manjaro KDE Plasma as my Windows replacement… I’d say my evaluation has concluded: now is the time to ditch Windows entirely.

One of the only things that didn’t appear to be detected/set up right away after the Manjaro install was the mirror built on my HighPoint HBA… Manjaro sees the two hard drives independently, not as one mirror.

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sdb           8:16   0   7.3T  0 disk 
├─sdb1        8:17   0    16M  0 part 
└─sdb2        8:18   0   7.3T  0 part 
sdc           8:32   0   7.3T  0 disk 
├─sdc1        8:33   0    16M  0 part 
└─sdc2        8:34   0   7.3T  0 part 

Running the inxi --admin --verbosity=7 --filter --no-host --width command shows that some (I assume) generic driver was used…

RAID:
  Hardware-1: HighPoint Device driver: mvsas v: 0.8.16 port: N/A 
  bus-ID: 24:00.0 chip-ID: 1103.2720 rev: 03 class-ID: 0104 
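A quick way to confirm which kernel module has actually claimed the controller is lspci; the vendor/device ID below is taken from the chip-ID in the inxi output above:

lspci -nnk -d 1103:2720
# the "Kernel driver in use:" line should report mvsas, the generic in-tree driver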

Perhaps there is some setup needed to get the drives to appear as one mirror under the existing driver… but then I found that HighPoint does provide a specific open-source Linux driver for my HighPoint RocketRAID 2720A HBA.

There is no guarantee that even this new driver is going to solve the mirror detection right away… but I do recall that the “WebGUI - RAID Management Interface” (also listed with the driver to install) had a tool for importing/exporting the RAID configs… so I suspect that would force the Linux driver/tools to recognize the pre-existing mirror.

The readme contained within the driver’s tar.gz makes it sound like a simple process… but then the notes go on to talk about kernel development packages and kernel driver modules… topics I am nowhere near understanding yet…

#############################################################################
2. Installation
#############################################################################

  1)  Extract the source package to a temporary directory.

  2)  Change to the temporary directory.

  3)  Run the .bin file to install the driver package.

    # chmod +x ./rr272x_1x-linux-src-vx.x.x-xx_xx_xx.bin

    # ./rr272x_1x-linux-src-vx.x.x-xx_xx_xx.bin
    or
    $ sudo ./rr272x_1x-linux-src-vx.x.x-xx_xx_xx.bin
    on Ubuntu system.

  NOTES:

    The installer requires super user's permission to run the installation.
  So if you are not logged in as root, please supply the password of root to
  start the installation.

    The installer will check and install build tools if some of them is missing.
  A network connection is required to install the build tools. The following
  tools are checked by the installer:

    make
    gcc
    perl
    wget

    They are installed automatically when you select "Software Development
  Workstation" group in the installation of RHEL/CentOS 6.

    When packages are installed from network, it may take too long to complete
  the installation as it depends on the network connection. The packages could
  be installed first to omit the network issue.

    If the installer failed to find or install the required build tools, the
  installation will be terminated without any change to the system.

    The installer will install folders and files under /usr/share/hptdrv/.

    And the auto build script will be invoked to download and install kernel
  development package if needed and the build driver module for new kernels
  automatically when system reboot or shutdown.

But I also recall hearing that RAID (software RAID?) under Linux is very robust and stable… unlike Windows software RAID.

So my question is… if you were in my shoes, trying to decide between an HBA (which isn’t a full hardware RAID card and relies on system resources… hmm, does that mean it’s FakeRAID?) and just yanking out the HBA so you could connect the drives directly to SATA and build a Linux RAID (mirror)… which path would you choose, and why?

Note 1: The data on the Mirror is not a deciding factor, as it was 100% copied over to my NAS before I installed Manjaro… reformatting the mirror as EXT4 is in the cards.

Note 2: I noticed in the driver release notes that the highest kernel mentioned is 5.10… and while that is the latest LTS kernel for Manjaro today, I found better performance/stability for my AMD 5600X CPU and 6800XT GPU by moving to the 5.13 kernel… so I’m not sure whether being on 5.13 is a limitation for a driver whose kernel support is only listed as high as 5.10, but it definitely strikes me as a concern.
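(If I do end up trying to build the HighPoint driver, my understanding is I’d at least need the headers for whatever kernel I’m running, roughly along these lines… the package name below just follows Manjaro’s linuxXYZ-headers naming convention, so adjust it for the kernel actually in use:)

uname -r                          # confirm the running kernel series, e.g. 5.13.x
sudo pacman -S linux513-headers   # matching headers, needed to compile out-of-tree kernel modules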

Can you check on this in the AUR?

yay -Ss highpoint
aur/rr62x 1.2-2 (+0 0.00) (Orphelin) 
    Kernel modules for Highpoint RocketRAID 62x SATA/6Gbps Card.
aur/rr62x-dkms 1.2-1 (+0 0.00) (Orphelin) 
    Kernel modules for Highpoint RocketRAID 62x SATA/6Gbps Card. (DKMS Version)
aur/rr264x-lts 1.5-3 (+0 0.00) (Orphelin) 
    Kernel modules for Highpoint RocketRAID 2640X1 SAS Card. linux-lts version
aur/rr264x-dkms 1.5-1 (+1 0.00) (Orphelin) 
    Kernel modules for Highpoint RocketRAID 2640X1 SAS Card. (DKMS version)
aur/rr264x 1.5-1 (+1 0.00) (Orphelin) 
    Kernel modules for Highpoint RocketRAID 2640X1 SAS Card.
aur/hptwebgui 2.3.1-1 (+1 0.00) (Orphelin) 
    WebGUI for HighPoint RocketRaid 2xxx/3xxx/4xxx RAID Controllers

These are not updated; they are orphaned AUR packages from 6 years ago, and apparently not the version he’s looking for. He needs rr272x, from what he wrote.

More than anything, I think my core questions are…

if you were in my shoes, trying to decide between fiddling with an HBA (which isn’t a full hardware RAID card and relies on system resources… hmm, does that mean it’s FakeRAID?) and just yanking out the HBA so you could connect the drives directly to SATA and build a Linux RAID (mirror)… which path would you choose, and why?

I would go the simpler way if that doesn’t change anything in performance and reliability. But I don’t know; I’ve never made a RAID on Linux (I think it is useless, especially nowadays with high-speed SSDs).

Thank you for the reply. The RAID-1 is just for data… I’m not booting from it or installing games/apps onto it… I use my NVMe for that.

I prefer to have a bit more redundancy and keep a copy of my NAS data on a local mirror.

I guess I’m just trying to figure out if the HBA is worth keeping installed with Manjaro… there was a period when I yanked it out to try AMD RAID, but I put it back when I found I didn’t like some of the driver limitations it put on Windows.

I’d always heard to stay away from Windows software RAID… and was hoping to hear some good news about Linux software RAID (mdadm?) that would help guide me to an informed decision.

This could be a good starting point to gather info, I guess: RAID - ArchWiki

If it were my system, I would lean toward the latter. “Do I have enough SATA ports?” If so, I’d use the SATA ports directly on the motherboard and use a software RAID solution (either mdadm, with EXT4/XFS on top, or ZFS with a mirror vdev).

Why? Because I wouldn’t be reliant on a hardware-controlled RAID solution, and thus I can transfer the drives to any other PC, invoke mdadm (or ZFS), and have immediate access to my data.
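Moving a software RAID between machines really is about that simple; on the new PC it’s roughly this (the mount point below is just an example):

sudo mdadm --assemble --scan   # scan for mdadm superblocks and assemble the array
cat /proc/mdstat               # confirm it came up
sudo mount /dev/md0 /mnt       # mount whatever file-system sits on top of it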

Sure, software RAID is “slower” than hardware, and on the surface doesn’t seem as streamlined, but with modern computers the speed factor is a non-issue (if you can even tell there’s a difference in speed).

If your NAS server runs ZFS, then you can even leverage the strength of record-based replications from your server to your local ZFS pool.
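As a rough sketch of what that replication looks like (the pool, dataset, and host names here are only placeholders):

# take a snapshot on the NAS, then send it to the local pool over SSH
zfs snapshot naspool/data@monday
zfs send naspool/data@monday | ssh desktop zfs receive localpool/data
# later snapshots can be sent incrementally, transferring only the changes
zfs send -i naspool/data@monday naspool/data@tuesday | ssh desktop zfs receive localpool/data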


EDIT: For the record, I’ve tried both mdadm and ZFS (mirror vdev), and have been happy with both. Even on an older system, mdadm did not have any notable issues with speed. (And even though OpenZFS 2.0+ is only available on the AUR, it’s a mature and stable solution on Linux. It might require a little extra work to get things up and running for the first time.)


Thank you for the reply, winnie!

I have 3 (of 4) SATA ports open thanks to 3 NVMe drives… only one old 840 EVO occupies a SATA port currently, so I anticipate no SATA shortages in the foreseeable future.

Could you share your opinion on EXT4 versus XFS (I think I’ll be sticking with mdadm)?
I was trying to learn from this post… but found that many of the opinions/experiences seemed to contradict each other… everyone’s got an opinion after all, and one person’s trash is another’s gold :wink:

That’s a great reason, and really hits the nail on the head for where I find myself right now.

I mean, there is a part of me contemplating keeping the mirror available to Windows, as I may (for the short term) keep the one NVMe that’s booting Windows as-is (perhaps until I figure out virtualization), just so I can go back and fetch something not migrated in my transition to Manjaro… but I guess it doesn’t matter if Windows sees its old data drive in that scenario.

Speed isn’t a concern for me… this just gives me access to a backup of my backup if/when I need it.

I set up my Synology over 5 years ago (long before any significant Linux knowledge)… and selected the initial choices that gave me 2-drive fault tolerance… which apparently used btrfs…


… perhaps that changes in the future as I learn more :wink:

I won’t get into it too much, because as you know, if there’s anything more volatile than politics, it’s software. :wink: (Many lives have been lost in the massive and ongoing GNOME vs KDE wars.) :sleepy:

I hosted a party a while back and invited families of Democrats and Republicans. We had healthy discussions and debates with much respect for each other… until some naive fool just had to blurt out “So which desktop environment do you use?” :man_facepalming: Mayhem ensued.

I’ve used EXT3 (early on) and then EXT4 exclusively, until a couple of years back when I completely switched over to XFS. The reason for ext3/4 was that I followed the guideline of “just use the defaults, you’ll be okay.” And it’s true. It has served me well, as when it comes to storing your data, “boring” is better than “exciting” and niche.

However, XFS has matured tremendously, has active development behind it, and is now the default file-system for Red Hat Enterprise Linux. It performs very well on solid-state media, and is quite friendly with “trimming.” You can research some benchmarks between the available file-systems, as well as low-level scans that show how “tight” and contiguous unused pages are laid out on an SSD after usage and trimming. (It’s also a reason why SSDs make for poor “plausible deniability” drives if you plan on using LUKS encryption, since low-level forensics can reveal how much space is currently being occupied on the SSD simply by comparing used to unused pages. You didn’t make enemies with anyone in the CIA, did you?)

My own use with XFS reflects what benchmarks show about its performance over EXT4. But someone will come in here and say I’m full of it. :wink: Hey, whatever works best for you is what matters.

You might not notice any difference between the two, and there’s nothing wrong with using either of them atop mdadm/LVM.
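Either way, once the mdadm array exists, the choice boils down to a single format command (assuming the array shows up as /dev/md0):

sudo mkfs.ext4 /dev/md0   # EXT4 on top of the mdadm mirror
# or
sudo mkfs.xfs /dev/md0    # XFS on top of the mdadm mirror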


How true that saying is! The article you linked spans from 2012 to 2014, though. It’s quite old and much has changed and improved since then.

Pound for pound? ZFS exceeds mdadm in terms of robustness and reliability. ZFS itself is a beast, and the more you read about it, the more you think “How the heck does it do all of that?!” The catch is that it has a greater learning curve, takes some time to understand and set up, and you need to know how to leverage its features. As it stands now, it’s not an “out-of-the-box” solution for Linux desktops.

(ZFS isn’t just a “file-system” per se. It’s an all-in-one solution that combines the role of file-systems and metadata, data integrity and redundancy, snapshots and delta replications, native encryption, as well as other modular features and safeguards.)
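For comparison, a two-disk ZFS mirror gets created (pool, vdev, and file-system) in essentially one step; the device names below are placeholders, and /dev/disk/by-id paths are generally preferred:

sudo zpool create tank mirror /dev/sdx /dev/sdy   # pool with a mirror vdev and a root dataset
sudo zfs create tank/data                         # optional child dataset for your data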

You might be better served with mdadm, as long as you’re keen on keeping an eye on your disks’ health and staying on top of backups and recoveries. It’s mature and more “to-the-point”, and thus more familiar. A traditional file-system sits on top of your software RAID, and you use standard methods, such as “fstab” and mount commands.


I don’t think I understand this part? It’s not just that Windows doesn’t understand mdadm: it doesn’t understand EXT4/XFS/ZFS anyway. The only way to have the data available to both Windows and Linux would be to use an interoperable file-system, such as NTFS or exFAT (on top of hardware RAID, in your case, which again means you lose flexibility).

EDIT: Yes, I am aware there are third-party solutions for reading non-native file-systems on Windows, but it’s not something I would lean on myself.


Thank you for the reply, winnie!

I like the baby-steps approach… crawl, walk, then run. My mirror will only be 8TB, so I don’t see any immediate gains/features pulling me in the XFS direction… and that may be because I don’t fully understand what I’m reading… but knowing I can start with “boring” EXT4 and “be okay”, that’s music to my ears :innocent:

I unintentionally omitted some context… the drives I want to mirror are formatted NTFS right now, meaning they are readable by both OSes… but then I realized that even if I booted Windows to retrieve something “tomorrow”, I wouldn’t care if it had access to the mirror anymore, as its contents wouldn’t be what I was after while in Windows.

I’m also planning on one large partition/volume, so I didn’t think I’d need to worry about using LVM in my case… I’m of course assuming it’s optional :slight_smile:

If I’m reading the Arch Raid Wiki right, I figure my steps should be…

  1. Prepare the devices… mdadm --zero-superblock /dev/sdxx (drive and/or partitions? currently NTFS)
  2. Re-partition the two drives (omitting the g step since the drives are already GPT) so they each have only one partition, with the partition type Linux RAID, in fdisk… use fdisk /dev/sdx to remove the existing NTFS partitions (‘p’ to list them, ‘d’ to delete them one by one), add a new partition (‘n’ for new, ‘p’ for primary, 1 partition, default start, default end (all space)), change the partition type to Linux RAID (‘t’, then type 29), and ‘w’ to write all the changes.
  3. Build the array with mdadm… # mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md/RAID1Array /dev/sdb1 /dev/sdc1
  4. Wait for mdadm to finish, checking its progress with… cat /proc/mdstat
  5. Explicitly add the array to mdadm config… # mdadm --detail --scan >> /etc/mdadm.conf
  6. Assemble the array… # mdadm --assemble --scan
  7. Format the array… I’m thinking that because this is a mirror I may not need to specify stride/stripe-width (since RAID1 was absent from the wiki’s format examples)… or do I need them? I was initially thinking # mkfs.ext4 -v -L DataMirror -b 4096 /dev/md0 may be what I need?
  8. Once formatting is complete… reboot and make sure the Raid array is still seen/available?
  9. At some point soon, set up scrubbing / RAID maintenance (rough sketch after this list)
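For step 9, the manual scrub from the wiki looks like it boils down to something like this (assuming the array ends up as md0):

# kick off a data-consistency check ("scrub") of the mirror
echo check | sudo tee /sys/block/md0/md/sync_action
# watch its progress
cat /proc/mdstat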

Hmmm… does step 3 also set the mount point like I might do for other drives in fstab? Or would I need to create an fstab entry as well at some point?

Not necessary. You never used them as part of an mdadm array. You can skip to the partitioning steps (preferably creating a new partition table, either gpt or msdos/mbr).


These days I prefer parted over fdisk. It supports gpt and msdos/mbr tables, and it defaults to aligning the start of a partition. (Gparted and KDE Partition Manager use parted on the backend.)


Your hunch is correct. There is no tweaking necessary for a mirrored array: stripe and parity are irrelevant. It’s one of the beauties of a mirror vs. the other layouts: it’s simpler, rebuilds much faster, and has no overhead cost of calculating parity bits.

I believe mkfs.ext4 now defaults to a 4K block size, anyways.


It happens in two steps (three or four if you’re also using LVM and LUKS). First, the array is assembled based on your conf file. Second, once available, it is /dev/md0 that must be specified in your fstab as the ext4 file-system you wish to mount automatically / upon rebooting.
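The fstab side is then just an ordinary entry pointing at the md device (the mount point and options below are only an example):

# /etc/fstab — mount the ext4 file-system that lives on the assembled array
/dev/md0   /mnt/datamirror   ext4   defaults,noatime   0 2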


EDIT: Just to be clear for anyone else who might stumble upon this: once you get started, everything that exists on the drives you wish to use will be gone. Don’t even start at step 1 until you’re 100% sure you don’t need the data on the drive(s).


Thank you again winnie, I’ll look into parted.

You can use KDE Partition Manager (VERY CAREFULLY, MAKING SURE YOU ARE WORKING ON THE DRIVES TO BE WIPED) if you prefer a user-friendly GUI version of parted.

Otherwise, just invoke parted on each drive separately, such as:
sudo parted /dev/sdf

Upon entering parted, you can type “help” to view a list of commands. Unlike fdisk, parted applies the commands immediately, not upon exiting.
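As a rough non-interactive equivalent (this destroys everything on /dev/sdf, so triple-check the device name):

sudo parted --script /dev/sdf mklabel gpt               # new GPT table, wipes the old layout
sudo parted --script /dev/sdf mkpart primary 0% 100%    # one partition spanning the whole disk
sudo parted --script /dev/sdf set 1 raid on             # flag partition 1 as Linux RAID
sudo parted --script /dev/sdf print                     # review the result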


I think you’re right, winnie. The only thing I did not see in KDE Partition Manager was a way to set the partition type to RAID… but if I accomplish all the partition removal and new partition creation in KDE-PM and then run parted /dev/sdx to set 1 raid on, that isn’t so bad.

I found this nice little parted guide for raid disks… but I think it assumes you are starting from blank/wiped disks.

Yup.

For file-system type, you can select from the drop-down menu “unformatted”.

The steps later on (mdadm, mkfs.ext4) are what matters in the end.


Really good tip… I probably would have selected EXT4 on autopilot… so it’s good to note to select unformatted.

EDIT: oh cool, I just realized I’ll already know the UUID from the /etc/mdadm.conf file updated by the # mdadm --detail --scan >> /etc/mdadm.conf command, so that’ll make updating fstab much easier… no need to use sudo blkid to dig it out

I’m at the point where the array is being initialized… sudo mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md/RAID1Array /dev/sdb1 /dev/sdc1

While waiting, I started looking around and noticed that I’m seeing references to md127 in KDE Partition Manager and the cat /proc/mdstat progress check. Dolphin on the other hand shows a /dev/md entry with the forming array.

So I am just curious: at what point in the process will the RAID array be seen/referenced as /dev/md0? Is md127 just a temporary “in progress” identifier? 127 makes me think of “local”/loopback (127.0.0.1).

Still to be done are… updating the mdadm config sudo mdadm --detail --scan >> /etc/mdadm.conf, assembling the array sudo mdadm --assemble --scan, formatting the array sudo mkfs.ext4 -v -L DataMirror -b 4096 /dev/md0 (or will it still be md127?), and updating fstab to set my mount point.

P.S. I’ve come to realize that the “ETA”/finish time calculated in the status check is based on the percentage remaining and speed, which makes sense as it can’t forecast for the slowdown of every mechanical drive as it approaches the outer edge of the disk. I wish the initial 11 hour projection was true… resync = 0.3% (27497280/7813893120) finish=649.8min speed=199708K/sec… because 10.5 hours later, I thought this would be done in 30 minutes but there is still an ETA of 3.5 hours resync = 78.2% (6116768768/7813893120) finish=212.8min speed=132900K/sec , which I figure is going to keep pushing out as the drive continues to slow down. I had thought I’d be setting up the data transfer before bed… but it looks like that’ll have to wait for the morning.

Hmmm, seems I’ve hit a wall and yet made great progress?

After the array was initialized…

cat /proc/mdstat
Personalities : [raid1] 
md127 : active raid1 sdc1[1] sdb1[0]
      7813893120 blocks super 1.2 [2/2] [UU]
      bitmap: 0/59 pages [0KB], 65536KB chunk

… I ran the next command and was denied…

sudo mdadm --detail --scan >> /etc/mdadm.conf
bash: /etc/mdadm.conf: Permission denied

I did a search and learned that if I did a sudo -i first it flipped me to root, and then the command would work.

I looked at the contents of the /etc/mdadm.conf and found the following line was added… ARRAY /dev/md/RAID1Array metadata=1.2 name=AM4-x5600-Linux:RAID1Array UUID=54abcbfa:cc3fbecd:e4aad7e5:7961912d

Oh, and I also noticed this time that a “file”/link also exists in /etc called md127

I thought I would try to proceed… thinking that perhaps a further step would resolve the md127/md0 riddle… [AM4-x5600-Linux ~]# mdadm --assemble --scan gave no feedback, so I assume it worked… but then I ran into trouble with the next formatting command…

[AM4-x5600-Linux ~]# mkfs.ext4 -v -L DataMirror -b 4096 /dev/md0
mke2fs 1.46.2 (28-Feb-2021)
The file /dev/md0 does not exist and no size was specified.
[AM4-x5600-Linux ~]# mkfs.ext4 -v -L DataMirror -b 4096 /dev/md127
mke2fs 1.46.2 (28-Feb-2021)
/dev/md127 contains `OpenPGP Public Key' data
Proceed anyway? (y,N) N
[AM4-x5600-Linux ~]# mkfs.ext4 -v -L DataMirror -b 4096 /dev/md/RAID1Array
mke2fs 1.46.2 (28-Feb-2021)
/dev/md/RAID1Array contains `OpenPGP Public Key' data
Proceed anyway? (y,N) N

In any of the scenarios I tried, I wasn’t comfortable proceeding with the “Proceed anyway” prompt… so I aborted. I’m not sure how much of a pickle I’ve created here…

  1. Is part of the issue that I didn’t understand I needed to sudo -i as a first step?
  2. Copying/pasting/revising the wiki example, I noticed I specified /dev/md/RAID1Array in my sudo mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md/RAID1Array /dev/sdb1 /dev/sdc1 command… should that have been a reference to md0? Or is it because I didn’t execute the command after sudo -i that md127 was picked up?
  3. Can I recover from this step, or do I need to start over?
  4. Is the /dev/md127 contains "OpenPGP Public Key" data… Proceed anyway? (y,N) prompt expected, and should I proceed?
  5. A little online search revealed

Any time something is mounted as md127 it almost always means there is no entry for this mdadm array in the mdadm.conf in initramfs (which is separate from your actual /etc/mdadm.conf).

… not sure what to do with this info, or if it’s accurate… but dracut (from the above link) is not listed in the Arch Wiki, so I’m not sure what our equivalent command would be.

After a bit more digging, I found another post which seemed to be in step with the Arch RAID wiki’s note that “Every time when you make changes to /etc/mdadm.conf, the initramfs needs to be regenerated.”… so I executed # mkinitcpio -P and rebooted; however, all seems as it was before.
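In case it matters for anyone following along, the Arch RAID wiki also mentions an mdadm_udev hook for mkinitcpio; since I’m not booting from the array I’m not sure it even applies here, but the check itself is simple:

grep ^HOOKS /etc/mkinitcpio.conf   # look for mdadm_udev in the HOOKS array
sudo mkinitcpio -P                 # regenerate the initramfs for all installed kernels after any change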

I thought I’d full-stop before I made anything worse… but then I wasn’t sure this really was such a bad situation. What if the format command was just seeing some old NTFS remnants on these previously used drives and confusing them with “OpenPGP Public Key” data? And what if the md127/md0 “issue” resolves itself after formatting the partition? So I pressed on… and decided I was happier proceeding with the /dev/md/RAID1Array reference…

[AM4-x5600-Linux ~]# mkfs.ext4 -v -L DataMirror -b 4096 /dev/md/RAID1Array
mke2fs 1.46.2 (28-Feb-2021)
/dev/md/RAID1Array contains `OpenPGP Public Key' data
Proceed anyway? (y,N) y
fs_types for mke2fs.conf resolution: 'ext4', 'big'
Filesystem label=DataMirror
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
244187136 inodes, 1953473280 blocks
97673664 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4102029312
59616 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Filesystem UUID: 6487110f-670a-4bac-b88f-e422fb107071
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000, 214990848, 512000000, 550731776, 644972544, 1934917632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done 

This appeared to be good news!

So I pressed on and used the UUID from a fresh sudo blkid command to create an /etc/fstab entry… UUID=6487110f-670a-4bac-b88f-e422fb107071 /data/raid1 ext4 defaults,noatime 0 0 (after first running mkdir /data/raid1)… and was able to mount /data/raid1 without issues… more good news! (Note: the UUID created during the format matched the blkid UUID… it seems the mdadm.conf UUID is different.)

So as far as I can tell, things have worked out (in many ways) so long as I focused on /dev/md/Raid1Array instead of /dev/md0 (or /dev/md127)… was md0 a red herring? Am I actually successful/complete? Or is there still a step missing to take care of md127/md0?

Note: I also found I needed to execute $ sudo chown $USER:$USER /data/raid1 so I could add/delete data to/from the drive without needing to use sudo all the time. And a data copy is now in progress $ rsync -vrh --progress --exclude="@eaDir" /data/synology/* /data/raid1/.

It must have been different when I used it back in Ubuntu and openSUSE, but there’s no need to start over again. If no name (i.e., “0”) is specified during creation, it will symlink /dev/md/Raid1Array to /dev/md127.

This can easily be fixed very quickly.

Unmount the file-system and proceed with:

sudo mdadm --stop /dev/md/Raid1Array
sudo mdadm --assemble /dev/md/Raid1Array --name=0 --update=name /dev/sdx1 /dev/sdy1
su -c "mdadm --detail --scan >> /etc/mdadm.conf"

You might have to delete older entries in your ARRAY list in mdadm.conf.

This will force mdadm to update the name before assembling it again. Now your symlink changes automatically, pointing to /dev/md0 instead of /dev/md127. (It makes no real difference if you’re using the friendlier name, such as /dev/md/Raid1Array, or the UUID.)
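You can verify the rename took effect with something like:

readlink -f /dev/md/Raid1Array   # should now resolve to /dev/md0
sudo mdadm --detail /dev/md0     # shows the array name, UUIDs, and member devices
cat /proc/mdstat                 # md0 should be listed instead of md127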


As for the OpenPGP signature warning? That’s very, very weird. Perhaps it was from remnants of a previous table. I went through your steps on Manjaro KDE (using loop devices instead of real drives), and never encountered such an odd warning.


For reference, here is what my line looks like in /etc/mdadm.conf, which correctly assembles my array without any extra specifications or flags:

ARRAY /dev/md/Raid1Array metadata=1.2 name=linuxpc:0 UUID=e98cc981:eb112c82:44b734fd:3154f16b

It properly assembles with:

sudo mdadm --assemble --scan

When checking /dev/md/Raid1Array, it is a symlink to /dev/md0.

An alternative approach for my entry can look like this, if I don’t use the UUID:

ARRAY /dev/md/Raid1Array metadata=1.2 name=linuxpc:0 devices=/dev/loop0,/dev/loop1


EDIT: My tests with loop devices will fail upon reboot, since they will not be available as block devices that mdadm scans for matching signatures/UUIDs in the superblock. In your case, it’s assumed that your two drives (and hence the two partitions that have mdadm superblocks) are available upon rebooting, and thus will be assembled according to mdadm.conf, and the ext4 file-system will then be mounted according to your fstab.

In your fstab you can use the device name of /dev/md0, or /dev/md/Raid1Array, or the UUID of the file-system (not the mdadm UUID). To make things easier, might as well stick to using /dev/md/Raid1Array since it will always be a symlink that points to the correct device.


NOTE:

  • UUID in mdadm superblock is used to select the correct devices/partitions and assemble them together.
  • UUID for the file-system is used to mount, fsck, etc, the partition/device that the file-system was formatted on.

You can grab the latter by running this command on the fully assembled array:

sudo dumpe2fs /dev/md/Raid1Array | grep UUID

This will only work for ext2,3,4, but not for XFS.
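For XFS (or anything else), blkid will report the file-system UUID just the same:

sudo blkid /dev/md/Raid1Array   # prints the file-system UUID for ext4, XFS, and others alike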


I still stand by the fact that it’s easier to use the friendly name (/dev/md/Raid1Array) to specify and mount your file-system, since it will predictably use this reference based on your ARRAY line in mdadm.conf. There’s no reason this symlink will be named something random between reboots.
