How to remove remnant RAID

A disk failed on my NAS a few weeks ago, and while trying to diagnose it I researched how to attach the RAID volume to this PC (since QNAP’s interface would not work). Now, though, that non-working setup still seems to be present in the system, even though I think I reversed all the steps when I removed it. I never found a good overview of how this all fits together, so although I tried to follow the sequence described by a couple of different people, I still can’t say I understand it properly.

Gnome Disks shows:

  • 8.0 TB RAID-1 Array at /dev/md/1_0
  • 80 GB Block Device at /dev/vg1/lv544
  • 7.9 TB Block Device at /dev/vg1/lv1
$ sudo mdadm --detail /dev/md/1_0
/dev/md/1_0:
           Version : 1.0
     Creation Time : Fri Jul 10 15:27:34 2015
        Raid Level : raid1
        Array Size : 7804070912 (7.27 TiB 7.99 TB)
     Used Dev Size : 7804070912 (7.27 TiB 7.99 TB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sun Jul 27 18:20:33 2025
             State : clean, FAILED 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

    Number   Major   Minor   RaidDevice State
       2       8       35        0      active sync   missing
       -       0        0        1      removed

This seems to be the same thing…

$ sudo mdadm --detail /dev/md124
/dev/md124:
           Version : 1.0
     Creation Time : Fri Jul 10 15:27:34 2015
        Raid Level : raid1
        Array Size : 7804070912 (7.27 TiB 7.99 TB)
     Used Dev Size : 7804070912 (7.27 TiB 7.99 TB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sun Jul 27 18:20:33 2025
             State : clean, FAILED 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

    Number   Major   Minor   RaidDevice State
       2       8       35        0      active sync   missing
       -       0        0        1      removed
$ sudo mdadm --stop /dev/md/1_0
mdadm: Cannot get exclusive access to /dev/md/1_0:Perhaps a running process, mounted filesystem or active volume group?
$ sudo mdadm --stop /dev/md124
mdadm: Cannot get exclusive access to /dev/md124:Perhaps a running process, mounted filesystem or active volume group?

No file system is mounted and the disk is not present. vgs and vgdisplay produce no output.
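I assume the "cannot get exclusive access" error means something is still sitting on top of the array, perhaps the vg1 volumes that Disks lists, so I would expect something along these lines to show (and release) whatever is holding it - the vg1 name is just taken from the Disks output above:

$ lsblk /dev/md124              # anything stacked on top of the array shows up as children
$ sudo dmsetup ls               # device-mapper entries such as vg1-lv1 would appear here
$ sudo vgchange -an vg1         # deactivate the volume group if it is somehow still active
$ sudo mdadm --stop /dev/md124  # then try stopping the array again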

Disclaimer - I am no expert on mdraid and even less so with LVM - I only have basic knowledge of setting up mdraid.

You need to set the missing raid member to failed before you can remove it.
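Something along these lines should do it - /dev/sdX3 is only a placeholder for the actual member partition, and the detached keyword is the mdadm man page’s shorthand for members that are no longer connected:

$ sudo mdadm /dev/md124 --fail /dev/sdX3 --remove /dev/sdX3   # mark a specific member faulty, then remove it
$ sudo mdadm /dev/md124 --fail detached --remove detached     # or do the same for anything no longer present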

Or you can use cgdisk and change the partition type to 8300 - it may even ask whether you want to remove the raid signatures.
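If you prefer a non-interactive route, something like this does the same job - disk and partition number are placeholders, and wiping the signature destroys the raid superblock, so double-check the target first:

$ sudo sgdisk --typecode=3:8300 /dev/sdX                   # set partition 3 to the plain Linux filesystem type
$ sudo wipefs --all --types linux_raid_member /dev/sdX3    # erase only the raid signature on that partition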

Also check your /etc/mdadm.conf.
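For example, this shows any ARRAY lines in the config plus what mdadm itself detects from on-disk superblocks - the Debian/Ubuntu path is an assumption, adjust for your distro:

$ grep -v '^#' /etc/mdadm.conf       # or /etc/mdadm/mdadm.conf on Debian/Ubuntu
$ sudo mdadm --examine --scan        # arrays mdadm can still see from superblocks on your disks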

My server hosting my notepad et al. took a nosedive, and I am in the slow process of getting it back to a functional state, at which point I decided to dust off my limited mdraid knowledge.

Thanks for the response.

There’s no real config in /etc/mdadm.conf as I’ve never set up anything on this computer. I just used some commands to get an idea of what was going on with the disks since the Synology was unresponsive.

So I need to do mdadm <device> --fail? But I’m not sure what exactly it expects for <device>.

$ mdadm /dev/md/1_0 --fail
mdadm: error opening /dev/md/1_0: No such file or directory
$ mdadm /dev/md124 --fail
mdadm: error opening /dev/md124: No such file or directory
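As far as I can tell from the man page, --fail also wants the member device after the array name (e.g. mdadm /dev/md124 --fail /dev/sdX3, with /dev/sdX3 being whatever the member actually is), but given the "No such file or directory" errors I guess the first thing to check is whether the kernel still has anything assembled at all:

$ cat /proc/mdstat    # arrays the kernel currently knows about
$ ls -l /dev/md*      # whatever md device nodes still exist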

Somewhere in your configuration references still exist - it may be disks in your system where partitions have the raid partition type - which is why I mentioned changing the partition type on the relevant partitions.

If you have completely removed the disks and put them back into your disk station, I suggest you check your remaining disks. I have no low-level knowledge here, so I am only guessing, but perhaps with one of the commands you have assigned one of your local disks as a raid member…

There are no RAID disks present in my system.

Then you need to check your remaining partitions’ partition type…

You don’t get errors like this out of the blue… you have - inadvertently perhaps - changed something that makes the kernel think you have a raid configuration…
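Something like this should list the partition types and any leftover raid signatures (the PARTTYPENAME column needs a reasonably recent util-linux):

$ lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME   # look for "Linux RAID" / linux_raid_member entries
$ sudo blkid | grep -i raid                # partitions still carrying a raid signature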

Check the configuration in /etc/lvm - if I recall correctly, Synology uses a combination of LVM and mdraid.
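Something along these lines would show whether LVM still has metadata or physical volumes referring to the array - those paths are the stock LVM locations and may differ on your distro:

$ ls /etc/lvm/backup /etc/lvm/archive   # LVM keeps metadata backups of volume groups it has seen
$ sudo pvs                              # physical volumes LVM still recognises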

Well, what I didn’t do before replying was open Disks or GParted, or run mdadm --detail /dev/md/1_0 again.

The disk is no longer there after changing kernels the other day. Sorry to have wasted your time.

Closed then …