Can I transform 5 linux_raid_member partitions into a single btrfs partition on an external HDD without damaging 100 GB of data? Using a Raspberry Pi 3 B+

So I’ve had this setup:

  • QNAP TS-128A
    • 2 TB HDD

And it died, pretty quickly if I may add.
So no more NAS servers for me.

I’m now going for this setup.

  • Raspberry Pi 3 B+ (will be a 4 soon)
  • External USB enclosure with the 2 TB HDD

That’s the same drive from the QNAP.

So I’ve noticed that the HDD is split up into 5 partitions…

[folaht@Stohrje-uq /]$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0   1,8T  0 disk
├─sda1        8:1    0 517,7M  0 part
├─sda2        8:2    0 517,7M  0 part
├─sda3        8:3    0   1,8T  0 part
├─sda4        8:4    0 517,7M  0 part
└─sda5        8:5    0     8G  0 part
mmcblk0     179:0    0  29,8G  0 disk
├─mmcblk0p1 179:1    0 213,6M  0 part /boot
└─mmcblk0p2 179:2    0  29,6G  0 part /home

And they’re all RAID partitions…

[folaht@Stohrje-uq /]$ sudo mount /dev/sda1 /mnt
mount: /mnt: unknown filesystem type ‘linux_raid_member’.
[folaht@Stohrje-uq /]$ sudo mount /dev/sda2 /mnt
mount: /mnt: unknown filesystem type ‘linux_raid_member’.
[folaht@Stohrje-uq /]$ sudo mount /dev/sda3 /mnt
mount: /mnt: unknown filesystem type ‘linux_raid_member’.
[folaht@Stohrje-uq /]$ sudo mount /dev/sda4 /mnt
mount: /mnt: unknown filesystem type ‘linux_raid_member’.
[folaht@Stohrje-uq /]$ sudo mount /dev/sda5 /mnt
mount: /mnt: unknown filesystem type ‘linux_raid_member’.

and frankly, I don’t care about RAID partitions.

I don’t know what they are or what they do,
but I’m thinking I want everything in one neat little btrfs partition.

There’s about 100 GB of valuable data on it, probably on partition sda3.

Is it possible for me to somehow save that data?
I’m not familiar with RAID, so I’m not sure how to handle it.

I do know that there’s an lvm2_member underneath sda3, and running lvdisplay on it I get this.

[folaht@Stohrje-uq /]$ sudo lvdisplay
  WARNING: Unrecognised segment type tier-thin-pool
  WARNING: Unrecognised segment type thick
  WARNING: Unrecognised segment type flashcache
  WARNING: PV /dev/md2 in VG vg1 is using an old PV header, modify the VG to update.
  LV vg1/tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
  Internal error: LV segments corrupted in tp1.
  Cannot process volume group vg1

But I might be getting ahead of myself there, so I want to understand what to do about the RAID partitions first.

So you need to (re)create the RAID array the QNAP device used and mount that.

Unknown, and I doubt that accessing sda3 “alone” will let you access that data.

The whole partition listing looks weird to me, especially sda3, which seems to span the whole disk and therefore “include” the other partitions. Does that disk have an MBR partition table with sda3 as an extended partition?

$ fdisk -l /dev/sda

Try to scan for mdadm RAIDs and recreate an mdadm.conf, see RAID - ArchWiki.
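
A minimal sketch of that scan, assuming the md superblocks are still readable (the exact device and array names will vary):

$ sudo mdadm --examine /dev/sda[1-5]   # inspect the RAID metadata on each member
$ sudo mdadm --assemble --scan         # try to assemble every detectable array
$ cat /proc/mdstat                     # see which md devices came up
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf   # persist the result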

If it works, you need to mount the detected/created /dev/md? device and copy your important data somewhere else before you repartition /dev/sda.
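
If assembly works, it could look something like this; the md device name and LV name are assumptions, use what /proc/mdstat and lvs actually report:

$ sudo mount /dev/md0 /mnt             # md0 is assumed; check /proc/mdstat for the real name
# if the array holds LVM rather than a plain filesystem, activate the volume group first:
$ sudo vgchange -ay
$ sudo mount /dev/vg1/<lv-name> /mnt   # <lv-name> is a placeholder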


edit: The whole LVM stuff wasn’t there when I composed my post…

That looks like an mdadm RAID was already detected.
But as said :point_down: it seems to be corrupted.

To answer your question

Can I transform 5 linux_raid_member partitions into a single btrfs partition on an external HDD without damaging 100 GB of data?

No you cannot.

From the output you supplied, it appears the NAS was running an older version of LVM, and - again judging from the messages - your attempts to read the volumes could already have altered header information.

Before you experiment with anything, I recommend you acquire a couple of disks of at least 2 TB each and then use ddrescue or another tool to create an image of the former NAS disk (a sketch follows the tool list below). The image can be stored safely on one disk and copied to the second; only then should you work on the image to salvage data, if you think it is worth the effort.

Boot a rescue system, e.g. SystemRescueCD or another suitable tool (SystemRescue is based on Arch Linux and Xfce, so it should feel familiar), which ships tools such as:

  • ddrescue
  • testdisk
  • photorec

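A sketch of the imaging step, assuming the spare disk is mounted at /mnt/backup (all paths here are placeholders):

$ sudo ddrescue -n /dev/sda /mnt/backup/nas.img /mnt/backup/nas.map    # fast first pass, skip scraping
$ sudo ddrescue -r3 /dev/sda /mnt/backup/nas.img /mnt/backup/nas.map   # then retry bad areas up to 3 times
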
The 1.8 TB partition equaling the size of the disk is probably the volume holding the virtual partitions, but that leaves the question: where did the data go?

It appears there are three 517.7 MB partitions and one 8 GB partition - but where is the rest?

If you have no intention of acquiring extra disks, you can use testdisk or photorec on the device directly - but either way you need an extra disk with enough space to hold the files found by testdisk or photorec.
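
For example, pointed at the image created earlier rather than the live disk (the output directory is an assumption):

$ sudo testdisk /mnt/backup/nas.img                      # interactive: search for lost partitions in the image
$ sudo photorec /d /mnt/recovered/ /mnt/backup/nas.img   # carve recoverable files into /mnt/recovered/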

Then investigate which data is stored on which partition.
Then you could transfer the data with rsync or cp to the external disc.
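
A sketch of that final copy, with both mount points assumed:

$ sudo rsync -a --progress /mnt/recovered/ /mnt/external/backup/   # -a preserves permissions and timestamps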

Or you attach the USB disc to the NAS directly.

What part of the NAS died?
The discs or the NAS itself?

The NAS itself.
And I have given up on it.

It’s goodbye to the Storj adventure for me,
on to the next decentralized storage network.

You must find out which RAID level the NAS used.
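
For example (assuming the RAID superblocks on the members are still readable):

$ sudo mdadm --examine /dev/sda3 | grep -i 'raid level'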

It’s too late.

Because I misinterpreted a developer’s post saying “your last attempt” on the Storj forum,
and because I have a copy of the identification files, I thought I could maybe continue without the files.

I was wrong, but the disk has already been reformatted.

And I’m betting now that Storj will be overtaken by competitors with better networks soon.

I just hope enough copies were made and no one’s data went missing.

Good to hear that!
