dd-copying my system NVMe to a faster and larger NVMe?

I just bought it. Soon I will try to use it as my system disk.

I’m using btrfs with snapshots on the system disk right now.

So without commands for now my plan is:
Put the faster, larger NVMe (fln) in the faster M.2 slot and move the slower, smaller system NVMe (ssn) to the slower slot.
From a live system, dd-copy the ssn to the faster drive fln.
Resize the system partition (I don’t have a separate /home partition) to fill the complete size of fln, or create a new partition to use the extra space.
Use fln henceforth for my system and ssn as an extra drive.
Then reformat ssn, maybe to ext4, maybe to btrfs.

Does my plan sound sound? What do I have to consider regarding booting? Regarding UUIDs and fstab? Regarding grub? I used the Calamares standard encryption method for my ssn… At the point when both are attached and ssn has been cloned to fln but not yet reformatted, will there be conflicts? Will the re-plugging from PCIe 4.0 x3 to PCIe 3.0 x3 have to be considered in a special way for my plan?

Don’t! :see_no_evil:

There is a reasonable warning about doing ANY block copy of a btrfs filesystem.
(You may lose both file systems at the same time. I know this from experience.)

Please also read at “readthedocs” what btrfs developers say about this.

(This problem may change after kernel 6.7)

I can’t say if this applies while the filesystem is encrypted and not seen by the kernel. BUT be careful. The problem arises not when mounting, but already when the kernel “sees” both filesystems.
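For an unencrypted dd clone, one way to defuse the hazard is to give the copy a new filesystem UUID before the kernel can mount anything. A minimal sketch, assuming the clone sits on a placeholder partition /dev/nvme1n1pX:

# check whether two block devices report the same btrfs UUID
lsblk -o KNAME,FSTYPE,UUID
# btrfstune -u (from btrfs-progs) writes a new random UUID to an
# unmounted btrfs filesystem, so the clone no longer collides
sudo btrfstune -u /dev/nvme1n1pX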


Wow, this sounds kinda risky :smiley:

Move a volume

There is an easy and secure way to move a volume to another disk/device. If you use Btrfs itself to move the volume, there will be no danger. You can even do this while the volume is in use.

  • Create the partition you want to use as the destination without formatting it, or remove the filesystem if one is present
  • Add the destination device to your volume by

root # btrfs device add /dev/[destination] [path to filesystem]

  • Remove the source device from your volume by

root # btrfs device remove /dev/[source] [path to filesystem]

Btrfs will notice that this setup requires moving all data from the source device to the destination device, and it will immediately start moving data in the background. Meanwhile you can use your PC as usual.

  • Empty blocks will not be moved
  • Compressed data will remain compressed
  • All snapshots will remain
  • The UUID of the filesystem will remain the same, but btrfs will be aware of this
  • If you used the UUID to identify your volume, you won’t even need to edit /boot/grub/grub.cfg or /etc/fstab (see the quick check after this list)
  • Just don’t shut down while the move of the volume is still incomplete.
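A quick sanity check for that last UUID point, as a hedged sketch run on the installed system:

# confirm that fstab and grub reference the filesystem by UUID,
# not by a /dev/... device path that would change with the new disk
grep 'UUID=' /etc/fstab
grep 'UUID=' /boot/grub/grub.cfg | head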

If you want to watch the volume move, inside a terminal:

user $ pamac install procps-ng

root # watch -n 60 btrfs filesystem show /

So the filesystem could still be mounted while moving everything over… That I cannot quite comprehend. And once the moving is done, I can just power off? :smiley:
If one tries that, then better from a live system. Not sure if I want to.
The volume will be everything, including subvolumes?

Yes, from the point of view of ext4, mdadm, ReiserFS, … from the past, this is impossible.

But with Btrfs

  • You add a device (so it can use both devices). This is fast.
  • You remove a device (so that it cannot be used in the future). That’s a bit slow.
  • Btrfs will now start moving data.
  • The removal is completed once all data has been moved.

If you turn off the system in between, btrfs remembers that this process must be completed the next time you start the system. After booting, btrfs continues moving data until ALL data is moved. The task is then marked as complete and the removal is finished. (I have done this several times.)

If you are afraid, you can try this from a live system. But you will need to mount the filesystem to be able to do this. So I do it from the normally running system. You can even watch the progress :smiling_face_with_three_hearts:

Done this way, the btrfs filesystem keeps its UUID: when you add the device, it gets the same filesystem UUID as the first device (but with device number +1, i.e. ID: +1), so the kernel will not be confused.

But no matter how you do it, always make a backup externally first (where the kernel can’t reach it).

:footprints:


Hm, add, ok. The remove part makes me shaky. I could detach ssn (the old, slower NVMe) and try to boot from the new one too? Then I’d at least have a backup until I know it works.

I spoon-fed you the commands but removed them again since you did not seem to appreciate it.
Good luck! :slight_smile:


Yeah probably didn’t hit that heart yet. :wink: Thanks though.

You think THAT was what it was about? Ok buddy.
Good luck, I’m pressing mute on the thread. :rofl:

But Andreas is a master of btrfs so you don’t have to worry about anything. :slight_smile:


I can understand that. Trust in btrfs only comes when you have already done it several times, or when you understand exactly why and how it works.

An important aspect is that btrfs NEVER overwrites data in place (CoW), so this will even work if the power supply fails in between.
I don’t know of any filesystem that is so robust (except JFFS2). We use btrfs on devices of about 30T, which are often simply switched off instead of being shut down.

But I can’t help you with the encryption part :man_shrugging:


Thank you AK for your explanations. Someone on the btrfs IRC mentioned I should do it with btrfs-replace(8) — BTRFS documentation.
Would this include getting
/dev/nvme0n1p1 4096 618495 614400 300M EFI System
over to the new drive? And would that then suffice to get a bootable new drive?

As far as I understand, everything should be one big container. Grub just needs to be pointed at the UUID of the right disk to cryptomount at boot, which is what makes dd handy for ext4 cloning.
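For reference, on a Calamares-encrypted install the cryptomount reference typically lives in /etc/default/grub; a hedged example, with placeholder UUIDs:

grep cryptdevice /etc/default/grub
# typical output (UUIDs are placeholders):
# GRUB_CMDLINE_LINUX="cryptdevice=UUID=xxxxxxxx-xxxx:luks-xxxxxxxx-xxxx root=/dev/mapper/luks-xxxxxxxx-xxxx"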

Replace seems to me like add and delete in one big step.
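From the linked docs, a minimal sketch of what that would look like (source and target are placeholders, and the filesystem must be mounted):

sudo btrfs replace start /dev/[source] /dev/[target] /
sudo btrfs replace status /
# if the target is larger, grow the filesystem afterwards:
sudo btrfs filesystem resize max /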

But I can’t help you with that. I never used it. :man_shrugging:


(but deviceNr+1)
Can you give an example, like what would change here?
/dev/nvme0n1p1
/dev/nvme0n1p2
/dev/nvme0n1p3
would be the current system?

What I mean is not the Linux device name (from /dev/…) but the device number (ID:) inside btrfs:

sudo btrfs device usage /                                                                                                                        ✔ 
/dev/sda2, ID: 1
   Device size:           900.00GiB
   Device slack:              0.00B
   Data,RAID1:            709.00GiB
   Metadata,RAID1:         13.00GiB
   System,RAID1:           32.00MiB
   Unallocated:           177.97GiB

/dev/nvme0n1p3, ID: 3
   Device size:           900.00GiB
   Device slack:              0.00B
   Data,RAID1:            709.00GiB
   Metadata,RAID1:         13.00GiB
   System,RAID1:           32.00MiB
   Unallocated:           177.97GiB

At some point I seem to have deleted the device with ID: 2 from the filesystem.

These do have the same UUID: 3487…

lsblk -o PARTUUID,KNAME,FSTYPE,SIZE,UUID|grep -E '3487|UUID'                                                                              ✔  4s  
PARTUUID                             KNAME     FSTYPE   SIZE UUID
3ee1dfe1-19af-4102-945d-90d957d3c199 sda2      btrfs    900G 3487ba3d-1cba-4cdc-a043-c420ebca2aca
7b64fe2b-61d7-474b-9e9b-ea0599578e2d nvme0n1p3 btrfs    900G 3487ba3d-1cba-4cdc-a043-c420ebca2aca

I do have RAID1, but it is the same with RAID0. (You first get RAID0 when adding a device.)

Including all snapshots, subvolumes, and everything. (If already compressed, it stays compressed.) …
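To see which profile the data actually uses after such operations, a hedged sketch:

sudo btrfs filesystem df /
# if needed, profiles can be converted with a balance, e.g.:
sudo btrfs balance start -dconvert=single -mconvert=dup /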

:footprints:


I would do it like this, in a live session:

Dump the partition table:

sudo sfdisk --dump /dev/nvme0n1 > partition_table

Write the exact same on the second nvme:

sudo sfdisk /dev/nvme1n1 < partition_table

Then create a FAT filesystem on /dev/nvme1n1p1:

sudo mkfs.vfat /dev/nvme1n1p1

Mount both:

sudo mount -m /dev/nvme0n1p1 /mnt/efi-old
sudo mount -m /dev/nvme1n1p1 /mnt/efi-new

Copy all files:

sudo cp -R /mnt/efi-old/* /mnt/efi-new

Unmount:

sudo umount /mnt/efi-old
sudo umount /mnt/efi-new

Glue together /dev/nvme0n1p2 and /dev/nvme1n1p2, which are btrfs, as a JBOD:

sudo mount /dev/nvme0n1p2 /mnt/btrfs-root
sudo btrfs device add /dev/nvme1n1p2 /mnt/btrfs-root

Then remove the old one:

sudo btrfs device delete /dev/nvme0n1p2 /mnt/btrfs-root

:notebook: This will copy/move all data 1:1 to /dev/nvme1n1p2 and remove the device from the JBOD. That could take a while.

I assume /dev/nvme0n1p3 was a swap partition:

sudo mkswap /dev/nvme1n1p3

Now you need to adjust the fstab:

sudo nano /mnt/btrfs-root/@/etc/fstab

Check the UUIDs of the EFI partition and the swap partition in the output of sudo blkid and correct them.
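The lines to correct will look roughly like this; a hedged illustration, with placeholder UUIDs:

# /mnt/btrfs-root/@/etc/fstab (illustrative entries only)
UUID=XXXX-XXXX                              /boot/efi  vfat  umask=0077  0 2
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  swap       swap  defaults    0 0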

Now you are done. You “cloned” it, or better “moved” it…

Hope that helps :wink:


Thank you AK and Megavolt for your suggestions.
Mega’s will give me a valid boot scenario out of the box, I guess. I am on the careful side, and I still have some time to think about how I’ll do it.
Part of my thinking is: it is still in a LUKS container, so as long as I don’t open it, I can dd it to an external file and from there dd it to the new disk. Maybe physically detach the old one and see if I can boot from the new one. (fstab and grub entries/config should then be fine untouched as well.) Then, again from a live system, reformat the old one if everything is smooth.
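Roughly like this, as a hedged sketch with a placeholder target path:

# the LUKS container stays closed, so the kernel never sees the btrfs inside
sudo dd if=/dev/nvme0n1p2 of=/mnt/external/ssn-luks.img bs=4M status=progress conv=fsync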
Thank you AK for thinking together with me.

About your quote above: I didn’t get why I’d make one volume out of 2 SSDs. I haven’t looked into RAID, but this is some kind of btrfs glue and not RAID itself, since btrfs can do that by default. I don’t think that’s advantageous for me.

Edit: oops, now I get the quote part above too :stuck_out_tongue:

This will copy/move all data 1:1 to /dev/nvme1n1p2 and remove the device from the JBOD. That could take a while.

Btrfs can do JBOD (RAID0) or RAID1, even RAID10 (as you wish, provided you have the space, i.e. devices).

https://btrfs.readthedocs.io/en/latest/Volume-management.html
There’s some similarity with traditional RAID levels, but this could be confusing to users familiar with the traditional meaning. Due to the similarity, the RAID terminology is widely used in the documentation (of btrfs). See mkfs.btrfs(8) for more details and the exact profile capabilities and constraints.

Wiki Btrfs RAID

Wikipedia says: Btrfs supports RAID 0, RAID 1 and RAID 10 (RAID 5 and 6 are under development).[45][46]


RAID0 != JBOD

JBOD glues disks together and ignores their sizes. It just pretends that all drives are a single disk, so it merely expands the filesystem. LVM does the same, for example.

RAID0 also glues disks together, but they have to be the same size (otherwise you end up with a multi-profile setup), and it reads/writes data on both disks equally; it does not mirror the data like RAID1 does.

So please don’t equate the two.
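In btrfs terms, the distinction maps to the data profile; a hedged sketch with placeholder devices:

# "single" concatenates devices JBOD-style:
sudo mkfs.btrfs -d single /dev/sdx /dev/sdy
# "raid0" stripes data across devices:
sudo mkfs.btrfs -d raid0 /dev/sdx /dev/sdy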


I did the dd from an image, but it would not boot (the same error people got with the Dec ’23 update, where one had to reinstall Grub manually). So… I decided to give @Megavolt’s tutorial a try. Worked out fine so far, but…
I cannot format the actually free 1.5 TB on my new disk. It is not shown as allocated either. The 1.5 TB is shown as free space in gnome-disk-utility, but trying to format it does not work; formatting is restricted to just 2.48 MB.
What can I do about it? I also cannot resize the current partitions by more than these 2.48 MB.
EDIT: I got rid of that problem by setting last-lba: 3907029134 in the partition_table file.
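An equivalent fix, as a hedged sketch: drop the last-lba line from the dump, so sfdisk defaults to the end of the new, larger disk:

grep -v '^last-lba' partition_table | sudo sfdisk /dev/nvme1n1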

But then I followed Megavolt’s tutorial until I encountered errors.

sudo sfdisk /dev/nvme1n1 < partition_table                                                            ✔ 
Checking that no-one is using this disk right now ... OK

Disk /dev/nvme1n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: T-FORCE TM8FPZ002T                      
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FCBD9825-CFB1-4AB3-9862-47BDD762EEE8

Old situation:

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new GPT disklabel (GUID: 77DB9EC8-1882-CA43-87B8-23615F773A6D).
/dev/nvme1n1p1: Created a new partition 1 of type 'EFI System' and of size 300 MiB.
Partition #1 contains a vfat signature.
/dev/nvme1n1p2: Created a new partition 2 of type 'Linux filesystem' and of size 396.6 GiB.
Partition #2 contains a crypto_LUKS signature.
/dev/nvme1n1p3: Created a new partition 3 of type 'Linux filesystem' and of size 68.9 GiB.
Partition #3 contains a crypto_LUKS signature.
/dev/nvme1n1p4: Done.

New situation:
Disklabel type: gpt
Disk identifier: 77DB9EC8-1882-CA43-87B8-23615F773A6D

Device             Start       End   Sectors   Size Type
/dev/nvme1n1p1      4096    618495    614400   300M EFI System
/dev/nvme1n1p2    618496 832262058 831643563 396.6G Linux filesystem
/dev/nvme1n1p3 832262059 976768064 144506006  68.9G Linux filesystem

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
    ~  sudo mkfs.vfat /dev/nvme1n1p1                                                                         ✔ 
mkfs.fat 4.2 (2021-01-31)
    ~  sudo mount -m /dev/nvme0n1p1 /mnt/efi-old                                                             ✔ 
    ~  sudo mount -m /dev/nvme1n1p1 /mnt/efi-new                                                             ✔ 
    ~  sudo cp -R /mnt/efi-old/* /mnt/efi-new                                                                ✔ 
    ~  sudo umount /mnt/efi-old                                                                              ✔ 
    ~  sudo umount /mnt/efi-new                                                                              ✔ 
    ~  sudo mount /dev/nvme0n1p2 /mnt/btrfs-root                                                             ✔ 
mount: /mnt/btrfs-root: unknown filesystem type 'crypto_LUKS'.
       dmesg(1) may have more information after failed mount system call.
    ~  sudo cryptsetup luksOpen /dev/nvme0n1p2 cryptroot                                                  32 ✘ 
Enter passphrase for /dev/nvme0n1p2: Error reading passphrase from terminal.
    ~  sudo cryptsetup luksClose cryptroot                                                          1 ✘  7s  
    ~  sudo cryptsetup luksOpen /dev/nvme0n1p2 cryptroot                                                     ✔ 
Enter passphrase for /dev/nvme0n1p2: 
    ~  sudo mount /dev/mapper/cryptroot /mnt/btrfs-root                                              ✔  14s  
    ~  sudo btrfs device add /dev/nvme1n1p2 /mnt/btrfs-root                                                  ✔ 
ERROR: /dev/nvme1n1p2 appears to contain an existing filesystem (crypto_LUKS)
ERROR: use the -f option to force overwrite of /dev/nvme1n1p2
    ~  sudo btrfs device add /dev/mapper/cryptroot /mnt/btrfs-root                                         1 ✘ 
ERROR: /dev/mapper/cryptroot appears to contain an existing filesystem (btrfs)
ERROR: use the -f option to force overwrite of /dev/mapper/cryptroot

Which of the last two commands should I use?

sudo btrfs device add /dev/nvme1n1p2 /mnt/btrfs-root

or

sudo btrfs device add /dev/mapper/cryptroot /mnt/btrfs-root

You have to add the new partition to the old filesystem, not the other way around. So you mount /dev/mapper/cryptroot at /mnt/btrfs-root and add /dev/nvme1n1p2 to /mnt/btrfs-root.

This looks correct:

Since the partition table was copied 1:1 here, the new partition still carries a crypto_LUKS signature, so yes, you need to overwrite the crypto_LUKS header on the new SSD by adding -f.
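If you would rather not force-overwrite through btrfs, clearing the stale signature first achieves the same; a hedged alternative:

# wipefs -a erases all known filesystem/LUKS signatures from the partition
sudo wipefs -a /dev/nvme1n1p2
sudo btrfs device add /dev/nvme1n1p2 /mnt/btrfs-root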

If you need the data to still be encrypted, then it becomes a bit complicated, and I personally have never done this. You didn’t even mention any encryption.


In the man page it says device.

" add [-Kf] […]
Add device(s) to the filesystem identified by "
https://man7.org/linux/man-pages/man8/btrfs-device.8.html

Unfortunately I don’t know the difference between a mapped device and a device. All LUKS-encrypted disks get decrypted to /dev/mapper/DISK and, in Manjaro, mounted at /run/media/DISK.
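For illustration, the relation usually shows up like this in lsblk; a hedged sketch with placeholder names:

lsblk -o NAME,FSTYPE,MOUNTPOINT
# nvme0n1p2      crypto_LUKS             <- raw partition, the encrypted container
# └─cryptroot    btrfs        /mnt/...   <- decrypted mapping under /dev/mapper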

I’m about to run

sudo btrfs device add -f /dev/nvme1n1p2 /mnt/btrfs-root

But first I want to understand why it does not need to be sudo btrfs device add -f /dev/mapper/cryptroot /mnt/btrfs-root.
/dev/mapper/cryptroot is where I decrypted it to…

OK, alert, where did I (try) to do that?? Sorry, don’t get it.

EDIT2: OK, insomnia…
With “sudo btrfs device add -f /dev/mapper/cryptroot /mnt/btrfs-root” I would have added the old drive’s partition :slight_smile:
Reading is good, reading is fun!

But then the new nvme1n1p2 is not decrypted or mounted yet…
Should I decrypt it before I run the command?
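Something like this, maybe? A hedged, untested sketch in case the new device should stay encrypted (names are placeholders):

sudo cryptsetup luksFormat /dev/nvme1n1p2
sudo cryptsetup open /dev/nvme1n1p2 cryptnew
sudo btrfs device add /dev/mapper/cryptnew /mnt/btrfs-root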