On Creating a Clone of My NVMe Drive with Rescuezilla

I want to create a clone of my existing NVMe drive as a backup using Rescuezilla. I have another NVMe drive with the same capacity and specs as my primary drive. I have a few questions before I begin.

As I understand the process, this should be as easy as choosing the “Clone” option and properly specifying the “source” and “destination” drives. Is this correct?

When the cloning operation is complete, my understanding is that the “destination” drive should have the same partition layout and data content, including the same UUID. Is this correct?

If so, then before rebooting I should make sure the duplicated “destination” drive is fully disconnected, so the duplicate UUIDs don’t cause any confusion at boot.

Finally, suppose my primary drive explodes and I need to use the backup drive. Can I simply remove the old drive, install the backup drive, and proceed?

My current drive information is as follows (I use BTRFS as the file system):

My sudo fdisk -l output:

Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WD_BLACK SN770 1TB                      
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 15860E88-442C-4E12-A980-4554C2486B2F

Device          Start        End    Sectors   Size Type
/dev/nvme0n1p1   4096     618495     614400   300M EFI System
/dev/nvme0n1p2 618496 1953520064 1952901569 931.2G Linux filesystem


Disk /dev/mapper/luks-2a9099ad-5c2d-43f8-bd02-80c790d1cdfc: 931.21 GiB, 999883506176 bytes, 1952897473 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop0: 4 KiB, 4096 bytes, 8 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 164.82 MiB, 172830720 bytes, 337560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 55.36 MiB, 58052608 bytes, 113384 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop3: 91.69 MiB, 96141312 bytes, 187776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop4: 44.3 MiB, 46448640 bytes, 90720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop5: 44.44 MiB, 46596096 bytes, 91008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

My sudo blkid output:

/dev/loop1: BLOCK_SIZE="131072" TYPE="squashfs"
/dev/nvme0n1p1: UUID="65C6-DD33" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="fe0fc278-7b72-46e2-bbcb-e139a9df2fd6"
/dev/nvme0n1p2: UUID="2a9099ad-5c2d-43f8-bd02-80c790d1cdfc" TYPE="crypto_LUKS" PARTLABEL="root" PARTUUID="cff11479-a38a-4ccb-9c1a-f0908bf38d89"
/dev/loop4: BLOCK_SIZE="131072" TYPE="squashfs"
/dev/loop2: BLOCK_SIZE="131072" TYPE="squashfs"
/dev/loop0: BLOCK_SIZE="131072" TYPE="squashfs"
/dev/mapper/luks-2a9099ad-5c2d-43f8-bd02-80c790d1cdfc: UUID="4a0cc8d5-31af-455f-ae55-faadde0e3603" UUID_SUB="f30009e0-c7db-4725-a6c2-b1f4c9f8b200" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/loop5: BLOCK_SIZE="131072" TYPE="squashfs"
/dev/loop3: BLOCK_SIZE="131072" TYPE="squashfs"
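Once the clone finishes, my plan for sanity-checking it would be something like the following dry-run sketch: open the cloned LUKS container read-only and mount it. It only prints the commands it would run; the destination device name and mapper name are guesses on my part.

```shell
#!/bin/sh
# Dry-run sketch: verify a cloned LUKS+BTRFS partition without writing to it.
# Set RUN=1 only after confirming the device names on your own system.
run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "would run: $*"; fi; }

CLONE=/dev/nvme1n1p2   # assumed: the cloned partition on the second drive
MAPPER=verify_clone    # arbitrary name for the opened container

run cryptsetup open --readonly "$CLONE" "$MAPPER"   # prompts for passphrase
run mount -o ro "/dev/mapper/$MAPPER" /mnt          # read-only BTRFS mount
run ls /mnt                                         # eyeball the contents
run umount /mnt
run cryptsetup close "$MAPPER"
```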

Be sure to do all this while running a current kernel (6.12 LTS)!

If you want to be sure, you have to try it out. 🙂

The worst that can happen is that your BIOS/UEFI notices that it is not the same drive. In that worst case, you have to go into the firmware setup and set the boot entry for the new drive as the default.
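If the firmware does balk, the boot order can also be fixed from a running system with efibootmgr. A dry-run sketch that only prints the commands it would run; the entry numbers 0003/0000 are placeholders for whatever your own listing shows:

```shell
#!/bin/sh
# Dry-run sketch: promote the new drive's UEFI boot entry to the front.
# Entry numbers are placeholders; read the real ones from `sudo efibootmgr`.
run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "would run: $*"; fi; }

run efibootmgr                         # list entries and current BootOrder
run efibootmgr --bootorder 0003,0000   # put hypothetical Boot0003 first
```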

Theoretically yes, but you should take a look at the Rescuezilla manual, especially what it says about cloning BTRFS.

Good advice. I found this on their GitHub:

Rescuezilla v2.5 (2024-05-12)

  • Adds release based on Ubuntu 24.04 (Noble), Ubuntu 23.10 (Mantic) and Ubuntu 23.04 (Lunar) for best support of new hardware
  • Upgrades to latest partclone release v0.3.27 (released October 2023) from v0.3.20 (which was released in April 2022)
    • This should improve issues with BTRFS filesystems, as it supports BTRFS v6.3.3, rather than v5.11 (#393)

On pure intuition, and from the little I know of BTRFS, I’d rather clone the encrypted partition “as is” than open it and clone its contents.
If I did want to copy the contents, I’d rather just use the tools that BTRFS itself provides to copy/duplicate the data.
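For what “the tools BTRFS itself provides” could look like, here is a dry-run sketch using send/receive instead of a block-level clone. The mount points and snapshot name are assumptions; it only prints the commands until you set RUN=1 after adapting them:

```shell
#!/bin/sh
# Dry-run sketch: BTRFS-native copy via send/receive.
run() { if [ "${RUN:-0}" = 1 ]; then sh -c "$*"; else echo "would run: $*"; fi; }

SRC=/mnt/old_root    # mounted subvolume on the (opened) source filesystem
DST=/mnt/new_root    # mounted, freshly formatted BTRFS on the destination

run "btrfs subvolume snapshot -r $SRC $SRC/migrate_snap"   # send needs a read-only snapshot
run "btrfs send $SRC/migrate_snap | btrfs receive $DST"    # stream it to the new filesystem
```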

… as @andreas85 said:

If you want to be sure, you have to try it out


the encrypted partition.

My main Manjaro install doesn’t have an encrypted environment, so I’m a bit interested in this case.

I have replaced my main SSD/NVMe (multi-boot) about once every year or two, and I have never had a cloning failure in a normal environment. I am a bit of a worrier, so I back up the entire NVMe disk to an HDD in advance.

If there was one problem in my case, it was that even the 1 TB drive felt a little small in capacity.

His encrypted partition contains a BTRFS file system.
If your BTRFS is not in a LUKS container, you should probably use the native BTRFS tools to duplicate the contents, instead of cloning it with dd or however Clonezilla does it behind the scenes.
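For the yearly drive-swap scenario described above (retiring the old disk rather than keeping it as a backup), BTRFS also has built-in device replacement. Note that `btrfs replace` migrates the data to the new device rather than leaving two copies, so it is a move, not a backup. A dry-run sketch with example device paths:

```shell
#!/bin/sh
# Dry-run sketch: BTRFS built-in device replacement on a mounted filesystem.
# This migrates data to the new device; no second copy remains afterwards.
# Device paths are examples; set RUN=1 only after verifying them.
run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "would run: $*"; fi; }

run btrfs replace start /dev/nvme0n1p2 /dev/nvme1n1p2 /   # old dev, new dev, mountpoint
run btrfs replace status /                                # watch progress
```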
