Still dealing with the after-effects of a btrfs Timeshift restore

About a month ago I used the Timeshift restore feature. I was hesitant to use it, but I had a crude backup in case anything broke. The restore yielded very unexpected results, at least for me.

The TL;DR is that I’m not booting from the top-level btrfs subvolume anymore. I thought everything was fixed, since GRUB was booting with just the mount options -o subvol=@ or -o subvol=@home. But at the lower btrfs level, I’m currently booting and running on subvolid 455, which has a parent subvolume.
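A quick way to see the same thing on any system is to check what / is actually mounted from (the example output line here is illustrative; your device path will differ):

findmnt -no SOURCE,OPTIONS /
# e.g. /dev/vda2[/@] rw,relatime,compress=zstd:3,subvolid=455,subvol=/@
btrfs sub show /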

From the post I made then:

So the restore basically set everything to boot a read/write snapshot. @Zesko was amazing in helping me get everything back to mounting my btrfs drive and booting to my @ subvolume (and not the timeshift-btrfs/snapshots/####-##-##_##-##-##/@ volume/snapshot).

Here is some background since the previous post…

I’ve been spending a lot of time getting much more familiar with btrfs, and have been working on a backup solution: one where I can live boot and restore to a point in time from a different physical volume. So I have been researching all the utilities that could help with this procedure. I knew I wanted to take advantage of the btrfs send/receive functionality, so btrbk and Snapper are still at the top of the list. But it also made me realise I need to do this all manually before jumping into any one of them.

So I did a bunch of playing around in QEMU/KVM with a Manjaro btrfs install. I was able to (a rough command sketch follows below):

  1. Backup the GPT data with sgdisk
  2. Backup the EFI partition
  3. Make a Timeshift snapshot, set it to read only
  4. Btrfs send the snapshot to another btrfs virtual disk
  5. Make a bunch of changes
  6. Make another Timeshift snapshot, set it to read only
  7. Btrfs send the incremental data to the separate disk to another subvolume

And from this I was able to wipe everything off the boot disk, boot a Manjaro live image, and restore to the latest snapshot in just a few steps.
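To make that concrete, here is a hedged sketch of the commands involved. The device names, mount points, and snapshot dates are illustrative assumptions (Timeshift normally keeps its btrfs snapshots under /run/timeshift/backup while it is running), not an exact transcript:

# Boot disk is /dev/vda; a second btrfs filesystem is mounted at /mnt/backup
SNAPS=/run/timeshift/backup/timeshift-btrfs/snapshots

# 1. Back up the GPT data
sudo sgdisk --backup=gpt-table.bin /dev/vda

# 2. Back up the EFI partition
sudo dd if=/dev/vda1 of=efi-part.img bs=1M

# 3. Take a Timeshift snapshot, then flip it to read-only
sudo btrfs property set -ts "$SNAPS/2022-10-15_23-44-29/@" ro true

# 4. Full send to the other disk
sudo mkdir /mnt/backup/2022-10-15_23-44-29
sudo btrfs send "$SNAPS/2022-10-15_23-44-29/@" | sudo btrfs receive /mnt/backup/2022-10-15_23-44-29

# 5./6. Make changes, take another snapshot, flip it to read-only
sudo btrfs property set -ts "$SNAPS/2022-10-19_07-30-18/@" ro true

# 7. Incremental send: only the delta between the two snapshots travels
sudo mkdir /mnt/backup/2022-10-19_07-30-18
sudo btrfs send -p "$SNAPS/2022-10-15_23-44-29/@" "$SNAPS/2022-10-19_07-30-18/@" | sudo btrfs receive /mnt/backup/2022-10-19_07-30-18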

It was when I went back to mucking around on my main Manjaro PC, to at least get a manual backup in place, that I first noticed something wasn’t right: I had started trying to find where all the exclusive space was among my subvolumes. At first I was thinking, does it even matter that it’s not subvolid 5 anymore, and that it has a parent volume? Well, one of the utilities I was experimenting with (btdu) won’t even work unless it’s pointed at the top-level subvolume.

From its documentation:

Note that the indicated path must be to the top-level subvolume (otherwise btdu will be unable to open other subvolumes for inode resolution). If in doubt, mount the filesystem to a new mountpoint with -o subvol=/,subvolid=5
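Following that advice, mounting the real top level looks something like this (the device and mountpoint are assumptions; substitute your root partition):

sudo mkdir -p /mnt/toplevel
sudo mount -o subvol=/,subvolid=5 /dev/vda2 /mnt/toplevel
sudo btdu /mnt/toplevel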

Do I restore my backup to a proper top-level subvolume? Or is there something simpler that I’m missing? I mean, btrfs is versatile. Everything is still technically working fine, but I know this is going to keep bugging me.

If you have made it this far, here is the output of some commands to give you an idea of what state my PC is in now.

btrfs sub show /
@
        Name:                   @
        UUID:                   627dcc2f-c60c-e341-a12c-97e891e963d7
        Parent UUID:            c2346068-2ffe-5748-b5b1-c73c276f5aa3
        Received UUID:          -
        Creation time:          2022-10-19 07:30:18 -0600
        Subvolume ID:           455
        Generation:             410580
        Gen at creation:        357911
        Parent ID:              5
        Top level ID:           5
        Flags:                  -
        Send transid:           0
        Send time:              2022-10-19 07:30:18 -0600
        Receive transid:        0
        Receive time:           -

At the time of using Timeshift restore, it created the 2022-10-19_07-30-18 snapshot within the 2022-10-15_23-44-29 snapshot (the Timeshift comment in info.json says “Before restoring '2022-10-15_23-44-29'”).

This 2022-10-15_23-44-29 snapshot (UUID c2346068-2ffe-5748-b5b1-c73c276f5aa3) is the parent volume of what I am currently booting from.

find . -maxdepth 2 -name \@ -exec btrfs sub show {} \; | grep -E 'snapshots|UUID' | grep -v Received

timeshift-btrfs/snapshots/2022-10-06_20-09-10/@
        UUID:                   5284573c-0e81-1c49-8101-7f14dd3d2c54
        Parent UUID:            00518fca-f877-bf4f-9ccf-7ca9333ade24
###
### (deleted redundant snapshots)
###
timeshift-btrfs/snapshots/2022-10-11_20-14-24/@
        UUID:                   a8d457df-9c47-a04a-9f62-af719b4ce73f
        Parent UUID:            00518fca-f877-bf4f-9ccf-7ca9333ade24
timeshift-btrfs/snapshots/2022-10-15_23-44-29/@
        UUID:                   c2346068-2ffe-5748-b5b1-c73c276f5aa3
        Parent UUID:            00518fca-f877-bf4f-9ccf-7ca9333ade24
        Snapshots:
                                timeshift-btrfs/snapshots/2022-10-19_07-30-18/@
timeshift-btrfs/snapshots/2022-10-19_07-30-18/@
        UUID:                   5b783c28-dd13-ec4e-a571-969cb03439b1
        Parent UUID:            c2346068-2ffe-5748-b5b1-c73c276f5aa3
###
### (deleted redundant snapshots)
###
timeshift-btrfs/snapshots/2022-11-18_11-28-40/@
        UUID:                   914e686e-8531-7d49-8535-e8545a53a17c
        Parent UUID:            627dcc2f-c60c-e341-a12c-97e891e963d7

Just some notes:

  1. Timeshift is not intended to be a backup tool. It is a system recovery tool, like “Windows Recovery”. By design, it is not made to restore a backup of a previous installation onto a new installation. Your use case here with Timeshift is uncommon.
  2. Usually it is possible (but not intended) to send a Timeshift (btrfs) snapshot to a new partition and call it @, plus all the other subvolumes that exist in fstab. Then chroot into it and reinstall/update GRUB.
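A hedged sketch of that procedure, assuming the new btrfs partition is /dev/sdb2, the EFI partition is /dev/sdb1, and the read-only snapshots to transplant are named @ and @home (every path here is illustrative; adjust to your own layout):

# Receive the snapshots onto the new partition
sudo mount -o subvolid=5 /dev/sdb2 /mnt
sudo btrfs send /path/to/snapshot/@     | sudo btrfs receive /mnt
sudo btrfs send /path/to/snapshot/@home | sudo btrfs receive /mnt

# Received subvolumes arrive read-only; make them writable
sudo btrfs property set -ts /mnt/@ ro false
sudo btrfs property set -ts /mnt/@home ro false

# Remount with the normal layout and chroot in
sudo umount /mnt
sudo mount -o subvol=@ /dev/sdb2 /mnt
sudo mount -o subvol=@home /dev/sdb2 /mnt/home
sudo mount /dev/sdb1 /mnt/boot/efi
sudo manjaro-chroot /mnt /bin/bash

# Inside the chroot: fix the UUIDs in /etc/fstab if the partition
# changed, then reinstall/update GRUB
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Manjaro
update-grub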

I realise that. A few months ago, I was pretty new to using btrfs, so I figured I’d give it a go this time around. I wrongly assumed that a Timeshift restore would work similarly to a Windows System Restore, or to reverting to a previous state with a ZFS snapshot.

Timeshift, GUI or command line, has three operations that make any changes: create, delete, restore. You would think the last of those would be a common use case.
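For reference, those three operations map directly onto the CLI (the snapshot name is a placeholder):

sudo timeshift --create --comments "before experimenting"
sudo timeshift --delete --snapshot '2022-10-15_23-44-29'
sudo timeshift --restore --snapshot '2022-10-15_23-44-29'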

The backup portion was a completely separate thing I was working on migrating to while I was learning more about btrfs, as this was my first time using it as a boot volume.

In the previous thread above, I was able to do that with some help, without the need to move any data around. But I realised it just kind of hid the mess it made after the fact.

I know what I can do to get everything back to normal, which is essentially blowing away my entire boot drive. I guess I’m just failing to find the reasoning behind why the restore feature of Timeshift works the way it does (with btrfs). I’m starting to think Timeshift’s btrfs support was a feature implemented well after it was designed, with only rsync in mind. I’ve spent hundreds of hours learning more about btrfs over the last month, and actually enjoyed most of it.

I thought I would just reach out here first, just in case I’m just missing something.

Yes, it has a parent volume. If any subvolume has a top-level ID greater than 5, it is called a sub-subvolume or snapshot (in the Snapper default layout, unlike the Timeshift layout).
I think all Timeshift snapshots have the same top-level ID 5, because they use the btrfs flat layout.
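That is easy to confirm: btrfs subvolume list prints a “top level” column with each subvolume’s parent ID (the output lines here are illustrative, based on the numbers above):

sudo btrfs subvolume list /
# ID 455 gen 410580 top level 5 path @
# ID 456 gen 410579 top level 5 path @home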

Your system should be running at top-level ID 5 by default.

When restoring a snapshot with a different top-level ID, btrfs will “copy” (relink) it to a new subvolume with top-level ID 5 by default.
See: How to restore a snapshot via CLI without using GUI. Timeshift does the same.

# btrfs subvolume snapshot {@Your_snapshot}  {@Your_subvolume}

Now @Your_subvolume is automatically at top-level ID 5. You don’t need to think much about it.
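Applied to your situation, a manual restore along those lines might look like this sketch (the device name and snapshot date are assumptions; run it from a live environment if you are replacing the running root):

# Mount the real top level of the filesystem
sudo mount -o subvolid=5 /dev/vda2 /mnt

# Move the current root subvolume out of the way
sudo mv /mnt/@ /mnt/@.old

# Re-create @ as a writable snapshot of the Timeshift snapshot;
# the new @ sits directly under top-level ID 5
sudo btrfs subvolume snapshot /mnt/timeshift-btrfs/snapshots/2022-10-19_07-30-18/@ /mnt/@

# Once the system boots fine again, delete the old root
sudo btrfs subvolume delete /mnt/@.old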


Note:
For Btrfs: a snapshot is a subvolume.
For ZFS: a snapshot is not a dataset, but it is in the dataset.

A btrfs snapshot is more flexible than a ZFS snapshot. You can move/switch/replace btrfs snapshots/subvolumes as they are all in the B-tree. You can restore any old snapshot without destroying newer snapshots.

If you know git well, from my point of view the comparison between git and zfs/btrfs would be:

  • Btrfs snapshot/subvolume is like git branch behavior.
  • ZFS snapshot is like git commit behavior.
  • ZFS dataset is like git branch behavior.

A snapshot is nearly the same as a subvolume.

A readonly snapshot (created with btrfs subvolume snapshot -r)

  • can not be altered
  • can not be moved
  • can be used as source for btrfs-send

A writable snapshot is indeed the same as a subvolume

  • can be altered (like any subvolume)
  • can be moved inside the same btrfs-volume
  • can not be used as source of btrfs-send
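A small example of the difference (the paths are assumptions):

# Writable snapshot: behaves like any subvolume, but cannot feed btrfs-send
sudo btrfs subvolume snapshot /mnt/@ /mnt/@_rw_copy

# Read-only snapshot: a valid source for btrfs-send
sudo btrfs subvolume snapshot -r /mnt/@ /mnt/@_ro_copy
sudo btrfs send /mnt/@_ro_copy | sudo btrfs receive /mnt/backup

# The ro property can be flipped after creation
sudo btrfs property set -ts /mnt/@_rw_copy ro true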
Only by convention: most subvolumes start with an @ in the name, and most snapshots do not.

https://btrfs.readthedocs.io/en/latest/Subvolumes.html

https://btrfs.readthedocs.io/en/latest/btrfs-subvolume.html#subvolume-and-snapshot

👣