BTRFS system disk won't work with TimeShift

In the setup wizard, it says "Select BTRFS system disk with root subvolume (@)". I select sdb3, which is a 1 TB btrfs partition, newly formatted, on an external drive, and I hit Next. Then I get the error message "Selected snapshot device is not a system disk. Select BTRFS system disk with root subvolume (@)". How do I resolve this?

BTRFS - OS installed on BTRFS volumes (with or without LUKS)

    Only Ubuntu-type layouts with @ and @home subvolumes are supported
    @ and @home subvolumes may be on same or different BTRFS volumes
    @ may be on BTRFS volume and /home may be mounted on non-BTRFS partition
    Other layouts are not supported

Are your subvolumes mounted as required?

sudo btrfs subvolume list /
sudo btrfs subvolume get-default /

Timeshift supports BTRFS snapshots, but not backups. A snapshot is taken from the selected BTRFS partition into that same partition, not across different partitions. Your setup is the second case: I think your system is on a different BTRFS partition than your 1 TB BTRFS partition on sdb3. If you want to back up BTRFS snapshots, you have to use the combination of btrfs send | btrfs receive. BTRFS itself contains everything needed to take snapshots and back them up to external storage, but Timeshift does not support that. Strictly speaking: Timeshift doesn't support BTRFS backups, only snapshots on the same partition, and that's not a real backup. And second: to my knowledge the btrfs snapshots taken by Timeshift are writable snapshots, which is a really bad idea for a snapshot.
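The btrfs send | btrfs receive combination can be sketched like this. It is a dry run that only prints the commands; the snapshot names and the /media/Backups mount point are assumptions, and on a real system the printed commands must run as root on btrfs:

```shell
# Dry-run helper: print each command instead of executing it.
# Replace with `run() { "$@"; }` to actually execute (as root).
run() { echo "+ $*"; }

# 1. Take a read-only snapshot of the running root subvolume.
run btrfs subvolume snapshot -r / /.snapshots/root-today

# 2. Full backup: stream the whole snapshot to the backup volume.
run sh -c 'btrfs send /.snapshots/root-today | btrfs receive /media/Backups'

# 3. Incremental backup: with a common parent snapshot on both sides,
#    only the changed extents travel over the pipe (or an SSH tunnel).
run sh -c 'btrfs send -p /.snapshots/root-yesterday /.snapshots/root-today | btrfs receive /media/Backups'
```

This is exactly the mechanism that the snap-sync tool below automates.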
Thus I suggest using another toolchain for btrfs snapshots and backups. Take a look at the following combination:

  1. Snapper + Snapper GUI to take ro-snapshots, with automated cleanup algorithms
  2. snap-pac to automatically make snapshots with Snapper before and after the installation of packages from the repository and the AUR
  3. grub-btrfs to automatically update GRUB with boot menu entries to boot into Snapper ro-snapshots
  4. snap-sync to automatically make differential backups of Snapper snapshots to external btrfs-formatted storage, through SSH tunnels
  5. a correct BTRFS setup that supports UEFI, root snapshots that include the kernel and library (/var/lib) state, and bootable ro-snapshots of the entire system into the DE/GUI

I use the above combination on 5 machines running Manjaro. One of them has a USB3 SATA SSD attached which holds the btrfs backup partition. Daily differential full-system backups through SSH tunnels take about 5-10 seconds, and that's the advantage of btrfs, if used correctly.


About point 5, my setup is:

  1. Install UEFI GRUB to the /boot/efi mount point. That way everything in /boot, except /boot/efi, stays on your @ = "root" subvolume. Mounting the UEFI partition to /boot, as often suggested on the web, is wrong here.
  2. Use a flat btrfs subvolume layout for all system-relevant subvolumes. That means @ → '/', @home → '/home' and @var → '/var' have btrfs top-level id = 5 as their parent, i.e. the btrfs filesystem root is the parent.
  3. Use a hierarchical subvolume layout for your snapshot subvolumes (I personally prefer it this way).
  4. Configure Snapper with two configurations: one, "root", for the @ = '/' root system, and another, "home", for @home = '/home'. "root" uses only "number" snapshots and cleanups, which means snapshots are taken only by snap-pac on installation/removal of software packages, and manually. "home" uses the same, plus additional timeline snapshots. On my systems: one per hour for the last 12 hours, +1 per day for the last 7 days, +1 per week for the last 4 weeks, +1 per month for the last 12 months, +1 per year for the last 2 years. Not much space is needed for this under normal workloads.

About 2.

// btrfs su li -p /btrfs
ID 885 gen 21218 top level 5 path @
ID 884 gen 21218 top level 5 path @home
ID 260 gen 21218 top level 5 path @var
// @snapshots -> @snapshots/root + @snapshots/home hold the Snapper snapshots
ID 736 gen 17135 top level 5 path @snapshots
ID 737 gen 21170 top level 736 path @snapshots/root
ID 738 gen 21219 top level 736 path @snapshots/home


UUID=650E-2DAD /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro 0 2

# Swap
UUID=10b19077-7441-4ebe-bd44-b7f625e6410a       none                    swap    defaults                                                        0 0

# System
UUID=7969b227-dc81-4540-979e-cf037bc4cc8e       /                       btrfs   rw,noatime,compress-force=zstd:5,subvol=@                       0 0
UUID=7969b227-dc81-4540-979e-cf037bc4cc8e       /btrfs                  btrfs   rw,noatime,compress-force=zstd:5,subvolid=5                     0 0
UUID=7969b227-dc81-4540-979e-cf037bc4cc8e       /home                   btrfs   rw,noatime,compress-force=zstd:5,subvol=@home                   0 0
UUID=7969b227-dc81-4540-979e-cf037bc4cc8e       /var                    btrfs   rw,noatime,nodatacow,subvol=@var                                0 0
UUID=7969b227-dc81-4540-979e-cf037bc4cc8e       /.snapshots             btrfs   rw,noatime,compress-force=zstd:5,subvol=@snapshots/root         0 0
UUID=7969b227-dc81-4540-979e-cf037bc4cc8e       /home/.snapshots        btrfs   rw,noatime,compress-force=zstd:5,subvol=@snapshots/home         0 0

# /var/lib to /usr/var/lib on subvol=@
/usr/var/lib                                    /var/lib                none    defaults,bind                                                   0 0

# Backups
UUID=eeb5d15d-2095-4057-9c54-df80215ca2db      /media/Backups          btrfs   rw,noatime,compress-force=zstd:5,subvol=@backups                0 0

The bind mount of /var/lib is a special case. We want everything in /var/lib to be contained in snapshots of "root", but it is excluded from them because we created a @var subvolume. @var is a separate subvolume because most things in /var we don't want snapshotted, and we want the +C attribute set on everything in /var. Thus we have to split /var/lib off from the rest of /var. For this I use a bind mount of /usr/var/lib to /var/lib and copy the content of the current /var/lib to /usr/var/lib. We need this because everything in /var/lib can have, and does have, dependencies on our root system files. When we later want to boot into older read-only snapshots, the kernel in /boot, the libraries in /var/lib (especially the configuration of packages installed with pacman/pamac) and the root system must be consistent and represent a fully working system, as a ro-snapshot.
On the other side, @var is now separated and we take no snapshots of it. Thus if we boot into a ro-snapshot, this (writable) @var contains the actual last state of the system, e.g. log files, pacman/pamac caches and so on. On a rollback of our system, we as administrator still see the latest log files for repair. And, by the way, we can now boot a read-only snapshot into the DE/GUI and not only to the CLI, because /var stays writable as the DE needs it. We are then running a read-only operating system.

This setup is easy and minimal, with the most usable feature set. The only problematic trap is the correct setup of Snapper itself. Snapper creates its subvolumes in its own way, which we don't want. The right steps to set up Snapper are:

  1. Create your btrfs layout, except the subvolumes @snapshots, @snapshots/root and @snapshots/home. Don't create the mount points /.snapshots and /home/.snapshots.
  2. Install Snapper and create configurations for "root" → '/' and "home" → '/home'.
  3. Delete the subvolumes /.snapshots and /home/.snapshots that Snapper created, along with the folders it created.
  4. Create the mount point folders /.snapshots and /home/.snapshots.
  5. Create the btrfs subvolume @snapshots with parent = top level id 5, plus @snapshots/root and @snapshots/home.
  6. Mount @snapshots/root → /.snapshots and @snapshots/home → /home/.snapshots.
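Steps 3-6 can be sketched roughly as follows. It is a dry run that only prints the commands; it assumes the volume root (subvolid 5) is mounted at /btrfs as in the fstab above, and /dev/sdXn is a placeholder for your actual device:

```shell
# Dry-run helper: print the commands instead of executing them.
# Replace with `run() { "$@"; }` to actually execute (as root).
run() { echo "+ $*"; }

# Step 3: remove the nested .snapshots subvolumes Snapper created
run btrfs subvolume delete /.snapshots
run btrfs subvolume delete /home/.snapshots

# Step 4: plain directories as mount points
run mkdir /.snapshots /home/.snapshots

# Step 5: flat snapshot subvolumes, created via the volume root at /btrfs
run btrfs subvolume create /btrfs/@snapshots
run btrfs subvolume create /btrfs/@snapshots/root
run btrfs subvolume create /btrfs/@snapshots/home

# Step 6: mount them (add matching fstab entries to make this permanent)
run mount -o subvol=@snapshots/root /dev/sdXn /.snapshots
run mount -o subvol=@snapshots/home /dev/sdXn /home/.snapshots
```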

My backup btrfs partition contains a subvolume @backups. I install snap-sync and start it on the host machine (the host has this @backups subvolume mounted at /media/Backups, or I use an SSH tunnel to a server machine with @backups mounted). Follow snap-sync, and when it asks you to specify a relative path for the "root" and/or "home" configuration on /media/Backups = @backups, enter your hostname. After that, every backup made with snap-sync is stored in subvolume @backups/($hostname)/root/#/snapshot or @backups/($hostname)/home/#/snapshot. This way you can back up different machines to the same backup storage and keep a clean layout.
I use a separate @backups subvolume on this external drive because, in parallel to @backups, I have installed a bootable Manjaro system on the drive with the same btrfs setup as above. That means the backup drive uses exactly the same setup as any of my machines, except that it additionally contains a @backups subvolume. So I can attach this USB drive to any of my machines, boot this backup system, and roll back/restore all backed-up machines. You have to think about which errors can occur in the future and how you would repair them; that is the goal of all this hard work :wink:

  1. Easy small errors, like a deleted document in your home folder. You can open a file manager in the DE, directly access the latest snapshots taken, and copy the older file back. Older documents you can access directly from the backups you made.
  2. Your system is broken by software updates. You reboot your machine, boot an older read-only snapshot via GRUB into a fully running DE/GUI, and roll your system back to that state.
  3. Your system partition or system hard drive is destroyed. You attach your external USB hard drive, boot the installed Manjaro recovery system with the included @backups snapshots of your machine, and restore your system.


Remark: install Snapper-GUI to get a GUI for Snapper comparable to Timeshift's. For snap-sync, create a small bash script at /etc/cron.daily/snap-sync to get automatic daily backups. If you use snap-sync over an SSH tunnel, you currently need to set up SSH with key files and root access. The newest snap-sync release on Git avoids root SSH access; please wait until snap-sync is updated in the AUR, which should happen in the next weeks, I think. I use a separate partition for swap. You can use swap files on btrfs, but if you don't use full-system encryption I think it is really easier to use a normal swap partition.
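A minimal sketch of such a cron script, wrapped in a function here so it can be tested; as /etc/cron.daily/snap-sync the body can stand alone. The -c/--config, -u/--UUID and -n/--noconfirm flags are my assumptions about snap-sync's interface (check `snap-sync --help` on your version), and the UUID is the backup volume from the fstab above:

```shell
#!/bin/bash
# Sketch of /etc/cron.daily/snap-sync: non-interactive daily backups
# of both snapper configs ("root" and "home") to the backup volume.
daily_backup() {
    # Skip quietly when the tool or the backup volume is unavailable,
    # so the cron job never hangs waiting for input or spams errors.
    command -v snap-sync >/dev/null || { echo "snap-sync not installed, skipping"; return 0; }
    mountpoint -q /media/Backups    || { echo "backup volume not mounted, skipping"; return 0; }

    for cfg in root home; do
        snap-sync -c "$cfg" -u eeb5d15d-2095-4057-9c54-df80215ca2db -n
    done
}
daily_backup
```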


Wow, the level of knowledge here is amazing, but I'm just an RE broker and a mortgage lender who has tinkered with Linux for years and finally made the leap from Apple. I'm dual-booting on my MacBook Pro, and except for Bluetooth not connecting to my audio device, things seem to be pretty good right now. I've learned through the school of hard knocks what to do and what not to do, but I'm in no way a programmer, which is the level of information you all are providing. It's just way above my pay grade, and what you've given me would be impossible for me to implement on my own. I think I'll try the recommended Snapper-GUI next week and the rsync style of backup in Timeshift.

Thanks a lot for your detailed explanation, very useful. Couple questions:

  1. Why the steps 3-6 of deleting the btrfs snapshot subvolumes and manually recreating them?
  2. Do you know if it's also possible to do this with systemd-boot?
  3. Does my external backup drive have to be btrfs-formatted? I'd like to use a cloud service I can access with WebDAV or SFTP.

You understood it the other way around. Timeshift's btrfs snapshots require the OS to be installed on a btrfs file system; only then can it benefit from btrfs. That is what this message is telling you.

You can choose an empty disk or partition with btrfs, but Timeshift will only store files there; it won't make any snapshots of the storage disk, so the btrfs feature won't be utilized.

About 1.
On creation of new configurations snapper want to create his own layout of subvolumes to store his snapshots. Assume we have for our root system created a subvolume named @home and mounted on ‘/home’. Then most times the parent of this new subvolume is our BTRFS root-filesystem = FS-ROOT, eg. Top-Level-ID=5. Snapper create on his own a new subvolume .snapshots at FS-ROOT/@home/.snapshots. Thats called a nested subvolume because his parent is our @home subvolume. Such btrfs layout is called a nested or hirachical layout. We can do this but in common sense it should’nt the prefered way because it does complicate some things, not only the understanding from common users. It would be better to use the so called flat btrfs layout. On this type any subvolume have his parent set to FS-ROOT = Top-Level-ID=5. Now a @snapshot subvolume to store our snapshots made, as example from @home would not nested into @home, instead into @snapshots outside of subvolume @home. This layout make it more easier to rollback later and is’nt dependend from a software rollback like snapper. To rollback by hand you have basicaly only to delete the damaged subvolume and make a snapshot of a snapshot as new subvolume with same name. With nested subvolumes you destroy the dependency of the snapshots taken, or you have as additiional step of rollback move any snapshot to the new subvolume we have rollback to.

But Snapper wants its own style and doesn't accept anything else. Suppose you have already created a subvolume /home/.snapshots and want to create a Snapper config for /home: Snapper raises an error. Suppose you had created a config "home" for /home, changed the layout, and now want to delete this config with Snapper: Snapper raises an error again. Snapper wants its own btrfs style, otherwise it won't work for you; it raises errors.

Thus, to get Snapper working, we have to use the right timing for when and how we configure the btrfs layout we need:

  1. Let Snapper create its config and subvolumes.
  2. Delete its nested subvolumes.
  3. Create our own subvolumes.
  4. Create the mount points, so Snapper thinks everything is as it left it.

About 2.
No, to my knowledge that is not possible. I think only GRUB2 is capable of booting from a BTRFS-formatted partition. The problem is the following: we mount our EFI FAT32 partition to /boot/efi. Everything in /boot except this folder stays on our root-system subvolume @ mounted at '/'. Thus if we make a snapshot of '/', we include everything in /boot, i.e. our installed initramfs images and kernels. Now we want our bootloader to be able to boot that kernel/initramfs, so the bootloader must be able to read a BTRFS filesystem. And the bootloader must be configurable to boot ro-snapshots. That last point isn't strictly needed, but if you want the full potential of btrfs, you need a bootloader that can boot your ro-snapshots with the snapshotted kernels/initramfs.
Each piece of this btrfs setup (snapshots; auto-generation of GRUB2 boot entries for the snapshots with grub-btrfs; auto-generation of snapshots before/after installation of new packages from the repository/AUR with snap-pac; booting ro-snapshots with GRUB2 into the desktop environment, where we then run a read-only operating system with a DE; and finally auto-generation of differential backups to external btrfs-formatted drives with snap-sync) is a multiplier. Together they give us a btrfs system that is nicely stable and, if not, easily recoverable from any failure.
The dependencies are: snap-pac, snap-sync and grub-btrfs need snapper, grub and btrfs.

About 3.
No, you can even make btrfs backups the way rsync does. But you should really consider using btrfs here, because otherwise you lose one of the multipliers above and don't reach the full advantage. As an example, my daily full-system backup (/ and /home, except /var) from one machine over an SSH tunnel over WLAN to an attached USB-SATA3 SSD needs 5-15 seconds on average. The last big Manjaro update, with 1.5 GiB of changed system files, needed about 10 minutes in the background. Such time savings can't be reached with rsync or other backup systems. Another advantage is how such backups are made: first, a second ro-snapshot of the running system is made, effectively instantly. You can then use your system as you like; the state frozen in this snapshot is the reference from which the backup is made in the background. This property of btrfs snapshots is called atomic. With rsync you can run into problems here, because rsync works on each file independently and needs time; as long as rsync is working on its backup, there is a chance that you modify some files. With btrfs snapshots that is impossible. That is one of the advantages of btrfs, and why Facebook and Google use it.

Timeshift is a nice tool, but with btrfs not the best choice. We want safety, and taking writable snapshots can't be the right way to that goal. We need ro-snapshots: even as root administrator you have only read-only access to the snapshots, and any "linux-virus-worm-encryption-trojan" sees only read-only snapshots :wink: Our documents are safe.


Thanks for your explanation, it makes a lot of sense. I just finished my setup following it; this is what my subvolumes look like now. That's correct, right?

ID 256 gen 6332 parent 5 top level 5 path @
ID 257 gen 6332 parent 5 top level 5 path @home
ID 258 gen 8 parent 5 top level 5 path @cache
ID 264 gen 16 parent 256 top level 256 path usr/var/lib/portables
ID 265 gen 17 parent 256 top level 256 path usr/var/lib/machines
ID 348 gen 6052 parent 5 top level 5 path @snapshots
ID 349 gen 6293 parent 348 top level 348 path @snapshots/root
ID 350 gen 6305 parent 348 top level 348 path @snapshots/home
ID 351 gen 6066 parent 350 top level 350 path @snapshots/home/1/snapshot
ID 352 gen 6068 parent 349 top level 349 path @snapshots/root/1/snapshot
ID 354 gen 6332 parent 5 top level 5 path @var
ID 364 gen 6137 parent 350 top level 350 path @snapshots/home/2/snapshot
ID 365 gen 6138 parent 349 top level 349 path @snapshots/root/2/snapshot
ID 369 gen 6289 parent 350 top level 350 path @snapshots/home/3/snapshot
ID 370 gen 6290 parent 349 top level 349 path @snapshots/root/3/snapshot
ID 371 gen 6292 parent 349 top level 349 path @snapshots/root/4/snapshot
ID 372 gen 6295 parent 350 top level 350 path @snapshots/home/4/snapshot

I need to read up on the documentation to check how to roll back and revert changes. Is it also possible to revert changes by copying files back from the snapshots directly, or do you need to use the snapper command?

Edit: Also, how do you document your snapshots? With just a bunch of random snapshots it's hard to say what happened when. I noticed that in the Snapper GUI you can easily change the description, so I could just change it there to whatever event happened. Is that the right way to go about it?

Yes, that's nearly right :wink:

The subvolumes usr/var/lib/portables and usr/var/lib/machines are generated automatically; I don't know why, and they're almost empty. To get rid of them:

sudo btrfs su de /usr/var/lib/machines
sudo btrfs su de /usr/var/lib/portables
sudo mkdir /usr/var/lib/machines
sudo mkdir /usr/var/lib/portables
sudo chmod 700 /usr/var/lib/machines
sudo chmod 700 /usr/var/lib/portables

Yes, you can directly access any file in a ro-snapshot and copy it back to your working subvolume. That's the smallest "rollback" you can do :wink: And for me the most frequent case, when I (or my wife) have deleted some files in the home folder.

Yes, the description can be changed by you. Because I use snap-pac and snap-sync, I leave their descriptions as they are. Only my own hand-made snapshots get my description, like:

snapper -c root create -t single -d "before kernel test" -u "important=yes"

Rollbacks I do as follows:

  1. I boot a ro-snapshot with GRUB into the DE.
  2. I test in this read-only running system whether everything works, as far as that is possible in a ro-operating-system; not everything is, of course.
  3. I retrieve the mount path of this ro-snapshot with

mount | grep btrfs

  4. I get something like /@snapshots/root/15/snapshot.
  5. I roll back:

sudo mv /btrfs/@ /btrfs/@snapshots/root/old-damaged-root
sudo btrfs subvolume snapshot /btrfs/@snapshots/root/15/snapshot /btrfs/@

  6. I reboot.

This supposes you have subvolid=5 mounted to the /btrfs folder in your fstab. I mostly use this access through /btrfs to avoid problems with getting placements/paths right when creating/moving subvolumes/snapshots. That is, I try to use only absolute paths into my btrfs subvolumes/snapshots.

I don't often make snapshots by hand. I let Snapper, snap-pac and snap-sync make them automatically. Each of these tools creates its own description, like:

// snapper -c home list
524  | pre    |          | Di 09 Feb 2021 12:40:49 CET | root     | number     | /usr/bin/pamac-daemon                                                       |                                                                            
525  | post   |      524 | Di 09 Feb 2021 12:42:58 CET | root     | number     | alsa-card-profiles amd-ucode appstream appstream-qt archlinux-appstream-... |                                                                            
553  | single |          | Mi 10 Feb 2021 00:01:01 CET | root     | timeline   | timeline                                                                    |                                                                            
572  | pre    |          | Mi 10 Feb 2021 17:49:40 CET | root     | number     | /usr/bin/pamac-daemon                                                       |                                                                            
573  | post   |      572 | Mi 10 Feb 2021 17:50:09 CET | root     | number     | linux510 linux510-virtualbox-guest-modules linux510-virtualbox-host-modu... |                                                                            
581  | single |          | Do 11 Feb 2021 00:01:01 CET | root     | timeline   | timeline                                                                    |                                                                            
641  | single |          | Sa 13 Feb 2021 03:01:01 CET | root     | timeline   | timeline                                                                    |                                                                            
642  | single |          | Sa 13 Feb 2021 03:29:02 CET | root     |            | latest incremental backup                                                   | backupdir=HR-NUCi5, subvolid=287, uuid=d49e1730-5137-473c-8e28-a76cf14e9830

As you can see, there are timeline snapshots from Snapper, backup snapshots from snap-sync, and pre/post snapshots made by snap-pac before/after the installation of some packages.


Ahhh okay, nice. Your explanation has helped a lot in understanding the intricacies of snapper :)! I’ll go play with it for a bit to get familiar with it.

How do snapshots add up size-wise? Currently my whole system takes up 90 GB; I'm wondering how much snapshots would take up.

OK, here are my shortened "home" and "root" configs for Snapper:

// cat /etc/snapper/configs/root


# run daily number cleanup

// cat /etc/snapper/configs/home





  1. ALLOW_GROUPS="wheel": admins can use Snapper without sudo.
  2. "root": only number cleanup, no timeline snapshots. That means for '/' only snapshots created by snap-pac and snap-sync, i.e. on install/removal of software and on backups; at most 32 such snapshots are kept, more are automatically deleted by Snapper.
  3. "home": the same, plus additional timeline snapshots:
  • one per hour for the last 12 hours
  • one per day for the last 7 days
  • one per week for the last 4 weeks
  • one per month for the last 12 months
  • one per year for the last 2 years

That's not much over time.
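Since the config text itself was lost in the quote above, here is a sketch reconstructing it from the description. The key names are real snapper config options (the files use shell-style KEY="value" syntax); the values are taken from the bullet points, so treat the whole thing as an illustration, not the author's literal files:

```shell
# Sketch of /etc/snapper/configs/root, reconstructed from the description:
ALLOW_GROUPS="wheel"     # members of wheel may use snapper without sudo
TIMELINE_CREATE="no"     # root config: no timeline snapshots
NUMBER_CLEANUP="yes"     # run the "number" cleanup algorithm
NUMBER_LIMIT="32"        # keep at most 32 number-type snapshots

# /etc/snapper/configs/home differs only in the timeline part:
TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="12"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_MONTHLY="12"
TIMELINE_LIMIT_YEARLY="2"
```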

Then my grub config for grub-btrfs

// /etc/default/grub

The three lines above are needed by grub-btrfs and its grub-btrfs.path service to create menu entries in the GRUB boot menu for snapshots. 32 snapshots are displayed as menu entries.
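The exact lines did not survive the quote. As a hypothetical illustration only: grub-btrfs settings are shell-style variables, and the names GRUB_BTRFS_LIMIT and GRUB_BTRFS_SUBMENUNAME exist in grub-btrfs, though current releases read them from /etc/default/grub-btrfs/config rather than /etc/default/grub. Verify the option names against your installed version:

```shell
# Hypothetical sketch of grub-btrfs settings (check your grub-btrfs
# version's config file for the exact names and location):
GRUB_BTRFS_LIMIT="32"                  # show at most 32 snapshot entries
GRUB_BTRFS_SUBMENUNAME="Snapshots"     # name of the GRUB submenu
```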

// /etc/default/grub
# If you want to enable the save default function, uncomment the following
# line, and set GRUB_DEFAULT to saved.

GRUB_SAVEDEFAULT is set to false (or commented out) because GRUB can't save its environment on BTRFS.


Good question; as an example:

My backup SSD is 900 GiB in size. It contains a full bootable Manjaro setup which I can boot to recover all 3 of my other computers. This setup is identical to all my machines, except that a @backups subvolume is added. This @backups subvolume contains all backups of my 3 machines over the last 2 months, one per day for each machine.

Currently it is 10% full, i.e. about 90 GiB. If I compute the space needed with Dolphin, I get 1.3 TiB of file sizes.

I currently use "compress-force=zstd:5" as the mount option for my btrfs subvolumes:

# Swap
UUID=06686a06-c069-49f7-86e4-7a962740b364       none                    swap            defaults                                                                        0 0

UUID=447C-E2BC                                  /boot/efi               vfat            noatime,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8                   0 2

# System
UUID=26c8751d-2747-4a4d-b857-32c82d67b20a       /btrfs                  btrfs           noatime,ssd,compress-force=zstd:5,subvolid=5                                    0 0
UUID=26c8751d-2747-4a4d-b857-32c82d67b20a       /                       btrfs           noatime,ssd,compress-force=zstd:5,subvol=@                                      0 0
UUID=26c8751d-2747-4a4d-b857-32c82d67b20a       /home                   btrfs           noatime,ssd,compress-force=zstd:5,subvol=@home                                  0 0
UUID=26c8751d-2747-4a4d-b857-32c82d67b20a       /var                    btrfs           noatime,ssd,nodatacow,subvol=@var                                               0 0
UUID=26c8751d-2747-4a4d-b857-32c82d67b20a       /.snapshots             btrfs           noatime,ssd,compress-force=zstd:5,subvol=@snapshots/root                        0 0
UUID=26c8751d-2747-4a4d-b857-32c82d67b20a       /home/.snapshots        btrfs           noatime,ssd,compress-force=zstd:5,subvol=@snapshots/home                        0 0

# var/lib mount into subvol=@
/usr/var/lib                                    /var/lib                none            defaults,bind                                                                   0 0

# Backup
UUID=d49e1730-5137-473c-8e28-a76cf14e9830       /media/Backups          btrfs           nofail,noatime,ssd,compress-force=zstd:5,subvol=@backups                        0 0

compress-force gives you a somewhat higher compression ratio. You can use "compsize" to calculate the real compression ratio; currently I see about 58% on 5 machines, i.e. a 42% reduction in size. Be aware that "compsize" needs some time to calculate.

I use zstd:5 instead of zstd:3, the default compression level, because my daily driver has an M.2 NVMe SSD with about 3 GB/sec read/write performance. In my tests, the sweet spot I could accept was zstd:5: I get about the same performance as an ordinary SATA 3.2 SSD with 570 MB/sec, which is good for me.


One thing to consider: you have to check that on /var and every subfolder/file, except /var/lib → /usr/var/lib, the +C attribute is set. That is important:

  • everything in /var must have CoW disabled, because otherwise the btrfs CoW feature could quickly expand the needed space
  • never make snapshots of such folders/subvolumes, because with CoW disabled (+C set) a snapshot takes much more space; everything is then copied instead of referenced
  • /usr/var/lib must stay CoW, i.e. must not have +C set, because we include it in snapshots of @ → '/'; we need CoW on the files in /usr/var/lib to avoid copying them on every snapshot of this folder

That is a drawback, but we have to include everything in /var/lib in the snapshots of @ to avoid breaking the system. We want, especially after some time, to boot a ro-snapshot into a cleanly working system. I already had this problem at my beginnings with btrfs: my old setup excluded everything in /var, even /var/lib. After the last big Manjaro upgrade, where kernel 5.8 went EOL and was uninstalled, I had problems booting older snapshots that included kernel 5.8. With this newer setup I tried it explicitly: I included an older kernel in a snapshot, then uninstalled this old kernel on the current system and booted into the old-kernel snapshot to see what would happen. It ran without problems, even with the kernel now uninstalled, because everything was included in the old-kernel snapshot.


Ahh thanks for the configs, I got some nice things out of there.

The Manjaro Architect automatically set my mount options to rw,noatime,compress=lzo,ssd,space_cache,commit=120. Should I change the compress argument to what you have? That sounds good. I'm also using an NVMe like that.

For the +C options I wasn't sure what to do, so I ran this from my secondary OS:

sudo chattr -R +C /mnt/@var/
sudo btrfs filesystem defragment -r -c /mnt/@var

Is that all there is to it?

I would change it. To explain:

  • With compress-force I get a higher overall compression ratio, and in some speed tests with a set of 32 GiB of common small files I get higher throughput, because btrfs then doesn't try to compress and abort if the first blocks of a file are incompressible; instead it always delegates all throughput to zstd compression.
  • Use ssd as a mount option only if an SSD is used. The space_cache / space_cache=v1 / space_cache=v2 option is selected automatically by btrfs depending on the setup it sees. Normally you don't want to provide ssd yourself, because btrfs should detect it correctly, but that's not always the case: my external backup drive is an SSD connected through a USB3→SATA3 adapter and is not correctly detected as an SSD. And when I set ssd and space_cache myself, btrfs accepts only space_cache=v2, not space_cache=v1, and that's not so good. Lesson: I provide the minimal information and let the operating system work out the rest.
  • As seen above, I use the nodatacow option for the /var mount. Currently that is ignored by btrfs: for all mounts of the same partition, btrfs uses the first mount options seen in fstab. That isn't right from an administrator's point of view. I still use nodatacow in fstab, knowing it is ignored, and hope it will be supported in the near future. Until then I set the +C attribute.

Perfectly right; even the defragmentation is needed. Look at:

ls /btrfs/@var/lib

If files remain there from the copy to /usr/var/lib, you can delete them now. For this copying I normally use rsync, to preserve all links, attributes, access rights and so on.

I set up my systems by cloning. That means: partitioning, btrfs volume setup, mounting to /mnt, setting the +C attributes and the /usr/var/lib mounts, rsync old system → new system, setting up fstab, chroot, initializing swap, changing the hostname, installing the GRUB UEFI and BIOS bootloaders, possibly mhwd, etc., exit, reboot. A clone is finished in about 10-15 minutes, and I have a system with all my tools and configurations as I want them. This way I even clone the Snapper configuration, and I set up an empty /var with the right attributes before any file is copied to it.

And now:
how complicated was it for you? How long did you need? Which information was missing, so that you had to search the web?

It wasn't extremely difficult, but I did spend a few hours researching and googling to set everything up. I used your whole setup methodology, but had to google which commands to use, as that was quite a puzzle at times. What took the longest was to realize that to create the top-level-5 subvolumes I have to do that from another OS and not directly from my running OS (lol). So next time will most definitely be faster.

So it was fine for me to follow but I get that a non-techy wouldn’t follow along with all the details you described.

In the end I now have a very robust snapshot setup thanks to your explanations, it’s greatly appreciated.

Edit: a setup question.
How does Snapper deal with other mounted volumes? I have an extra internal SSD, currently ext4, that I mount to /media/0x02, with a symlink to /home/user/0x02. Do they automatically get excluded from backups?

In hindsight that would have been better, lol. I just used sudo cp -r /mnt/@var/lib/* /mnt/@/usr/var/lib/