Snapper creates 2 snapshots on each automatic timed run

I'm just trying out snapper. I've just installed it, read the man page, and set up a quick config, but when it runs automatically it creates 2 snapshots and I can't find out why.

❱sudo snapper list

# │ Type   │ Pre # │ Date                         │ User │ Cleanup  │ Description │ Userdata
──┼────────┼───────┼──────────────────────────────┼──────┼──────────┼─────────────┼─────────
1 │ single │       │ Sat 15 Jun 2024 09:00:00 BST │ root │ timeline │ timeline    │
3 │ single │       │ Sat 15 Jun 2024 09:31:19 BST │ root │          │             │
4 │ single │       │ Sat 15 Jun 2024 10:00:01 BST │ root │ timeline │ timeline    │
5 │ single │       │ Sat 15 Jun 2024 10:01:00 BST │ root │ timeline │ timeline    │

It did make 2, at 09:00 and 09:01, but for some reason the 09:01 one has gone now, yet it still created both a 10:00 and a 10:01 snapshot.

❱sudo journalctl -u snapper-timeline.service --since "2024-06-15 09:00" --until "2024-06-15 11:00"

Jun 15 09:00:00 greg-optiplex7050 systemd[1]: Started Timeline of Snapper Snapshots.
Jun 15 09:00:00 greg-optiplex7050 systemd-helper[482158]: Running timeline for 'root'.
Jun 15 09:00:00 greg-optiplex7050 systemd[1]: snapper-timeline.service: Deactivated successfully.
Jun 15 10:00:01 greg-optiplex7050 systemd[1]: Started Timeline of Snapper Snapshots.
Jun 15 10:00:01 greg-optiplex7050 systemd-helper[500021]: Running timeline for 'root'.
Jun 15 10:00:01 greg-optiplex7050 systemd[1]: snapper-timeline.service: Deactivated successfully.

I don't see anything in the logs.

My config looks like this:

❱sudo snapper -c root get-config
Key                      │ Value
─────────────────────────┼──────
ALLOW_GROUPS             │
ALLOW_USERS              │
BACKGROUND_COMPARISON    │ yes
EMPTY_PRE_POST_CLEANUP   │ yes
EMPTY_PRE_POST_MIN_AGE   │ 3600
FREE_LIMIT               │ 0.2
FSTYPE                   │ btrfs
NUMBER_CLEANUP           │ yes
NUMBER_LIMIT             │ 50
NUMBER_LIMIT_IMPORTANT   │ 10
NUMBER_MIN_AGE           │ 3600
QGROUP                   │
SPACE_LIMIT              │ 0.5
SUBVOLUME                │ /
SYNC_ACL                 │ no
TIMELINE_CLEANUP         │ yes
TIMELINE_CREATE          │ yes
TIMELINE_LIMIT_DAILY     │ 10
TIMELINE_LIMIT_HOURLY    │ 10
TIMELINE_LIMIT_MONTHLY   │ 10
TIMELINE_LIMIT_QUARTERLY │ 0
TIMELINE_LIMIT_WEEKLY    │ 0
TIMELINE_LIMIT_YEARLY    │ 10
TIMELINE_MIN_AGE         │ 3600

These are all my systemd timers:

❱systemctl list-timers
NEXT                                  LEFT LAST                              PASSED UNIT                             ACTIVATES                         
Sat 2024-06-15 10:26:48 BST            11s Sat 2024-06-15 10:26:18 BST      18s ago check_bluetooth.timer            check_bluetooth.service
Sat 2024-06-15 11:00:00 BST          33min Sat 2024-06-15 10:00:01 BST    26min ago snapper-timeline.timer           snapper-timeline.service
Sun 2024-06-16 00:00:00 BST            13h Sat 2024-06-15 00:00:01 BST 5h 56min ago logrotate.timer                  logrotate.service
Sun 2024-06-16 00:00:00 BST            13h Sat 2024-06-15 00:00:01 BST 5h 56min ago shadow.timer                     shadow.service
Sun 2024-06-16 05:34:34 BST            19h Sat 2024-06-15 01:00:00 BST 4h 56min ago man-db.timer                     man-db.service
Sun 2024-06-16 06:49:49 BST            20h Sat 2024-06-15 02:19:52 BST 3h 36min ago systemd-tmpfiles-clean.timer     systemd-tmpfiles-clean.service
Sun 2024-06-16 08:51:05 BST            22h Sat 2024-06-15 08:01:05 BST 2h 25min ago updatedb.timer                   updatedb.service
Mon 2024-06-17 01:04:33 BST      1 day 14h Mon 2024-06-10 01:25:06 BST            - fstrim.timer                     fstrim.service
Thu 2024-06-20 08:24:05 BST         4 days Thu 2024-06-13 14:37:03 BST            - pamac-mirrorlist.timer           pamac-mirrorlist.service
Sun 2024-06-23 13:51:19 BST   1 week 1 day Wed 2024-06-12 09:40:24 BST            - archlinux-keyring-wkd-sync.timer archlinux-keyring-wkd-sync.service
Sat 2024-07-06 15:00:00 BST 3 weeks 0 days Wed 2024-06-05 12:29:30 BST            - pamac-cleancache.timer           pamac-cleancache.service
❱systemctl cat snapper-timeline.timer

# /usr/lib/systemd/system/snapper-timeline.timer

[Unit]
Description=Timeline of Snapper Snapshots
Documentation=man:snapper(8) man:snapper-configs(5)

[Timer]
OnCalendar=hourly

[Install]
WantedBy=timers.target

❱sudo journalctl _SYSTEMD_UNIT=snapper-timeline.service

Jun 15 09:00:00 greg-optiplex7050 systemd-helper[482158]: Running timeline for 'root'.
Jun 15 10:00:01 greg-optiplex7050 systemd-helper[500021]: Running timeline for 'root'.

There is nothing in my crontab or root's crontab.

So I'm stumped.

EDIT:

Yeah, it did the same again: removed the 10:01 one and added an 11:00 and an 11:01.

❱sudo snapper list
# │ Type   │ Pre # │ Date                         │ User │ Cleanup  │ Description │ Userdata
──┼────────┼───────┼──────────────────────────────┼──────┼──────────┼─────────────┼─────────
0 │ single │       │                              │ root │          │ current     │
1 │ single │       │ Sat 15 Jun 2024 09:00:00 BST │ root │ timeline │ timeline    │
3 │ single │       │ Sat 15 Jun 2024 09:31:19 BST │ root │          │             │
4 │ single │       │ Sat 15 Jun 2024 10:00:01 BST │ root │ timeline │ timeline    │
6 │ single │       │ Sat 15 Jun 2024 11:00:00 BST │ root │ timeline │ timeline    │
7 │ single │       │ Sat 15 Jun 2024 11:01:00 BST │ root │ timeline │ timeline    │

Could you show your config-files for snapper ?
:footprints:

Are there other config files for snapper besides the ones I already posted?

❱sudo snapper -c root get-config
Key                      │ Value
─────────────────────────┼──────
ALLOW_GROUPS             │
ALLOW_USERS              │
BACKGROUND_COMPARISON    │ yes
EMPTY_PRE_POST_CLEANUP   │ yes
EMPTY_PRE_POST_MIN_AGE   │ 3600
FREE_LIMIT               │ 0.2
FSTYPE                   │ btrfs
NUMBER_CLEANUP           │ yes
NUMBER_LIMIT             │ 50
NUMBER_LIMIT_IMPORTANT   │ 10
NUMBER_MIN_AGE           │ 3600
QGROUP                   │
SPACE_LIMIT              │ 0.5
SUBVOLUME                │ /
SYNC_ACL                 │ no
TIMELINE_CLEANUP         │ yes
TIMELINE_CREATE          │ yes
TIMELINE_LIMIT_DAILY     │ 10
TIMELINE_LIMIT_HOURLY    │ 10
TIMELINE_LIMIT_MONTHLY   │ 10
TIMELINE_LIMIT_QUARTERLY │ 0
TIMELINE_LIMIT_WEEKLY    │ 0
TIMELINE_LIMIT_YEARLY    │ 10
TIMELINE_MIN_AGE         │ 3600

Sorry, I'm not sure where they would be.

edit:
as root do:

for F in /etc/snapper/configs/*;do echo $F;cat -n $F;done;

I gave up trying to run that one-liner: it's missing spaces, has a trailing semicolon, doesn't work on the root-owned config directory, and putting sudo at the beginning doesn't work either.
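For what it's worth, the loop itself is sound once it runs with root's permissions; putting `sudo` in front fails because `for` is a shell keyword, not a command. A hedged sketch of an equivalent that can be handed to a root shell in one piece (the `list_configs` helper name is mine, not from the thread):

```shell
#!/bin/bash
# Same loop as the quoted one-liner, wrapped in a function so it can be
# passed to a root shell whole ("sudo for ..." cannot work because `for`
# is a shell keyword).
list_configs() {
    local dir="$1"          # e.g. /etc/snapper/configs
    local f
    for f in "$dir"/*; do
        echo "$f"
        cat -n "$f"
    done
}

# Real usage (the config files are only readable by root):
#   sudo bash -c "$(declare -f list_configs); list_configs /etc/snapper/configs"
```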

Regardless,

There is only 1 file in there:

❱sudo ls -al /etc/snapper/configs/
total 4
drwxr-xr-x 1 root root    8 Jun 15 08:25 .
drwxr-xr-x 1 root root   14 Jun 15 08:20 ..
-rw-r----- 1 root root 1230 Jun 15 12:35 root

and I've already shown the contents of that file.

❱sudo cat /etc/snapper/configs/root

# subvolume to snapshot
SUBVOLUME="/"

# filesystem type
FSTYPE="btrfs"


# btrfs qgroup for space aware cleanup algorithms
QGROUP=""


# fraction or absolute size of the filesystems space the snapshots may use
SPACE_LIMIT="0.5"

# fraction or absolute size of the filesystems space that should be free
FREE_LIMIT="0.2"


# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""

# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"


# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="no"


# run daily number cleanup
NUMBER_CLEANUP="yes"

# limit for number cleanup
NUMBER_MIN_AGE="3600"
NUMBER_LIMIT="50"
NUMBER_LIMIT_IMPORTANT="10"


# create hourly snapshots
TIMELINE_CREATE="yes"

# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"

# limits for timeline cleanup
TIMELINE_MIN_AGE="3600"
TIMELINE_LIMIT_HOURLY="1"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_QUARTERLY="0"
TIMELINE_LIMIT_YEARLY="10"


# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"

# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="3600"

However, I have just found that if I stop Timeshift from running all its automation, the double creation of snapshots seems to stop.

But now I'm left with more questions. I have both systemd timers and cron jobs for snapper installed alongside each other. They must have been installed by the package manager, and although it seems to be working (EDIT: the cleanup is not), I'm not sure why we have this setup and whether I should just leave it be?

/etc/conf.d/snapper
/etc/cron.hourly/snapper
/etc/logrotate.d/snapper
/etc/systemd/system/timers.target.wants/snapper-timeline.timer -> /usr/lib/systemd/system/snapper-timeline.timer
/etc/systemd/system/snapper-timeline.timer.d
/etc/snapper

Ahh, this is a mess. At the moment it is using the systemd snapper-timeline.timer to fire the snapshots, but it's also trying to use /etc/cron.hourly/snapper to take snapshots and to do the cleanup.
The cron script never gets past either of its for loops, so it never takes its snapshot or does its cleanup. I wouldn't want it to take the snapshot twice anyway, but I do want it to clean up, which it has not been doing in my testing so far.

❱cat /etc/cron.hourly/snapper
#!/bin/bash

LOGFILE="/var/log/snapper-cron.log"

for CONFIG in $SNAPPER_CONFIGS ; do

    TIMELINE_CREATE="no"

    . /etc/snapper/configs/$CONFIG

    if [ "$TIMELINE_CREATE" = "yes" ] ; then
        snapper --config=$CONFIG --quiet create --description="timeline" --cleanup-algorithm="timeline"
    fi

done

for CONFIG in $SNAPPER_CONFIGS ; do

    NUMBER_CLEANUP="no"
    TIMELINE_CLEANUP="no"
    EMPTY_PRE_POST_CLEANUP="no"

    . /etc/snapper/configs/$CONFIG

    if [ "$NUMBER_CLEANUP" = "yes" ] ; then
        snapper --config=$CONFIG --quiet cleanup number
    fi

    if [ "$TIMELINE_CLEANUP" = "yes" ] ; then
        snapper --config=$CONFIG --quiet cleanup timeline
    fi

    if [ "$EMPTY_PRE_POST_CLEANUP" = "yes" ] ; then
        snapper --config=$CONFIG --quiet cleanup empty-pre-post
    fi

done
  • Should I be using just one of the timer types, i.e. cron OR systemd?
  • If so, which?
  • If systemd, how do I get it to do the cleanup, because it is not doing that automatically at the moment?
  • If cron, how do I get that script past the for loops, because it won't at the moment?
  • Should I delete the other timer type?

You can't mix Timeshift with Snapper :laughing:

With zsh and bash it runs on my PC and produces:
for F in /etc/snapper/configs/*;do echo $F;cat -n $F;done;
/etc/snapper/configs/home
     1	
     2	# subvolume to snapshot
     3	SUBVOLUME="/home"
     4	
     5	# filesystem type
     6	FSTYPE="btrfs"
     7	
     8	# btrfs qgroup for space aware cleanup algorithms
     9	QGROUP=""
    10	
    11	# fraction of the filesystems space the snapshots may use
    12	SPACE_LIMIT="0.5"
    13	
    14	# users and groups allowed to work with config
    15	ALLOW_USERS="andreas andrea"
    16	ALLOW_GROUPS=""
    17	
    18	# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
    19	# directory
    20	SYNC_ACL="no"
    21	
    22	# start comparing pre- and post-snapshot in background after creating
    23	# post-snapshot
    24	BACKGROUND_COMPARISON="yes"
    25	
    26	# run daily number cleanup
    27	NUMBER_CLEANUP="yes"
    28	
    29	# limit for number cleanup
    30	NUMBER_MIN_AGE="1800"
    31	NUMBER_LIMIT="30"
    32	NUMBER_LIMIT_IMPORTANT="10"
    33	
    34	# create hourly snapshots
    35	TIMELINE_CREATE="yes"
    36	
    37	# cleanup hourly snapshots after some time
    38	TIMELINE_CLEANUP="yes"
    39	
    40	# limits for timeline cleanup
    41	TIMELINE_MIN_AGE="1800"
    42	TIMELINE_LIMIT_HOURLY="10"
    43	TIMELINE_LIMIT_DAILY="10"
    44	TIMELINE_LIMIT_WEEKLY="8"
    45	TIMELINE_LIMIT_MONTHLY="3"
    46	TIMELINE_LIMIT_YEARLY="0"
    47	
    48	
    49	# cleanup empty pre-post-pairs
    50	EMPTY_PRE_POST_CLEANUP="yes"
    51	
    52	# limits for empty pre-post-pair cleanup
    53	EMPTY_PRE_POST_MIN_AGE="1800"
    54	
/etc/snapper/configs/root
     1	
     2	# subvolume to snapshot
     3	SUBVOLUME="/"
     4	
     5	# filesystem type
     6	FSTYPE="btrfs"
     7	
     8	
     9	# btrfs qgroup for space aware cleanup algorithms
    10	QGROUP=""
    11	
    12	
    13	# fraction of the filesystems space the snapshots may use
    14	SPACE_LIMIT="0.5"
    15	
    16	
    17	# users and groups allowed to work with config
    18	ALLOW_USERS="andreas andrea fabian"
    19	ALLOW_GROUPS=""
    20	
    21	# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
    22	# directory
    23	SYNC_ACL="no"
    24	
    25	
    26	# start comparing pre- and post-snapshot in background after creating
    27	# post-snapshot
    28	BACKGROUND_COMPARISON="yes"
    29	
    30	
    31	# run daily number cleanup
    32	NUMBER_CLEANUP="yes"
    33	
    34	# limit for number cleanup
    35	NUMBER_MIN_AGE="1800"
    36	NUMBER_LIMIT="40"
    37	NUMBER_LIMIT_IMPORTANT="10"
    38	
    39	
    40	# create hourly snapshots
    41	TIMELINE_CREATE="yes"
    42	
    43	# cleanup hourly snapshots after some time
    44	TIMELINE_CLEANUP="yes"
    45	
    46	# limits for timeline cleanup
    47	TIMELINE_MIN_AGE="1800"
    48	TIMELINE_LIMIT_HOURLY="10"
    49	TIMELINE_LIMIT_DAILY="10"
    50	TIMELINE_LIMIT_WEEKLY="8"
    51	TIMELINE_LIMIT_MONTHLY="3"
    52	TIMELINE_LIMIT_YEARLY="0"
    53	
    54	
    55	# cleanup empty pre-post-pairs
    56	EMPTY_PRE_POST_CLEANUP="yes"
    57	
    58	# limits for empty pre-post-pair cleanup
    59	EMPTY_PRE_POST_MIN_AGE="1800"
    60	

Forgot to mention:
You can't run it with sudo; you need to be root :wink: to run it.
Because snapper hides its configuration from normal users.
:footprints:

Yep, I have turned off all automation in Timeshift for now. I don't want to remove it completely just yet, as I might fall back to it if I can't get snapper working the way I want.

No problem :slight_smile: , I'm past that part now, but thanks.

So what I'm seeing is that it installs both cron and systemd timers. I can get the systemd side to work manually (although that's not a very user-friendly way for an app to work); I'm now looking at why /etc/cron.hourly/snapper is not working. It seems to be firing, but it doesn't even get past the for loops, so I suspect $SNAPPER_CONFIGS is empty. Finding out is my next step. Why is it built like this?

I stuck a little logging in:

#!/bin/bash

LOGFILE="/var/log/snapper-cron.log"
echo "Running snapper cron job at $(date)" >> $LOGFILE

for CONFIG in $SNAPPER_CONFIGS ; do
    TIMELINE_CREATE="no"
    . /etc/snapper/configs/$CONFIG

    echo "Processing configuration: $CONFIG" >> $LOGFILE
    if [ "$TIMELINE_CREATE" = "yes" ] ; then
        echo "Creating snapshot for $CONFIG" >> $LOGFILE
        snapper --config=$CONFIG --quiet create --description="timeline" --cleanup-algorithm="timeline" >> $LOGFILE 2>&1
    else
        echo "TIMELINE_CREATE is not set to yes for $CONFIG" >> $LOGFILE
    fi
done

for CONFIG in $SNAPPER_CONFIGS ; do
    NUMBER_CLEANUP="no"
    TIMELINE_CLEANUP="no"
    EMPTY_PRE_POST_CLEANUP="no"
    . /etc/snapper/configs/$CONFIG

    echo "Cleaning up configuration: $CONFIG" >> $LOGFILE
    if [ "$NUMBER_CLEANUP" = "yes" ] ; then
        echo "Running number cleanup for $CONFIG" >> $LOGFILE
        snapper --config=$CONFIG --quiet cleanup number >> $LOGFILE 2>&1
    fi

    if [ "$TIMELINE_CLEANUP" = "yes" ] ; then
        echo "Running timeline cleanup for $CONFIG" >> $LOGFILE
        snapper --config=$CONFIG --quiet cleanup timeline >> $LOGFILE 2>&1
    fi

    if [ "$EMPTY_PRE_POST_CLEANUP" = "yes" ] ; then
        echo "Running empty-pre-post cleanup for $CONFIG" >> $LOGFILE
        snapper --config=$CONFIG --quiet cleanup empty-pre-post >> $LOGFILE 2>&1
    fi
done

And a big P.S.: I know I should start a new thread, but meh.

I was tinkering with btrfs-assistant to see if I could figure anything out, and I noticed something really odd.

This is my lsblk

❱lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0 931.5G  0 disk 
├─sda1        8:1    0 636.4G  0 part /mnt/ssd
├─sda2        8:2    0   300M  0 part 
└─sda3        8:3    0 294.8G  0 part 
nvme0n1     259:0    0 953.9G  0 disk 
├─nvme0n1p1 259:1    0   300M  0 part /boot/efi
├─nvme0n1p3 259:2    0   9.9G  0 part [SWAP]
├─nvme0n1p4 259:3    0 460.2G  0 part 
└─nvme0n1p5 259:4    0 483.5G  0 part /var/log
                                      /home
                                      /var/cache
                                      /

This is exactly how it should be. But while btrfs-assistant is running, it looks like this:

❱lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0 931.5G  0 disk 
├─sda1        8:1    0 636.4G  0 part /mnt/ssd
├─sda2        8:2    0   300M  0 part 
└─sda3        8:3    0 294.8G  0 part 
nvme0n1     259:0    0 953.9G  0 disk 
├─nvme0n1p1 259:1    0   300M  0 part /boot/efi
├─nvme0n1p3 259:2    0   9.9G  0 part [SWAP]
├─nvme0n1p4 259:3    0 460.2G  0 part 
└─nvme0n1p5 259:4    0 483.5G  0 part /run/BtrfsAssistant/5bf86f84-a752-4841-bcba-21f873673765

And then if I close it, it's even weirder:

❱lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0 931.5G  0 disk 
├─sda1        8:1    0 636.4G  0 part /mnt/ssd
├─sda2        8:2    0   300M  0 part 
└─sda3        8:3    0 294.8G  0 part 
nvme0n1     259:0    0 953.9G  0 disk 
├─nvme0n1p1 259:1    0   300M  0 part /boot/efi
├─nvme0n1p3 259:2    0   9.9G  0 part [SWAP]
├─nvme0n1p4 259:3    0 460.2G  0 part 
└─nvme0n1p5 259:4    0 483.5G  0 part /var/log

So all my btrfs subvolume mounts have gone??

I tried sudo mount -a and it then mounted everything twice in the same place!! Eek.

Reboot system :frowning:

Linux is certainly not a walk in the park.

You will find that Timeshift and Snapper are completely different things. Both take snapshots with btrfs. But the differences are huge.

Once everything is set up, it is important to check every now and then during the first 6 months that everything is OK (so that btrfs does not overflow). :footprints:

Yeah, don't worry, I'm just in the testing phase with snapper at the moment. I am comfortable-ish with Timeshift, and now I want to look at snapper to see if I can use it to do remote backups to an external btrfs system, since I can't really see a way to do that with Timeshift.
After I decide what I want, everything will be slimmed right back down to the bare minimum, all snapshots will be deleted and started anew, Timeshift will go completely, and I probably won't use most of the little front-end GUIs either. We shall see.

Why has this code got errors in it? It looks like /etc/conf.d/snapper is never sourced in the script, so $SNAPPER_CONFIGS is empty and it would never have worked.
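A minimal sketch of that fix, assuming the variable really is meant to come from /etc/conf.d/snapper as described: source the file before the loops run. The echo stands in for the real snapper calls, and the path is parameterised here purely so the sketch can be exercised anywhere.

```shell
#!/bin/bash
# Sketch: populate SNAPPER_CONFIGS before looping over it. On the system in
# question the file would be /etc/conf.d/snapper.
SNAPPER_CONFIG_FILE="${SNAPPER_CONFIG_FILE:-/etc/conf.d/snapper}"

if [ -f "$SNAPPER_CONFIG_FILE" ]; then
    . "$SNAPPER_CONFIG_FILE"    # expected to set e.g. SNAPPER_CONFIGS="root"
fi

for CONFIG in $SNAPPER_CONFIGS; do
    echo "would process config: $CONFIG"   # real script runs snapper here
done
```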

I would say I was happy to use btrfs-assistant, where you can switch the services on and off and forget about cron, but I don't like what it's doing to my mount points.

You may have a look at

Suggestions are welcome :wink:


Thanks, I'm almost there.

I just need someone to ease my worries about this lsblk thing.

❱lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0 931.5G  0 disk 
├─sda1        8:1    0 636.4G  0 part /mnt/ssd
├─sda2        8:2    0   300M  0 part 
└─sda3        8:3    0 294.8G  0 part 
nvme0n1     259:0    0 953.9G  0 disk 
├─nvme0n1p1 259:1    0   300M  0 part /boot/efi
├─nvme0n1p3 259:2    0   9.9G  0 part [SWAP]
├─nvme0n1p4 259:3    0 460.2G  0 part 
└─nvme0n1p5 259:4    0 483.5G  0 part /var/log

After using btrfs-assistant it shows that I've lost my mounts of /, /home, and /var/cache. It surely must just be an lsblk error, as I can't exactly lose my / and /home without noticing anything happening? But some reassurance is needed :slight_smile:

what about

mount -t btrfs

:cry:

❱sudo mount -t btrfs
/dev/nvme0n1p5 on / type btrfs (rw,noatime,ssd,discard=async,space_cache=v2,subvolid=351,subvol=/@)
/dev/nvme0n1p5 on /var/cache type btrfs (rw,noatime,ssd,discard=async,space_cache=v2,subvolid=258,subvol=/@cache)
/dev/nvme0n1p5 on /home type btrfs (rw,noatime,ssd,discard=async,space_cache=v2,subvolid=349,subvol=/@home)
/dev/nvme0n1p5 on /var/log type btrfs (rw,noatime,ssd,discard=async,space_cache=v2,subvolid=259,subvol=/@log)

❱lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0 931.5G  0 disk 
├─sda1        8:1    0 636.4G  0 part /mnt/ssd
├─sda2        8:2    0   300M  0 part 
└─sda3        8:3    0 294.8G  0 part 
nvme0n1     259:0    0 953.9G  0 disk 
├─nvme0n1p1 259:1    0   300M  0 part /boot/efi
├─nvme0n1p3 259:2    0   9.9G  0 part [SWAP]
├─nvme0n1p4 259:3    0 460.2G  0 part 
└─nvme0n1p5 259:4    0 483.5G  0 part /var/log


I have checked the mount list and it's all there, which is why I'm assuming it's an lsblk bug. But I'm not a guru, and the lsblk devs are.
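One way to reassure yourself without lsblk is to read the mount table directly. A sketch under the assumption that `mount -t btrfs` output looks like the listing above ("DEV on TARGET type btrfs (...)"); the `list_btrfs_targets` helper name is mine:

```shell
#!/bin/bash
# Sketch: pull the mount targets out of `mount -t btrfs` output. If /, /home,
# etc. appear here, they are mounted regardless of what lsblk's MOUNTPOINTS
# column chooses to display.
list_btrfs_targets() {
    awk '$2 == "on" && $4 == "type" {print $3}'
}

# Real usage:
#   mount -t btrfs | list_btrfs_targets
# or, with util-linux's findmnt:
#   findmnt -t btrfs
```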


Well yeah, it sure looks cool, and it worked once. It did a full backup of my local btrfs drive, and I can mount that drive and see the snapshots, but after that it just errors every time I try to run it, saying:

java.io.FileNotFoundException: Could not create dir: /tmp/BackupRoot/@BackSnap/manjaro18/snapshots

I mounted the external btrfs fs and found "@BackSnap/manjaro18/snapshots". I deleted it. It ran again without that error, but with a different one:

x: ERROR: parent subvolume /tmp/BtrfsRoot/timeshift-btrfs/snapshots/2024-06-15_11-00-00/@ is not read-only
x: ERROR: empty stream is not considered valid

But it did at least show the backup snapshots in the right-hand window. After that, I tried to run it again and it was back to the original error about not being able to create the snapshots folder.

I suspect it's your GitHub project? I shall post a fuller issue on GitHub.

Don't suppose you have seen the same error?

I was the same way. I started with sending single Timeshift snapshots over. They are large, but it worked, and it was easy.

I wouldn’t recommend it, but you can send incremental differences between Timeshift snapshots as well. I did it because I was new to btrfs, and I enjoy learning that way.

This is one of the biggest advantages of Snapper: it will handle all the differences automatically once configured, and do it efficiently.
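For context, the incremental transfer being described comes down to `btrfs send -p <parent> <child> | btrfs receive <dest>`. The sketch below only prints the command rather than running it (the real thing needs root, two read-only snapshots, and the parent already present on the destination); all paths are illustrative, not from this thread.

```shell
#!/bin/bash
# Build the incremental send/receive pipeline as a string. Both snapshots
# must be read-only, and the parent must already exist on the destination.
incremental_send_cmd() {
    local parent="$1" child="$2" dest="$3"
    printf 'btrfs send -p %s %s | btrfs receive %s\n' "$parent" "$child" "$dest"
}

# Illustrative paths only:
incremental_send_cmd \
    /mnt/src/.snapshots/100/snapshot \
    /mnt/src/.snapshots/101/snapshot \
    /mnt/backup/snapshots
```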

Challenge accepted.

That is what backsnap tries to do :wink:

The newest version is in a branch


This indicates that you did not specify the right parameters. The program then falls back to my own defaults (manjaro18) :wink:

I suggest you use the newest version in the branch.
You'd best create a config file in /etc/backsnap.d like:

cat /etc/backsnap.d/local.conf
# backup local pc per sudo
pc = localhost
# backup local pc per ssh
# pc = root@localhost

# detect and mount backupvolume by scanning for this id (as part of the uuid)
backup_id = 0341703XX3745-4ae7-9451-efafcbb9124e

# use these flags for the following backup (optional)
flags = -c -v=5 -a=5 -o=4000 -m=1000
# backuplabel = manjaro18 for snapshots of /
manjaro18 = /

flags = -c -v=5 -a=5 -o=4000 -m=1000
# backuplabel = manjaro18.home for snapshots of /home
manjaro18.home = /home

This configuration will create 2 backups from local snapshots of / and /home and name them manjaro18 (the name of this PC in backups) and manjaro18.home

You can use any name [a-zA-Z0-9_.]+

  • If you run backsnap --help you will see your configuration being read
  • If an error occurs, try again with -v=… increased
  • If you run backsnap -gc you will see the GUI
Usage:
------
/usr/local/bin/backsnap [OPTIONS]

 -h --help           show usage
 -x --version        show date and version
 -d --dryrun         do not do anything ;-)
 -v --verbose        be more verbose (-v=9)
 -s --singlesnapshot backup exactly one snapshot
 -g --gui            enable gui (works only with sudo)
 -a --auto           auto-close gui when ready
 -c --compressed     use protokoll version2 for send/receive (if possible)
 -i --init           init /etc/backsnap.d/local.conf (only with -g)
 -o --deleteold      mark old backups for deletion in gui (-o=500)
 -m --keepminimum    mark all but minimum backups for deletion in gui (-m=250)  
 
 -o,-m,        need  manual confirmation in the gui to delete marked snapshots
 -i            needs gui to confirm uuid of backup-medium
  

:footprints:

Yeah, it's a great project, and I have got it to work. It's mostly the GUI that fails, with all sorts of errors, but as long as you don't use the GUI it might be usable. The only thing I want to know is: how do you run it so that it will automatically delete all the "unneeded" snapshots?

I've got the configs set to -m=10 in an attempt to see if it will delete snapshots as they are removed from the Timeshift list. It will do it in the GUI (if you can get it to load), but you have to click "delete some unneeded snapshots". I just hope that if I set the -m option it will do it automatically.

Just thought I'd give you some of the GUI errors, if you want.


So it's mostly giving this at the moment:

❱sudo backsnap -g
BackSnap Version 0.6.7.10 (2023/11/02)
args >  -g -v=10 
java [version=22, major=22, minor=null, patch=null]
using ThreadPerTaskExecutor
Pc[sudo ] & Id:4ac04e63-d68d-47d1-a6c5-f33d9f7c29de
OneBackup[srcPc=Pc[sudo ], srcPath=/, backupLabel=manjaro18, flags=-t -v=1 -m=5]
mount -t btrfs 
sudo mkdir --mode=000 -p /tmp/BtrfsRoot;sudo mount -t btrfs -o subvol=/ /dev/nvme0n1p5 /tmp/BtrfsRoot
mount -t btrfs 

sudo btrfs subvolume list -apuqRcg /
home
sudo btrfs subvolume list -apuqRcg /tmp/BtrfsRoot
sudo btrfs subvolume list -apuqRcg /var/cache
sudo btrfs subvolume list -apuqRcg /var/log
SnapConfig [volumeMount=/, snapshotMount=/]
sudo btrfs subvolume list -apuqRcg /
sudo btrfs subvolume show /
Backup snapshots from sudo :/
sudo btrfs filesystem show  -d
sudo mkdir --mode=000 -p /tmp/BackupRoot;sudo mount -t btrfs -o subvol=/,compress=zstd:9  /dev/sda3 /tmp/BackupRoot
mount -t btrfs 
Try to use backupDir  sudo :/tmp/BackupRoot
btrfs filesystem usage /tmp/BackupRoot;btrfs device usage /tmp/BackupRoot
sudo btrfs subvolume list -apuqRcg /tmp/BackupRoot
Snap: 2024-06-09_18-00-00 2024-06-10_16-10-54 2024-06-10_18-00-00 2024-06-11_18-00-00 2024-06-12_04-27-01
 2024-06-12_20-00-01 2024-06-13_20-00-01 2024-06-14_07-31-41 2024-06-14_07-32-06 2024-06-14_20-00-00 2024-06-15_20-00-00
 2024-06-16_00-00-00 2024-06-16_01-00-00 2024-06-16_09-00-00
Backup: 2024-06-09_18-00-00 2024-06-10_16-10-54 2024-06-10_18-00-00 2024-06-11_18-00-00 2024-06-12_04-27-01
 2024-06-12_20-00-01 2024-06-13_20-00-01 2024-06-14_07-31-41 2024-06-14_07-32-06 2024-06-14_20-00-00 2024-06-15_20-00-00
 2024-06-16_00-00-00 2024-06-16_01-00-00 2024-06-16_09-00-00

Skip: 2024-06-09_18-00-00 2024-06-10_16-10-54 2024-06-10_18-00-00 2024-06-11_18-00-00 2024-06-12_04-27-01
 2024-06-12_20-00-01 2024-06-13_20-00-01 2024-06-14_07-31-41 2024-06-14_07-32-06 2024-06-14_20-00-00 2024-06-15_20-00-00
 2024-06-16_00-00-00 2024-06-16_01-00-00 2024-06-16_09-00-00
sudo umount -v /tmp/BtrfsRoot;sudo rmdir /tmp/BtrfsRoot
mount -t btrfs 

sudo btrfs filesystem show  -d
sudo umount -v /tmp/BackupRoot;sudo rmdir /tmp/BackupRoot
sudo btrfs subvolume list -apuqRcg /tmp/BackupRoot
mount -t btrfs 
ende:X readyBackup:
/
home
/var/cache
/var/log
Exception in thread "" java.lang.RuntimeException: 
Could not find the volume for backupDir: /tmp/BackupRoot/@BackSnap/null
Maybe it needs to be mounted first
        at de.uhingen.kielkopf.andreas.backsnap.btrfs.Pc.getBackupMount(Pc.java:223)
        at de.uhingen.kielkopf.andreas.backsnap.gui.BacksnapGui.lambda$14(BacksnapGui.java:758)
        at java.base/java.util.concurrent.ThreadPerTaskExecutor$TaskRunner.run(ThreadPerTaskExecutor.java:314)
        at java.base/java.lang.VirtualThread.run(VirtualThread.java:321)

But if I keep trying, I will eventually get one run to work (with no changes to settings), and I can then delete the unneeded files via the GUI:

❱sudo backsnap -g
BackSnap Version 0.6.7.10 (2023/11/02)
args >  -g 
java [version=22, major=22, minor=null, patch=null]
using ThreadPerTaskExecutor
Pc[sudo ] & Id:4ac04e63-d68d-47d1-a6c5-f33d9f7c29de
OneBackup[srcPc=Pc[sudo ], srcPath=/, backupLabel=manjaro18, flags=-t -v=1 -m=5]
sudo mkdir --mode=000 -p /tmp/BtrfsRoot;sudo mount -t btrfs -o subvol=/ /dev/nvme0n1p5 /tmp/BtrfsRoot
Backup snapshots from sudo :/
sudo mkdir --mode=000 -p /tmp/BackupRoot;sudo mount -t btrfs -o subvol=/,compress=zstd:9  /dev/sda3 /tmp/BackupRoot
Try to use backupDir  sudo :/tmp/BackupRoot
Snap: 2024-06-09_18-00-00 2024-06-10_16-10-54 2024-06-10_18-00-00 2024-06-11_18-00-00 2024-06-12_04-27-01
 2024-06-12_20-00-01 2024-06-13_20-00-01 2024-06-14_07-31-41 2024-06-14_07-32-06 2024-06-14_20-00-00 2024-06-15_20-00-00
 2024-06-16_00-00-00 2024-06-16_01-00-00 2024-06-16_09-00-00

Skip: 2024-06-09_18-00-00 2024-06-10_16-10-54 2024-06-10_18-00-00 2024-06-11_18-00-00 2024-06-12_04-27-01
 2024-06-12_20-00-01 2024-06-13_20-00-01 2024-06-14_07-31-41 2024-06-14_07-32-06 2024-06-14_20-00-00 2024-06-15_20-00-00
 2024-06-16_00-00-00 2024-06-16_01-00-00
manjaro18: Backup of 2024-06-16_09-00-00 based on 2024-06-16_01-00-00
x: At subvol /tmp/BtrfsRoot/timeshift-btrfs/snapshots/2024-06-16_09-00-00/@
l: 4.77MiB 0:00:00 [71.7MiB/s] [<=>                                               ]

At snapshot @


to remove 2024-06-08_18-00-00
/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-08_18-00-00/@
Delete subvolume 414 (commit): '/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-08_18-00-00/@'
to remove 2024-06-15_09-00-00
/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_09-00-00/@
Delete subvolume 425 (commit): '/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_09-00-00/@'
to remove 2024-06-15_10-00-01
/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_10-00-01/@
Delete subvolume 426 (commit): '/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_10-00-01/@'
to remove 2024-06-15_11-00-00
/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_11-00-00/@
Delete subvolume 427 (commit): '/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_11-00-00/@'
to remove 2024-06-15_21-00-00
/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_21-00-00/@
Delete subvolume 449 (commit): '/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_21-00-00/@'
to remove 2024-06-15_22-00-00
/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_22-00-00/@
Delete subvolume 450 (commit): '/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_22-00-00/@'
to remove 2024-06-15_23-00-01
/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_23-00-01/@
Delete subvolume 451 (commit): '/tmp/BackupRoot/@BackSnap/manjaro18/2024-06-15_23-00-01/@'

At the moment I have disabled deleting in the CLI. Deleting is only possible from the GUI for now, so you can see what you delete.

Deleting is only necessary once in a while (every 2-3 months).
:footprints:

I had given up on snapper; it was too frustrating. I moved on to other methods and banged my head against the wall with those instead (another story).

Anyhow, I recently went back to look at snapper again. I installed a clean OS in VB with just snapper (no Timeshift), but it was still doing the same thing (taking 2 automatic hourly snapshots).

I have just now found the answer. It wasn't me going mad after all.

https://unix.stackexchange.com/questions/425570/snapper-has-recently-started-performing-duplicate-snapshots-each-hour
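For anyone landing here later: as I understand it, the linked answer attributes duplicate hourly snapshots to two timer mechanisms firing side by side, which matches what this thread found (the cron.hourly script and the systemd timer both installed). A hedged sketch for checking which mechanisms are present; the `timer_sources` helper and the disable suggestion are mine, not taken verbatim from the linked answer:

```shell
#!/bin/bash
# Sketch: check whether both hourly snapshot mechanisms are installed.
# Paths are the ones seen earlier in this thread.
timer_sources() {
    local cron_job="$1"     # e.g. /etc/cron.hourly/snapper
    local timer_link="$2"   # e.g. /etc/systemd/system/timers.target.wants/snapper-timeline.timer
    [ -e "$cron_job" ]   && echo "cron hourly job present"
    [ -e "$timer_link" ] && echo "systemd timer enabled"
    return 0
}

timer_sources /etc/cron.hourly/snapper \
    /etc/systemd/system/timers.target.wants/snapper-timeline.timer

# If both lines print, keep one mechanism; e.g. keep systemd and
# neutralise the cron job:
#   sudo mv /etc/cron.hourly/snapper /etc/cron.hourly/snapper.disabled
```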