Timeshift - Best way to create a backup/restore point for the future

Given that it is now running on ext4: one advantage of Timeshift's rsync mode over many other (ext4) options is that it will not duplicate files that are identical between snapshots, it hard links them instead. That keeps multiple snapshots from taking up much extra space, and it saves both time and space when most changes come from pacman updates (replacing entire files is how any binary-packaged Linux distribution works). Which, by chance, suits this use case, since any Manjaro user should be updating regularly.
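For anyone new to it, here's a minimal sketch of the hard-link trick that rsync-based snapshot tools like Timeshift rely on. The paths are made up for illustration; the point is the `--link-dest` behaviour, not these locations.

```bash
# Made-up paths, just to show the --link-dest mechanism (a real tool like
# Timeshift also adds a long exclude list for /proc, /sys, /dev, etc.).
PREV=/mnt/backup/snapshots/2024-05-01
NEW=/mnt/backup/snapshots/2024-05-08

# Files identical to the previous snapshot become hard links (no extra space);
# only new or changed files are actually copied into the new snapshot.
rsync -aAXH --delete --one-file-system --link-dest="$PREV" / "$NEW"

# Verify: an unchanged file shares a single inode between the two snapshots.
stat -c '%i %n' "$PREV/etc/fstab" "$NEW/etc/fstab"
```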

Earlier you were putting together your own rsync command, and with that unfortunate tangent I forgot to ask: did you try the exclude options within the GUI (or in the JSON config file)?
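For reference, something like this should show whether excludes are already configured. The config path and key name here are from memory and may differ between Timeshift versions, so verify against your own install:

```bash
# Path and key name are assumptions; adjust if your Timeshift version differs.
# The exclude patterns live in a JSON array that the GUI's Filters tab edits.
grep -A 10 '"exclude"' /etc/timeshift/timeshift.json
```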

On Manjaro or any Arch-based distro this simply doesn't work that well; the whole set of packages changes really quickly, so snapshots take a lot of space overall. Anyway, you're not supposed to keep a lot of snapshots. It doesn't help to go 8 months back; a couple of snapshots is enough in case you need to roll back an update. So in the end, snapshots don't take too much space, because you just don't keep more than a couple of them (or else you're doing it wrong, in my opinion).

I was comparing to my copy-on-write filesystem route, where block-level deltas are very efficient (i.e. btrfs/zfs/etc).

Replacing a whole file, from a CoW FS/snapshot perspective, works out to about the same amount of data stored as with rsync. The difference shows up with small changes to large files, where the rsync approach is terrible (it has to store a whole new copy instead of just the changed blocks). Hard linking whole files suits an environment where you mostly update through a package manager that replaces entire files (like most distros, including Manjaro). Of course each snapshot grows with every package update, by almost the same amount as what pacman reports as the installed size. Go figure!
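A tiny, self-contained way to see that trade-off (made-up file names, run it in a scratch directory; on btrfs/zfs the same one-byte change would cost roughly one block instead of a whole new copy):

```bash
mkdir -p live
dd if=/dev/urandom of=live/big.img bs=1M count=256 status=none

# Snapshot 1: nothing to link against yet, so it's a full copy.
rsync -aH live/ snap1/

# Change a single byte of the large file in the live tree.
printf 'x' | dd of=live/big.img bs=1 count=1 conv=notrunc status=none

# Snapshot 2: unchanged files would be hard linked to snap1, but big.img
# differs, so rsync stores a complete new 256M copy of it.
rsync -aH --link-dest="$PWD/snap1" live/ snap2/

du -shc snap1 snap2   # du counts hard-linked files once; expect ~512M total
```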

Says who?

I keep over 2 years back, and just delete old ones to stay under 75%-ish. Not sure what kind of storage you got over there.

They even get archived to my 48TB btrfs NAS box, since there's a lot of free space there. And just last week it was really handy pulling the libvirt config of a guest from years ago (one I thought I'd never need again). Saved me a lot of time! And that isn't an isolated case.

Backups take space. But Timeshift rsync is far better than copying everything outright, which many backup solutions basically do. It saves a ton of space when you want to use these Timeshift pseudo-snapshots. I can afford to rewrite every file on my system many, many times over. So if you have ample free space and have the use case, why not use it?
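If you want to see that saving on an existing set of rsync snapshots, something like this works; the snapshot path is just an example, adjust it to wherever yours live:

```bash
SNAPDIR=/mnt/backup/timeshift/snapshots   # example location, adjust to yours

# Each snapshot measured on its own looks like a full system copy...
for s in "$SNAPDIR"/*/; do du -sh "$s"; done

# ...but measured together in one du run, hard-linked files are only counted
# once, so the total is far smaller than the sum of the lines above.
du -shc "$SNAPDIR"/*/
```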

You wrote

To which I replied

Because of that fact above, subsequent snapshots still take a lot of space, that's all.

Me, or the logic of a Rolling Release distro.

You do you, but to me that makes absolutely no sense at all. The point of system snapshots is to be able to roll back a failed update, or a failed change you made while messing around in the system.

Indeed, I didn’t deny that. But again back to my previous point.


Twice now, I'm just trying to help. And OP seems to want Timeshift's pseudo-snapshots (since real snapshots are off the table), and wants to exclude certain parts (that's even in Timeshift's GUI). It's actually getting hard to tell now. I was just explaining the advantages of this method and how it leverages hard links, because they are not apparent to anyone new to it.

The first time, no one had enough information. So I got excited about a method that’s even better than this. But I was just working with what I had…

“Rolling release = 2 or 3 plus snapshots bad?” I have no words. Logic not compute.

There are point-release distros out there that do much more file updating than Manjaro, and plenty of applications that leverage snapshots while doing far more I/O between them. There are plenty of use cases, even on a desktop.

Part, and only part, of the reason you can now do this on your desktop is that lightning-fast consumer 4TB SSDs are cheap! But if you want to keep your 64TB SSD going for another decade, or whatever you run over in Omanoville, by all means!

I will keep doing me. I have a four tiered storage system. You have proven you wouldn’t understand, but this isn’t about me. So why discourage what OP wants?

Plus, I am talking about backing up snapshots or rollback points. Timeshift rsync does this, though I would consider it a hacky way. But if I didn't have a CoW FS, I would probably be doing it too.

For most of my career I got paid to understand filesystems.

About three decades ago, I was backing up VxFS (Veritas) snapshots, then a few years later, ZFS snapshots on Solaris. Fast-forward a couple of decades, and FreeBSD was way quicker to catch up than Linux ever was (at least in this department, but blame open source licensing). Those were the early days of software-based filesystem snapshotting, and you could not even do it on Linux. (Even now, there's a big asterisk next to it.) But the people who saw value in this leveraged it.

Before all that (hell, even currently), we had to buy hard drive arrays to do snapshots, ones that could hold up to 128 HDDs. Then newer, fancier NetApp boxes came around. But to cover everyone's snapshotting needs you had to pay hundreds of thousands of USD just for the licenses; for example, buying SnapMirror just to send snapshots remotely. When this was new, it opened so many possibilities that had never been possible before.

This just isn’t useful in the enterprise area of computing. Now we can do all that with just open source software, and OP just needs two drives. He sees a use for this, why can’t you?

As far as I know, one limitation of rsync is that it doesn't do atomic snapshots natively. This can lead to inconsistent data if files are changed during an automatic incremental backup run. Many different UI tools such as BackIntime, Timeshift … use rsync.

Some CoW filesystems such as Btrfs avoid this issue: they take an atomic snapshot in under a second even on a slow HDD, and that snapshot can then be sent to another backup disk with some tool.
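For completeness, a minimal sketch of that workflow, assuming / is a btrfs subvolume, /.snapshots exists, and a second btrfs-formatted disk is mounted at /mnt/backup (all assumptions about the setup; `root-PREVIOUS` is a placeholder for an existing parent snapshot):

```bash
TODAY=$(date +%F)   # example naming scheme, adjust to taste

# Atomic, read-only snapshot of the running system (instant, copy-on-write).
btrfs subvolume snapshot -r / "/.snapshots/root-$TODAY"

# Ship it to the second disk; -p sends only the delta against a parent
# snapshot that already exists on both source and destination.
btrfs send -p /.snapshots/root-PREVIOUS "/.snapshots/root-$TODAY" \
  | btrfs receive /mnt/backup/
```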