Best way to create system and files backups

restic restore requires a target dir. If I'm restoring the whole system, do I need to set "/" as the target? Will it overwrite files, and will it delete files that are not in the snapshot?

Yes, it overwrites files, but I do not think it deletes files.

This can be a problem, since a snapshot should capture a state of the system that can be completely restored. I'll check whether it deletes files.

Check out the feature that isn’t implemented yet


There can be a workaround via restic mount and rclone sync:

$ sudo restic -r <repo> mount <mount_dir>
$ rclone sync <mount_dir>/ids/<snapshot_id>/ <target_dir>/

rclone sync will delete local files that aren’t present in the snapshot.
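Since rclone sync deletes files, it may be worth previewing what it would do before running it for real. A sketch using rclone's --dry-run and -v flags (the paths are placeholders, as above):

```shell
# Preview: list what would be copied and deleted, without changing anything
rclone sync --dry-run <mount_dir>/ids/<snapshot_id>/ <target_dir>/

# Run for real; -v prints each file that is transferred or deleted
rclone sync -v <mount_dir>/ids/<snapshot_id>/ <target_dir>/
```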


There is rustic, which is a successor of restic, but rustic is written entirely in Rust instead of Go.

Both have largely the same features, but rustic additionally supports restoring with the --delete option, which deletes files that aren't present in the snapshot. This feature is still in beta, though.

It is available in the AUR.
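If I read the beta docs correctly, the restore would look roughly like this (the repository path, snapshot ID, and target are placeholders):

```shell
# Restore the snapshot and delete files in the target that are not
# contained in it (the beta --delete option mentioned above)
rustic -r /path/to/repo restore <snapshot_id> /target/dir --delete
```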


Make Backups:

  1. You could create separate btrfs subvolumes, which give you different scopes for snapshots
  2. You could enable compression in btrfs. This is at least as efficient as 7zip
  3. You could use btrfs send together with ssh :wink:

Please have a look at Btrfs in the wiki
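For point 3, a send over SSH could look roughly like this (the subvolume paths and the host name are made up; note that btrfs send needs a read-only snapshot):

```shell
# Create a read-only snapshot (btrfs send requires read-only sources)
sudo btrfs subvolume snapshot -r /home /home/.snap-today

# Stream the snapshot to another machine over SSH and apply it there
sudo btrfs send /home/.snap-today | ssh backup@nas "sudo btrfs receive /backups"
```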


It's a little different: I don't need cold backups; something like "timeshift with FTP" would be enough for me.

clonezilla

I would never back up to the drive I'm using; it always has to be an external device. I like the option of backing up a real clone: once my disk dies, I just replace the drive with the clone and I'm back on track. Less than 5 minutes of work, and even the worst case (a broken disk) is solved. Using NVMe drives gives acceptable speed, and while it's backing up I'll get a coffee and a smoke. What could be better?

I've had broken disks a couple of times in my life, and that's the moment when you have to have a real solution. Clonezilla is the key.

On my personal computer I make backups to an external hard drive; that's no problem there. But on the server it is desirable to make a backup that gets sent to external storage.

The difference between timeshift and borg is just that:

  • timeshift is made for hot backups
  • borg is made for cold backups

So think of timeshift like "System Recovery" on Windows.

I would suggest using timeshift for the root directory and borg for important files. Any configurations can go to a git repo.
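For the borg side, a minimal setup might look like this (the repository path and the backed-up paths are just examples):

```shell
# One-time: create an encrypted repository on the backup drive
borg init --encryption=repokey /mnt/backup/borg-repo

# Cold backup of important files; {now:%Y-%m-%d} puts the date in the name
borg create --stats /mnt/backup/borg-repo::docs-{now:%Y-%m-%d} ~/Documents ~/.config

# Thin out old archives: keep 7 daily and 4 weekly ones
borg prune --keep-daily=7 --keep-weekly=4 /mnt/backup/borg-repo
```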

If it is btrfs, you can increase the compression level.

You can use btrfs send to grab a raw stream of the snapshot and pipe it to a file. Also btrfs receive can get the file and put it back on the partition.
:notebook: A raw stream cannot be mounted and is not searchable.
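Written out, that could look like this (the paths are examples; compressing the stream with zstd is optional):

```shell
# Save the snapshot as a raw stream file, compressed with zstd
sudo btrfs send /mnt/.snapshots/root-today | zstd > /backup/root-today.btrfs.zst

# Restore: decompress the stream and replay it onto a btrfs filesystem
zstd -d < /backup/root-today.btrfs.zst | sudo btrfs receive /mnt/restore
```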

That's actually a rather more complex problem, but in the end anything other than a mirror server is a pain in the butt. Restoring a broken server is a no-go.

Also, it wasn't specified whether this is a VPS or a home server; you'll get different options for each.

For a homemade backup you'll probably only want to save the relevant data in /var, and make sure you aren't transferring it in the clear if you're using FTP.

You should be careful: zip does not support Unix permissions.
Many special permissions of system files will be lost after restoring.
AFAIK, tar supports Unix permissions.
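You can check this yourself in a couple of commands; this local demo (throwaway paths in /tmp) shows tar round-tripping a setuid mode bit:

```shell
# Create a file with a special mode, archive it, extract it elsewhere,
# and verify the mode survived the round trip
mkdir -p /tmp/permdemo/src /tmp/permdemo/out
touch /tmp/permdemo/src/secret
chmod 4750 /tmp/permdemo/src/secret           # setuid + rwxr-x---

tar -C /tmp/permdemo/src -cf /tmp/permdemo/a.tar secret
tar -C /tmp/permdemo/out -xpf /tmp/permdemo/a.tar   # -p restores permissions

stat -c '%a' /tmp/permdemo/out/secret         # prints 4750
```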

VPS / dedicated

I have a Synology NAS for this. Backups are exactly one of the things a NAS is made for. My backups are set up as follows:

  • system files, specifically the / partition (no home, no mounts): timeshift
  • /home partition: instant backup with Synology Drive Client
  • other partitions: not backed up

I recommend a similar setup; timeshift is great for undoing updates that went bad. That lowers my anxiety when updating, because some things on the system absolutely need to work 100% of the time.

A NAS solution may be expensive, but it will always cost less than your personal files. I learned that once. Synology seems to work very well with its home cloud solution, but you can also build your own server with an open-source cloud solution like ownCloud. A cloud solution is also better than the others because files get synced instantly as soon as you save them, it can easily be configured to sync one way or both ways, and a good cloud solution will also handle versioning of your files.

I also have other partitions/folders that I don’t backup. Those contain unimportant files, like games and temporary downloads.

When it comes to storing backups, it's a rather deep rabbit hole… :stuck_out_tongue: some crazy gurus say that for a real backup you need one local copy, then one copy at another location, and finally another copy on another continent :stuck_out_tongue_winking_eye: For me, a backup on a NAS with full redundancy is enough.

Unless you also want to hack around with rsync and whatnot along the way, I recommend timeshift + a cloud solution.

My favorite method for backing up the system is Clonezilla. It can store the output file wherever you want, even on a remote computer. And it has different compression methods, my favorite being z9p, which is designed for multicore systems and compresses 36 GB to a 16 GB file in less than 3 minutes. Decompression is even faster.


You're right, Clonezilla is still the Swiss army knife, but when it comes to server backups it is weaker, and that is the thread opener's main problem. A good backup strategy for servers is far more difficult, especially if the server has to run 24/7.

I see. I thought the OP wanted to store the backup on an online server, not to back up the server itself. If the server is a VM, then the easiest way would be to copy the whole VDI file (the file that represents the virtual hard drive) and store it somewhere else.

Well, in that case you have the option of using the provider's paid backup service, which usually does exactly what you are looking for: saving snapshots of the image.

The other choice is what I already outlined: save only the variable data (web, database, mail, whatever you have) and logs; the rest of the server can be recreated at any time and its contents should not matter.
For that I recommend using rsync and SSH, but you can use FTP as well; just make sure it's encrypted.
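As a sketch of that rsync-over-SSH approach (the host name and the exact data directories are made up; yours will differ):

```shell
# Mirror the variable data to the backup host; -a keeps permissions and
# ownership, -z compresses in transit, --delete keeps the mirror exact
rsync -az --delete /var/www/ backup@storage.example.com:/backups/server1/www/
rsync -az --delete /srv/db-dumps/ backup@storage.example.com:/backups/server1/db/
```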

Note that you likely have transfer quotas, and transferring manual backups will count towards that, which is another reason doing full system backups on your own is not recommended.

Not really. Let’s say I have:

  1. VPS or dedicated server
  2. FTP storage

Having them, I want to make a full backup of the VPS system to the FTP storage. In case of data loss / crash / hack / meteorite strike, I can easily download a snapshot from the storage and restore the system along with all the data.

I could just save my sites and other data. But a system snapshot lets me save much more: packages, configs, and so on. The system administrator does not have to set everything up again. In addition, a full snapshot even allows you to move to another hosting provider without losing your system.

Unfortunately, cheap providers do not offer this. Renting 150 GB of FTP storage is much cheaper for me than looking for a host that offers such a service.