Hello,
I have to back up my Raspberry Pi frequently.
Until now I would shut down the Raspberry, plug the SD card into my Manjaro system, and back up the whole card via sudo dd if=...
I now finally had time to get the same thing going via SSH, using ssh pi@[ip] "sudo -S dd if=... | gzip -" | dd of=... status=progress
(with the awesome help of this community).
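As a sanity check of the pipeline idea (dd reading the device, gzip compressing in-flight, dd writing the image at the other end), here is a small local sketch; the 2 MB file of random data is just my stand-in for the real SD card device:

```shell
# Create a small file of random data as a stand-in for /dev/mmcblk0:
dd if=/dev/urandom of=/tmp/fake_disk.bin bs=1M count=2 2>/dev/null
# Read it with dd and compress in-flight, like the SSH backup does:
dd if=/tmp/fake_disk.bin bs=64K 2>/dev/null | gzip > /tmp/backup.img.gz
# Restore: decompress and write back out:
gunzip -c /tmp/backup.img.gz | dd of=/tmp/restored.bin bs=64K 2>/dev/null
# The restored copy is byte-identical, since gzip is lossless:
cmp /tmp/fake_disk.bin /tmp/restored.bin && echo "roundtrip OK"
```

The same shape scales up to the real command: only the input device, the hostnames and the output path change.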
Backing up by plugging the SD card in directly creates an image of nearly the size of the card, ~30 GB.
Backing up via SSH creates files of only ~5.3 GB.
Here are my questions:
Is the file size of the SSH backup smaller because of the gzip compression option?
Even though the SSH-backup .img file is much smaller, does it still include absolutely everything?
The SSH backup is made while the RPi is running and takes a very long time → is there a chance that the RPi's system changes too much during the backup, so that the created .img will not run after flashing because of inconsistencies caused by the slow backup of changing system files?
I'm afraid that I keep backing up and backing up, and that in an emergency the images won't work.
I did a test where flashing such a compressed .img generated via SSH recovered the system without any errors (so far).
dd of a mounted disk is not "proper". Will it work most of the time? Yes, but it is a "dirty" backup: better than nothing, but not a good backup plan. There are examples of how to back up a live system, which can be done via SSH, but they involve rsync and exclude the state directories (/run, /sys, /proc, etc.).
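To illustrate the exclude idea, here is a minimal local sketch; the directory names mimic the state dirs mentioned above, and all the paths are made up for the demo:

```shell
# Build a tiny fake system tree (stand-in for a live root filesystem):
mkdir -p /tmp/livesys/etc /tmp/livesys/run
echo "config" > /tmp/livesys/etc/fstab
echo "pid" > /tmp/livesys/run/app.pid
# Copy it while excluding the contents of the volatile state directories:
rsync -a --exclude='/proc/*' --exclude='/sys/*' --exclude='/run/*' \
      /tmp/livesys/ /tmp/livesys-backup/
# /etc/fstab is backed up; the contents of /run are not.
```

On a real system you would run this as root against / and add /dev and /tmp to the excludes as well.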
Yes, using gzip produces an output file smaller than the raw image created with dd.
I suspect the size difference has multiple causes:
When you back the card up directly there is no compression, and via SSH there is.
I suspect there is a lot of free space on the SD card? Free space is, well, free, so it compresses extremely well, as there is no information to keep. I suspect that if you back up a mostly empty 20 GB card with gzip as above, the result will only be a few MBs, if not KBs.
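This is easy to check locally: gzip collapses long runs of zero bytes, which is what zero-filled free space looks like to dd. A quick sketch:

```shell
# 100 MB of zero bytes compresses down to roughly a hundred kilobytes:
dd if=/dev/zero bs=1M count=100 2>/dev/null | gzip -c > /tmp/zeros.gz
# Print the compressed size in bytes:
wc -c < /tmp/zeros.gz
```

One caveat: free space only compresses this well if it is actually zero-filled; blocks that still hold leftover deleted data compress much worse.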
Seeing as you restored it fine, I don’t think there’ll be a problem in that regard.
Regarding rsync:
Currently I'm using rsync on my Manjaro system via Timeshift.
For my Raspberry I think this would be problematic because:
Using rsync without Timeshift → no version control once the sync is done → risk of syncing errors already present in the current system? → I would like to be able to go back to e.g. the 3rd-previous backup, etc.
Using Timeshift → need to have ext4 storage mounted on the system?
rsync generally just copies files and does not create flashable images for "fast recovery"???
In general:
One very important part of my Raspberry is my Docker volumes, located at /var/lib/docker/volumes,
which (I think) only root has access to.
It looks like my problem is that I want to back up a running system while it is not running → so it's impossible.
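If file-level backup of the volumes turns out to be enough, a root-run tar of that directory is one option. This sketch uses a fake directory in /tmp as a stand-in for /var/lib/docker/volumes; on the real Pi you would run it with sudo, and stop the containers first so the data inside is consistent:

```shell
# Stand-in for /var/lib/docker/volumes (the real path needs root):
mkdir -p /tmp/fake-volumes/influxdb-data/_data
echo "measurements" > /tmp/fake-volumes/influxdb-data/_data/db.file
# Archive the whole volumes tree into one compressed tarball:
tar -czf /tmp/volumes-backup.tar.gz -C /tmp/fake-volumes .
# List the archive to confirm the data made it in:
tar -tzf /tmp/volumes-backup.tar.gz | grep db.file
```

The -C flag makes the paths inside the archive relative, so restoring into a fresh volumes directory is a single tar -xzf.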
I think my solution will be:
Automated backups (e.g. via cron) with dd via SSH.
Every time, and before big changes, I will create a manual SD card backup; these are the "clean" backups.
I will research whether backing up /var/lib/docker/volumes/* is practical in my case.
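For the cron part, an entry along these lines might work; the IP, device path, schedule and target directory are all placeholders, and since cron cannot type a password you would need SSH key authentication plus a NOPASSWD sudo rule for dd on the Pi:

```shell
# Run Sundays at 03:00: read the whole card, compress in-flight,
# store a dated image on the backup machine. All paths are examples.
0 3 * * 0 ssh pi@192.168.1.50 "sudo dd if=/dev/mmcblk0 bs=1M | gzip -" > /backups/pi-$(date +\%F).img.gz
```

The \% is needed because cron treats a bare % in a command line specially.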
Pro: you would get a clean filesystem without fragmentation.
Con: you would need to prepare the bootloader and partitions beforehand.
Solution: use a base image, then overwrite it with rsync.
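Sketched out, that restore path might look roughly like this; the device names, partition number and backup path are placeholders, and these commands are destructive, so this is illustration only:

```shell
# 1. Flash a clean base image onto the new card:
sudo dd if=base.img of=/dev/sdX bs=4M status=progress
# 2. Mount the card's root partition:
sudo mount /dev/sdX2 /mnt
# 3. Overlay the rsync backup on top, skipping volatile state dirs:
sudo rsync -aAXH --exclude='/proc/*' --exclude='/sys/*' --exclude='/run/*' \
     /path/to/backup/ /mnt/
sudo umount /mnt
```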
P.S. I work with lots of embedded devices like the Raspi and would NEVER use dd on a running system.
I also thought about pushing this stuff to Git, because I'm already doing that with my dotfiles.
At the moment my content is not that big, but I added a dockerized InfluxDB database and it could soon grow.
Also, I don't know about all the secrets and private data I would be pushing to Git...
All of this is stuff I want to be able to simply re-flash in full in case e.g. the SD card breaks.
It can download and write a whole heap of different images, and for some of them you can even specify system options such as the SSH server and the initial username and password, among other things.