I don’t know, never tested, just something I’ve picked up somewhere (probably 7-zip on Windows).
But perhaps useful info for users of old computers.
I’ve settled on 6 threads for xz (i.e. half of the available threads).
I did some more tests with xz, using the google-chrome package. I ran them on an Intel Core i5-4670 CPU with all files stored on an HDD. My system has 16 GB of RAM; on systems with less memory, xz should also use less. It is not really scientific, since I ran each command only once, so there is no error analysis.
The google-chrome.pkg.tar file has a size of 273489920 bytes.
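A sketch of the kind of timing run used here (the input below is random data, which hardly compresses, so only the mechanics carry over; the file name is just a placeholder, not the actual package):

```shell
set -e
tmpdir=$(mktemp -d)
# Stand-in input file for the real .pkg.tar:
head -c 10000000 /dev/urandom > "$tmpdir/pkg.tar"
# --keep leaves the input in place so the run can be repeated;
# `time` reports how long the compression took.
time xz -6 --threads=6 --keep "$tmpdir/pkg.tar"
ls -l "$tmpdir/pkg.tar.xz"
rm -r "$tmpdir"
```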
Can you please give me a hint what to use when I don’t want to use all cores (as with `--threads=0`), but more than one?
Afaik the compression preset can be set from -0 to -9; a higher number means stronger compression.
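For example (with XZ Utils’ `xz`, `-T`/`--threads` takes a thread count, and `-T0` means one thread per core), a run with a fixed thread count and preset might look like this sketch:

```shell
set -e
tmpdir=$(mktemp -d)
# Stand-in input file; any tarball works the same way.
head -c 1000000 /dev/urandom > "$tmpdir/demo.tar"
# Preset -6, exactly 4 threads instead of --threads=0 (= all cores):
xz -6 --threads=4 --keep "$tmpdir/demo.tar"
ls -l "$tmpdir/demo.tar.xz"
rm -r "$tmpdir"
```

Note that xz multithreads by splitting the input into blocks, so on inputs smaller than the preset’s block size it will effectively use one thread anyway, without any warning.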
How does this change if I use an SSD with Btrfs mounted with the options defaults,noatime,space_cache,autodefrag,compress=lzo?
There is enough space left for the two or three packages I build from the AUR.
That is true, but it is of course slower. And the goal of this thread is faster, not better, compression.
I/O is not the bottleneck of xz at higher presets; your CPU is. It might even be slower, because the filesystem compresses the data too.
The same computer, but with a little more going on, so it is not really comparable to the other tests.
With option -6 on the same tar file: 1:29.22 in RAM, 1:31.80 on my SSD, and again 1:32.70 on the HDD. I would say it is nearly the same.
If you build AUR packages, first try building in RAM. Normally you can use your /tmp, which is mounted as tmpfs.
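With makepkg, one way to do that (a sketch; the exact path is just an example) is to set `BUILDDIR` in makepkg.conf:

```shell
# /etc/makepkg.conf (or a per-user makepkg.conf)
# Build in /tmp, which is tmpfs (RAM-backed) on most Arch-based systems.
# Make sure the build fits, since tmpfs is limited by available memory/swap.
BUILDDIR=/tmp/makepkg
```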
If you only want to install these packages and don’t want to keep them or use them on another computer, don’t compress at all.
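With makepkg this means setting `PKGEXT` to a plain tar extension (sketch, assuming a pacman/makepkg setup):

```shell
# /etc/makepkg.conf
# '.pkg.tar' = no compression; pacman installs uncompressed packages fine.
PKGEXT='.pkg.tar'
```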