From my research, it seems to be a problem with the core utilities, but I could not tell you which one.
If I prefix the command with sudo, I do not get this error message: "df: /run/user/1000/doc: Operation not permitted"
The /run directory hierarchy exists on a tmpfs, and thus in virtual memory. This means that the hierarchy must be recreated at every boot, and this in turn includes setting up the permissions for the directories in that hierarchy.
Now, for some reason, /run/user/1000/doc is always created with the read permission removed for the owner, user 1000, which is you (or, otherwise put, the first non-privileged user account created during installation).
We, i.e. the members of the community, have been searching ourselves silly for a long time already for whatever it is that requires this directory to exist with those permissions, and so far we still haven't found it.
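For what it's worth, if the error message itself is the nuisance, df can be told to skip that mount. On most systems the mount at /run/user/1000/doc is of type fuse.portal, so, assuming that holds for you:

df -h -x fuse.portal           # exclude the offending FUSE mount from the output
sudo ls -ld /run/user/1000/doc # as root, you can still inspect its odd permissions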
OK, just to clarify for a non-computer-scientist/programmer: my system is not broken? So I do not have to revert using Timeshift or reinstall Manjaro?
It was patched in Red Hat / Fedora for coreutils version 8.32, but perhaps vanilla upstream won't get the fix until later, which includes those who use Arch and Manjaro.
This announcement comes over a year and a half after their previous release, 8.32.
UPDATE: The way I'm seeing things, it appears that it's been patched by Red Hat / Fedora and Ubuntu for their own packages. I'm not sure if it went further upstream, or if it even matters for version 9+. I don't believe Arch Linux package maintainers patch vanilla software except for urgent reasons. After all, Arch's coreutils hasn't been updated since April 2020, only one month after the upstream release of 8.32.
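If you want to check which coreutils build you are actually running, df itself reports it, and on Arch/Manjaro you can also query the package:

df --version | head -n 1                 # df is part of coreutils, so this shows its version
pacman -Qi coreutils | grep '^Version'   # installed package version on Arch/Manjaro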
From the df output? Which part of the forum should I post it on?
When I installed Manjaro KDE I only made:
a boot/efi partition
a root/home partition
a swap partition
What are these "partitions" that have quite a lot of space allocated to them? I'm pretty sure it's normal, because my virtual machine has them as well, and there I separated the / (root) and /home partitions.
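If it helps to tell them apart: those extra entries in a default df listing are usually virtual tmpfs mounts, not real partitions. lsblk lists only real block devices, and df can be told to hide the in-memory file-systems (standard options, nothing Manjaro-specific):

lsblk                          # real disks and their partitions only
df -h -x tmpfs -x devtmpfs     # df output without the virtual tmpfs mounts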
Thank you for such quick and easy-to-understand answers!
The reason I was asking is that the df output does not make sense to me.
This is a 1 TB NVMe drive (nvme0n1p2), with roughly 931.5 GB of usable space.
It says the size, used, and free space are:
Disk size = 882 GB
Used space = 300 GB
Free space = 538 GB
Swap = 32 GB (it does not show my swap partition, for some reason)

300 GB + 538 GB + 32 GB = 870 GB → how big the disk should be according to these numbers.
882 GB - 300 GB - 538 GB - 32 GB = 12 GB → difference between the reported disk size and used + free + swap added together.
882 GB - 300 GB - 32 GB = 550 GB → difference between the reported disk size and used space plus swap added together.
931.5 GB - 882 GB = 49.5 GB → difference between the actual size and what df is reporting.
931.5 GB - 300 GB - 32 GB = 599.5 GB → difference between the actual disk size and what df is reporting as used space plus my swap partition.
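Just to double-check the sums in plain shell arithmetic (using the rounded figures above):

echo $(( 300 + 538 + 32 ))    # 870 → used + free + swap
echo $(( 882 - 300 - 538 ))   # 44  → size - used - free; swap is left out, since it is a separate partition and not part of the 882 GB file-system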
I know my thinking/reasoning behind this question is probably way off from how this actually works!
Should I not have a minimum of at least 12 GB-ish more free space (best-case scenario),
or a minimum of at least 61.5 GB-ish more free space (worst-case scenario)?
If I'm so far out in the field that it's not funny, just tell me and I will go back to studying and researching this on the interweb.
There's a lot that might factor into it, but it's a discussion for elsewhere (a new thread), and it might not do too well in here (this is mostly a support forum).
Unless, that is, you're wondering why you're missing space and how to "reclaim" it.
It depends on whether the program reports in binary or decimal units. Drive manufacturers advertise in decimal, while Linux (usually) reports in binary.
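As a quick illustration: 1 TB in the manufacturer's decimal units comes out to about 931 GiB in binary units, and df itself can report either way:

echo $(( 10**12 / 2**30 ))    # 931 → one decimal terabyte expressed in GiB
df -h /                       # human-readable, binary units (powers of 1024)
df -H /                       # human-readable, decimal units (powers of 1000)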
Ext4 reserves a portion of the file-system as superuser reserved blocks (to mitigate fragmentation and to prevent the system from becoming unbootable by hitting 100% capacity). I'm not sure how relevant that is today (fragmentation is not an issue for SSDs), but, for example, the default when creating a new Ext4 file-system is to reserve 5% for this superuser block space.
Five percent of a 1 TB disk is a lot of space, if you ask me. It also lines up with your numbers: 5% of the 882 GB reported size is roughly 44 GB, which is exactly the gap between the reported size and used + free (882 - 300 - 538 = 44 GB).
One percent is much saner, or even zero percent if the file-system is used strictly for archival storage.
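To put numbers on that, using the 882 GB file-system size from the df output above (awk is only doing the floating-point arithmetic here):

awk 'BEGIN { print 882 * 0.05 }'   # 44.1 → GB reserved at the default 5%
awk 'BEGIN { print 882 * 0.01 }'   # 8.82 → GB reserved at 1%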
We should probably leave it at that. Going off topic now.
I see, thank you for educating me. I try to learn Linux as much as I can, and as fast as I can.
The reason I was asking about this was that I wanted to know how, or whether, it was possible to reclaim it. This is not a huge deal, but if I can get an extra 50 GB of storage I would take it :) I'll open a new topic for this.
It's possible to use tune2fs to change this reservation. However, I'm not sure whether it's safe to run on a mounted file-system, so you might have to reboot into a live USB session and issue:
sudo tune2fs -m 1 /dev/nvme0n1p2
The "-m 1" means 1%. Upon creation of a new Ext4 file-system, mkfs.ext4 defaults to "-m 5" (hence 5%).