Df: /run/user/1000/doc: Operation not permitted

I’m not sure this is the right place to post this question/error report. Please feel free to move it if it’s the wrong one!

When I use the df command, I get this error message at the top, which I have never had before:

df: /run/user/1000/doc: Operation not permitted
Filesystem      Size  Used Avail Use% Mounted on
dev              16G     0   16G   0% /dev
run              16G  2.0M   16G   1% /run
/dev/nvme0n1p2  882G  300G  538G  36% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G   60M   16G   1% /tmp
/dev/sda1       110G   32G   73G  31% /mnt/Adata
/dev/nvme0n1p1  300M  288K  300M   1% /boot/efi
/dev/loop4       56M   56M     0 100% /var/lib/snapd/snap/core18/2128
/dev/loop0       33M   33M     0 100% /var/lib/snapd/snap/snapd/12704
/dev/loop3       62M   62M     0 100% /var/lib/snapd/snap/authy/6
/dev/loop1       66M   66M     0 100% /var/lib/snapd/snap/gtk-common-themes/1515
/dev/loop2      165M  165M     0 100% /var/lib/snapd/snap/gnome-3-28-1804/161
tmpfs           3.2G   80K  3.2G   1% /run/user/1000

From my research, it seems to be a problem with coreutils, but I could not tell you what, or which part.
If I prefix the command with sudo, I do not get the error message "df: /run/user/1000/doc: Operation not permitted":

Filesystem      Size  Used Avail Use% Mounted on
dev              16G     0   16G   0% /dev
run              16G  2.0M   16G   1% /run
/dev/nvme0n1p2  882G  300G  538G  36% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G   61M   16G   1% /tmp
/dev/sda1       110G   32G   73G  31% /mnt/Adata
/dev/nvme0n1p1  300M  288K  300M   1% /boot/efi
/dev/loop4       56M   56M     0 100% /var/lib/snapd/snap/core18/2128
/dev/loop0       33M   33M     0 100% /var/lib/snapd/snap/snapd/12704
/dev/loop3       62M   62M     0 100% /var/lib/snapd/snap/authy/6
/dev/loop1       66M   66M     0 100% /var/lib/snapd/snap/gtk-common-themes/1515
/dev/loop2      165M  165M     0 100% /var/lib/snapd/snap/gnome-3-28-1804/161
tmpfs           3.2G   80K  3.2G   1% /run/user/1000

I got it after installing rmlint and Czkawka. I installed rmlint via Add/Remove Software and downloaded the AppImage of Czkawka from here: " GitHub - qarmin/czkawka: Multi functional app to find duplicates, empty folders, similar images etc. " I downloaded Czkawka because fslint said it was more up to date and regularly maintained.

Is this error a common thing, or did these two packages break something on my system?

I’m afraid it’s a common (and old) phenomenon.

The /run directory hierarchy exists on a tmpfs, and thus in virtual memory. This means that the hierarchy must be recreated at every boot, and this in turn includes setting up the permissions for the directories in that hierarchy.

Now, for some reason, /run/user/1000/doc is always created with the read permission removed for the owner ─ user 1000, which is you (or otherwise put, the first non-privileged user account created during installation).

We ─ i.e. the members of the community ─ have been searching ourselves silly for a long time already for what it is that requires this directory to exist, and with those permissions. And so far we still haven’t found it. :frowning:

:man_shrugging:

It’s an upstream bug: 1913358 – df: /run/user/1000/doc: Operation not permitted


To be clear, it’s harmless. An aesthetic annoyance, but everything’s working normally.


Update: winnie just answered this question.

Ok, just to clarify for a non-computer-scientist/programmer: my system is not broken? So I do not have to revert using Timeshift or reinstall Manjaro?

thank you for such a quick answer!


Nope, you’re all good.

It was patched in Red Hat / Fedora for coreutils version 8.32, but vanilla upstream may not get the fix until later, which includes those of us who use Arch and Manjaro.


winnie, thank you for clarifying it for a dummy!

Well hot dog, a brand spanking new release, version 9, is (supposedly) right around the corner! :slightly_smiling_face:


Written by the coreutils maintainer just last week:

The announcement comes over a year and a half after their previous release, 8.32.


UPDATE: The way I’m seeing things, it appears it’s been patched by Red Hat / Fedora and Ubuntu for their own packages. I’m not sure if the fix went further upstream, or if it even matters for version 9+. :man_shrugging: I don’t believe Arch Linux package maintainers patch vanilla software except for urgent reasons. After all, Arch’s coreutils hasn’t been updated since April 2020, only one month after the upstream release of 8.32. :confused:

“The Arch way” or something like that.

wow, so either it has not been a priority or it’s been a tricky thing to fix. hehe

If I wanted to ask questions about

dev              16G     0   16G   0% /dev
run              16G  2.0M   16G   1% /run
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G   61M   16G   1% /tmp

from the df output, which part of the forum should I post them in?

When I installed Manjaro KDE, I only made a

  • boot/efi partition
  • root/home partition
  • swap partition

What are these “partitions” that have quite a lot of space allocated to them? I’m pretty sure it’s normal, because my virtual machine has them as well, and there I separated the / (root) and /home partitions.

/tmp uses RAM, not disk. (Well, technically it can spill over into swap as well.)
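You can check what backs a single path by pointing df directly at it; on a system like the one above, /tmp shows up as tmpfs (RAM-backed):

```shell
# Ask df about one specific path; the Filesystem column
# shows what actually backs it (tmpfs here means RAM)
df -h /tmp
```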

You can suppress tmpfs entries with the -x flag:

df -x tmpfs
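If you want an even tidier overview, -x can be repeated to hide several pseudo-filesystem types at once (which types exist varies by system, so treat this as a sketch):

```shell
# Human-readable sizes, hiding RAM-backed mounts
df -h -x tmpfs -x devtmpfs
```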

For future reference, maybe under this same subforum, as a new thread.

Thank you for such quick and easy-to-understand answers!

The reason I was asking is that the df output does not make sense to me.
This is a 1TB nvme0n1p2 drive (usable space 931.5GB-ish).

It says Size, Used, and Avail =
Disk size = 882GB
Used space = 300GB
Free space = 538GB
Swap = 32GB (it does not show my swap partition for some reason)

300GB + 538GB + 32GB = 870GB → How big the disk size should be according to these numbers.

882GB - 300GB - 538GB - 32GB = 12GB → Difference between reported disk size and adding used, available, and swap disk size together.

882GB - 300GB - 32GB = 550GB → Difference between actual disk size and adding used and swap space together.

931.5GB - 882GB = 49.5GB → Difference between actual size and what “df” is reporting

931.5GB - 300GB - 32GB = 599.5GB → Difference between what actual disk size is and what “df” is reporting as used space plus my swap partition.

I know I’m probably way off, on how this actually works with my thinking/reason behind this question! :thinking:

Should I not have at least 12GB-ish more free space (best case scenario),
or at least 61.5GB-ish more free space (worst case scenario)?

If I’m so far out on the field that it’s not funny just tell me and I will go back to studying and researching this on the interweb. :joy:

There’s a lot that might factor into it, but it’s a discussion for elsewhere (a new thread), and might not do too well in here (mostly a support forum).

Unless you’re wondering why you’re missing space and how to “reclaim” it. :stuck_out_tongue:


It depends on whether the program reports in binary or decimal. Drive manufacturers advertise in decimal, while Linux reports in binary (usually).
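That difference alone accounts for most of the gap: a drive sold as "1 TB" holds 10^12 bytes (decimal), which a tool reporting binary units shows as roughly 931 GiB:

```shell
# 1 TB (decimal) expressed in binary GiB: divide by 1024^3
echo $(( 1000000000000 / 1024 / 1024 / 1024 ))   # → 931
```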

Ext4 reserves a portion for superuser reserved blocks (to mitigate fragmentation and to prevent the system from becoming unbootable due to hitting 100% capacity). Not sure how relevant that is today (fragmentation is not an issue for SSDs), but for example, the default parameter upon creating an Ext4 file-system is to reserve 5% for this superuser block space.

Five percent of a 1 TB disk is a lot of space, if you ask me.

One percent is much more sane, or even zero percent if it’s used strictly for archival storage purposes.
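As a rough back-of-the-envelope check (using the 882G root partition from the df output above), here is what those percentages work out to:

```shell
# Default 5% reserve on an 882 GiB partition
echo $(( 882 * 5 / 100 ))   # → 44 GiB kept back for root
# The same partition with a 1% reserve
echo $(( 882 * 1 / 100 ))   # → 8 GiB
```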


We should probably leave it at that. :wink: Going off topic now.

I see, thank you for educating me. I try to learn Linux as much as I can and as fast as I can :slight_smile:
The reason I was asking about this was that I wanted to know how to reclaim it, or if it was even possible to reclaim it. This is not a huge deal, but if I can get an extra 50GB of storage I would take it. :slight_smile: I’ll open a new topic for this.

It’s possible to use tune2fs to change this reservation. However, I’m not sure if it’s safe to run on a mounted file-system. You might have to reboot into a live USB session and issue,

sudo tune2fs -m 1 /dev/nvme0n1p2

The "-m 1" means 1%. Upon creation of a new Ext4 file-system, mkfs.ext4 defaults to "-m 5" (hence 5%).
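If you want to see the effect without touching a real partition, you can run the same commands against a throwaway image file first; mkfs.ext4 and tune2fs both accept a plain file, no root required (the /tmp/scratch.img path is just an example):

```shell
# Create a 64 MiB scratch file and format it as Ext4 (default "-m 5")
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=64
mkfs.ext4 -q -F /tmp/scratch.img

# Show the reserved block count at the default 5%
tune2fs -l /tmp/scratch.img | grep -i 'reserved block count'

# Lower the reserve to 1% and check again
tune2fs -m 1 /tmp/scratch.img
tune2fs -l /tmp/scratch.img | grep -i 'reserved block count'
```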


This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.