Btrfs hard disk full but still free disk space

Hello,
I have the issue that “sudo df -h” lists my /home partition with a size of 326G and 271G used, but 0 available. The usage is shown as 100%, and all apps are also complaining about 0 bytes left on the disk.
But obviously this should not be the case. It also seems like there is a point where the free space jumps from about 30G to 0 more or less instantly (I think previously I was able to get back to 30G free by removing some small files or restarting the system). I have now removed some more data, so at least 50G should be unused, but this time that space is not listed as free (e.g. Thunar is still complaining about 0 bytes free).
I already restarted the PC, but this also didn’t help (so I assume there are no hanging space reservations from some faulty application?).
Do you have any suggestions what I could try? With tools like baobab or ncdu (run as root) I didn’t find any big files taking up that space (and, as the df numbers imply, the space should not actually be in use).


Ah… it fixed itself again (while I was writing this post) and now lists 56G as free and 83% usage.
Do you have any idea what could cause this issue?

Edit:
Mount options for the partition:
btrfs defaults,noatime,space_cache,ssd,compress=zstd,commit=120 0 2
which the mount command shows as rw,noatime,compress=zstd:3,ssd,discard=async,space_cache,commit=120,subvolid=5,subvol=/

Edit2:
And… it’s gone again: 0 bytes free (it seems to have happened instantaneously again while I was updating this post with the mount options), but still only 271G of 326G in use.

First of all, don’t use df on btrfs, because it won’t give you the correct numbers due to btrfs using copy-on-write and inline compression. Use “btrfs filesystem” instead. See… :point_down:

man btrfs-filesystem

Note: The man pages for btrfs are all named with two words connected by a hyphen, but the command to use at the shell prompt is “btrfs” with the second word as a separate subcommand. :wink:
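For example, a minimal check could look like this (assuming your /home filesystem is mounted at /home; adjust the path to whatever you actually use):

sudo btrfs filesystem usage /home
sudo btrfs filesystem df /home

These report how much space btrfs has actually allocated, which is the number that matters here.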

Secondly, the scenario you’re describing is a result of the way btrfs allocates space. It divides the available space into zones (“chunks” or “block groups”) of, I believe, 1 GiB each. Under certain circumstances this can lead to the whole device being allocated to such zones, so that new ones can no longer be created, while you still have ample unused space inside the existing zones on your drive.

The solution is to balance the filesystem, for which there is the command “btrfs balance” — see… :point_down:

man btrfs-balance

Again, there is a hyphen in the name of the command for viewing the man page, but the command at the prompt is without the hyphen.

NAME
       btrfs-balance - balance block groups on a btrfs filesystem

SYNOPSIS
       btrfs balance <subcommand> <args>

DESCRIPTION
       The primary purpose of the balance feature is to spread block groups across all devices so they match constraints defined by the respective profiles. See mkfs.btrfs(8) section PROFILES for more details. The scope of the
       balancing process can be further tuned by use of filters that can select the block groups to process. Balance works only on a mounted filesystem. Extent sharing is preserved and reflinks are not broken. Files are not
       defragmented nor recompressed, file extents are preserved but the physical location on devices will change.

       The  balance  operation is cancellable by the user. The on-disk state of the filesystem is always consistent so an unexpected interruption (e.g. system crash, reboot) does not corrupt the filesystem. The progress of the balance
       operation is temporarily stored as an internal state and will be resumed upon mount, unless the mount option skip_balance is specified.

[...]
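As a rough sketch, a first balance run with usage filters could look like this (the /home mount point and the 50% threshold are just assumptions; the filters only touch block groups that are less than 50% used):

sudo btrfs balance start -dusage=50 -musage=50 /home
sudo btrfs balance status /home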

Ah, thank you. I will look into that balancing.
It is literally alternating right now between about 56GB and 0 bytes before my eyes, between F5 presses in Thunar. :-/
Is there a command that shows if the system is unbalanced or if there are any zones that require this balancing?

By the way: btrfs filesystem df also lists 266GiB used of 321GiB total.

man btrfs-filesystem

Look at the show and usage subcommands, and when checking the output of the commands, look at the amount of allocated versus unallocated space. :wink:
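For example (the /home path is again only an assumption):

sudo btrfs filesystem show /home
sudo btrfs filesystem usage /home

In the usage output, compare the “Device allocated” and “Device unallocated” lines; when unallocated drops to almost nothing, you get exactly the behaviour you’re seeing.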

There’s also a very interesting wiki page about btrfs maintenance:

Take a look especially at the yellow text boxes.


Try sudo btrfs filesystem usage to see what is using all of the space.

I recommend reading:

and

:footprints:


Argh… yes. That looks bad.
The metadata seems to be full (apart from the 512MiB global reserve), and also (or maybe because of this) there is no unallocated space left at all.

Data, single: total=321.55GiB, used=266.15GiB
System, single: total=4.00MiB, used=64.00KiB
Metadata, single: total=4.01GiB, used=3.50GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
    Device size:		 325.56GiB
    Device allocated:		 325.56GiB
    Device unallocated:		   4.00KiB
    Device missing:		     0.00B
    Device slack:		   3.50KiB
    Used:			 269.64GiB
    Free (estimated):		  55.41GiB	(min: 55.41GiB)
    Free (statfs, df):		  55.41GiB
    Data ratio:			      1.00
    Metadata ratio:		      1.00
    Global reserve:		 512.00MiB	(used: 0.00B)
    Multiple profiles:		        no

Unfortunately the balancing also doesn’t work, as it fails with “No space left on device”.
Hmm. I will try to remove or move more data and then balance again. I will also look through the articles you linked. FYI: a balance with -musage=0 or -dusage=0, as suggested on some pages, “worked” without complaining about a full disk, but it didn’t do (or fix) anything (0 chunks relocated).
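For reference, the commands I ran were roughly the following (/home here stands in for my actual mount point). As far as I understand, usage=0 only reclaims completely empty block groups, which would explain why nothing was relocated:

sudo btrfs balance start -musage=0 /home
sudo btrfs balance start -dusage=0 /home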

EDIT:
Ok… again it seems to have “fixed” itself. After about 30 minutes in which I didn’t do anything, I now have 5GiB of metadata allocated in total… and I was able to run a balance. About 20GiB are unallocated again. Hmm… it also shrank the “Data, single” total a bit. I think I will play around with this a bit. Thank you so far.

Data, single: total=300.55GiB, used=265.98GiB
System, single: total=32.00MiB, used=64.00KiB
Metadata, single: total=5.01GiB, used=3.49GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
    Device size:		 325.56GiB
    Device allocated:		 305.59GiB
    Device unallocated:		  19.97GiB
    Device missing:		     0.00B
    Device slack:		   3.50KiB
    Used:			 269.47GiB
    Free (estimated):		  54.54GiB	(min: 54.54GiB)
    Free (statfs, df):		  54.54GiB
    Data ratio:			      1.00
    Metadata ratio:		      1.00
    Global reserve:		 512.00MiB	(used: 0.00B)
    Multiple profiles:		        no

EDIT 2:
After running the balance again, the 5GiB of metadata went back down to 4GiB… I don’t know why it did that, but I hope this won’t lead straight back to the same issue.

Which parameters (-musage, -dusage) have you used for the balance command?

Metadata size is dynamic; it expands when needed. Your real problem, I think, is the “Device allocated” size.

There is something odd!

I use RAID myself and can’t prove it, but as far as I can remember,
the metadata ratio has ALWAYS been 2.00 for me (even without RAID)!

    Data ratio:			     1.00
    Metadata ratio:		     2.00

1.00 is not the default and in my opinion quite risky.
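If I remember correctly, the 2.00 metadata ratio comes from the dup metadata profile, and a balance with a convert filter can switch to it. A sketch would be something like this (the path is just an example, and only do it if you really want duplicated metadata, since it uses extra space):

sudo btrfs balance start -mconvert=dup /home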

The following lines are particularly important:

Always keep at least 5GB unallocated (for balance to be able to work)

This means the file system is 83% full. If it is 80% or higher, you should think about making changes. btrfs needs the 20% to breathe. If it is missing, it will lead to serious problems sooner or later.

:footprints:


I was running with -musage=50 and -dusage=50. (Before that I tried -musage=0 and -dusage=0)

Yes, I know that it should not be above 80%. I am already trying to reduce it. :wink:

  • Then make a second run with -dusage=75, then another with 90 (see the example after this list).
  • I recently read that it does not seem to be advisable to use -musage on an SSD
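Something along these lines (again assuming /home; adjust the path, and expect the higher thresholds to take noticeably longer):

sudo btrfs balance start -dusage=75 /home
sudo btrfs balance start -dusage=90 /home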

:footprints:


I just realised that my root disk (about 70% full) also had only 1MiB of unallocated space. This time I was lucky that this was apparently enough for the balance to run.
Thank you all. I never realised before that btrfs can require this kind of maintenance from time to time.

I forgot… do you have snapshots on your disk? It may be helpful to delete the ones you no longer need.
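If you do, something like this should list the snapshots and let you remove one (the delete path is only a placeholder; double-check that you really no longer need a snapshot before deleting it):

sudo btrfs subvolume list -s /
sudo btrfs subvolume delete /path/to/old-snapshot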
