Swap consumed despite swappiness=0

Prequel: After reading a lot of pros and cons, I set up a small swap partition of only 1 GB (the system has 16 GB RAM) for some possible advantages I don’t remember any more. I think the argument at the time was that a small swap partition (not sized for suspend-to-disk) does no harm even when you have enough RAM, and if something goes awry and eats RAM to the end, you may have some time to intervene while the system continues on to eat swap.

Issue: Presently, the swap partition just “dribbles” full and the system becomes sluggish. I have set the swappiness to 1 (didn’t help) and now even to 0, but even with a swappiness of zero, the effect of swap slowly filling up (and the system then behaving sluggishly) persists. Why? And how can I identify which process is filling up swap instead of using RAM?

My actual workaround is to just swapoff -a (and maybe I will simply disable swap entirely). With swap turned off I do not detect anything progressively eating more RAM, so whatever was dribbling the swap full apparently does not do the same to RAM. Any ideas?
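For reference, a quick way to check the swap totals before and after turning swap off (this assumes a standard Linux /proc layout; the commands below are the stock util-linux/coreutils tools, not anything Manjaro-specific):

```shell
# Current swap totals from the kernel's own accounting
grep -E '^(SwapTotal|SwapFree)' /proc/meminfo

# Turn off all swap devices/files listed in /etc/fstab (needs root);
# pages currently in swap are migrated back into RAM first
sudo swapoff -a

# Afterwards SwapTotal should read "0 kB"
grep '^SwapTotal' /proc/meminfo
```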

Never set swappiness=0 - it is much better to simply run sudo swapoff /dev/sdy. From a system perspective a little swap is always better than none, as this little swap may be the fine line between a complete crash with no trace of what caused it and a more gentle breakdown where you get to rescue your unsaved work.

If you choose to disable swap entirely, remember to remove the entry from fstab and change the swap partition’s type to anything but Linux swap.
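A minimal sketch of those two steps, assuming the fstab entry can be recognized by the word swap in its mount-point field, and using sfdisk to change the type (/dev/sdy and the partition number 1 are placeholders, matching the placeholder device above):

```shell
# 1) Comment out the swap entry in /etc/fstab (keep a backup first)
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)$|#\1|' /etc/fstab

# 2) Change the partition type away from "Linux swap"
#    (83 = plain Linux data on an MBR disk; adjust device/number to yours)
sudo sfdisk --part-type /dev/sdy 1 83
```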

The answer to your last question is not easy, as there are different methods to find the exact answer.

This search using my favorite search engine has several suggestions

  • I agree with everything Linux-aarhus already mentioned. Furthermore:

  • Having no swap is indeed not advisable, as the system will lock up when running out of RAM, but if that’s what you want, that’s what you’ll get and rebooting / shutting down daily is the mantra.

  • If you want to have a “one size fits most” formula:

    • If you don’t use hibernation AND RAM>2GB:

      swap = SQRT(RAM) 
    • If you do use hibernation OR RAM <= 2GB:

      swap = RAM + SQRT(RAM)
    • If you don’t want any swap:

      RAM = 2*MAX(RAM_used)
  • If you just want to find out what exactly is being swapped out (and might be disabled on boot), the for loop below is the one you should be using; just post its output:

    # Collect the process name and VmSwap value from every /proc/<pid>/status,
    # then sort by swap usage (kB, descending) and show the top 25
    for szFile in /proc/*/status ; do
      awk '/VmSwap|Name/{printf $2 "\t" $3}END{ print "" }' "$szFile"
    done | sort --key 2 --numeric --reverse | head --lines=25
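The “one size fits most” formulas above can be sketched as a quick shell calculation (the 16 GiB value is just this thread’s example; no hibernation assumed):

```shell
# sqrt(RAM) rule for a non-hibernating machine, RAM given in GiB
ram_gib=16
swap_gib=$(awk -v r="$ram_gib" 'BEGIN { printf "%d", sqrt(r) }')
echo "RAM: ${ram_gib} GiB -> suggested swap: ${swap_gib} GiB"   # 4 GiB here

# With hibernation the rule becomes RAM + sqrt(RAM)
hib_gib=$(awk -v r="$ram_gib" 'BEGIN { printf "%d", r + sqrt(r) }')
echo "RAM: ${ram_gib} GiB -> hibernation swap: ${hib_gib} GiB"  # 20 GiB here
```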


P.S. With 16 GB of RAM you should be looking into zswap
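To see whether zswap is already active, you can check the sysfs path below (it exists on kernels with zswap built in; the boot parameters in the comment are the standard zswap options, shown only as a sketch):

```shell
# "Y" means zswap is compressing pages before they hit the swap device
cat /sys/module/zswap/parameters/enabled 2>/dev/null \
  || echo "zswap parameters not exposed on this kernel"

# To enable it at boot, append to the kernel command line, e.g. via GRUB:
#   zswap.enabled=1 zswap.max_pool_percent=20
```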


Thanks, this is helpful! I will definitely consider this information when setting up my next system.
I also found a nice command-line program named smem which gives very nice info, including who is using how much swap. Furthermore, I added to the 100-manjaro.conf the line:
So for now I keep my tiny swap partition and have set a low swappiness value and an increased cache pressure. It seems to work well.
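A sketch of that combination for anyone else landing here: smem for the per-process swap view (the smem package must be installed) and sysctl for the two knobs at runtime. The exact values below are assumptions for illustration, not the poster’s settings:

```shell
# Per-process swap usage, biggest consumers first (requires the smem package)
smem --sort swap --reverse | head --lines=15

# Low swappiness plus a raised cache pressure, applied live (not persistent;
# persist by putting the same keys in a file under /etc/sysctl.d/)
sudo sysctl vm.swappiness=1
sudo sysctl vm.vfs_cache_pressure=150
```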


That’s good, but this is better:

I have all my swap and cache settings in their own conf file:

cat /etc/sysctl.d/30-swap_usage.conf 
# Fabby: 2014-03-02: change "swappiness" from default 60 to 10 (theoretically only when RAM usage reaches around 80 or 90 percent)
# Fabby: 2014-03-29: lower to 5 as swapping is still occurring with low mem usage
# Fabby: 2014-11-21: Bring back up to 10 as vm.vfs_cache_pressure was introduced
vm.swappiness = 10

# Fabby: 2014-11-29: Lower vm.vfs_cache_pressure to 75%
# (once cached, probably not immediately needed any more)
# This percentage value controls the tendency of the kernel to reclaim
# the memory which is used for caching of directory and inode objects.
# At the default value of vfs_cache_pressure=100 the kernel will attempt to
# reclaim dentries and inodes at a "fair" rate with respect to pagecache and
# swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
# to retain dentry and inode caches.
vm.vfs_cache_pressure = 75
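After saving such a file, the settings can be applied without a reboot; sysctl --system re-reads /etc/sysctl.d/ and the other drop-in directories:

```shell
# Re-read all sysctl drop-in files, then verify the live values
sudo sysctl --system
sysctl vm.swappiness vm.vfs_cache_pressure
```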


P.S. Obsolete kernel parameters like read_ahead_buffers have been deleted from this example as they’re historical by now…