Kernel 5.3 more prone to swapping than 5.2 and earlier?

Given that the 5.2 kernel branch is now marked EOL, I've switched to 5.3 as of the most recent major update, and one thing that stands out right away is that my system has now begun swapping far earlier than before, and in far greater quantities too. This machine here has 8 GiB of RAM installed, and I'm only keeping the same applications open as I normally do, which are...

  • KMail
  • KVIrc
  • JuK
  • Dolphin
  • Claws Mail (for Usenet access only)
  • KSysGuard

Apart from the above, my biggest memory hog is most likely the Chromium browser, but I don't have that open all the time.

With the 5.0, 5.1 and 5.2 kernels, I would occasionally get about 256 to 512 KiB of swap (and peaking at about 10 MiB) if I were looking at a few heavy websites ─ e.g. store.kde.org, or a thread with many embedded YouTube videos at one of the forums under my care. But ever since I started using kernel 5.3, I am regularly seeing up to 300 MiB of swap in use ─ right now it's at 221 MiB again, but just a few minutes ago it was over 330 MiB.

Don't get me wrong: the swapping doesn't appear to have any impact on performance that I can tell ─ everything's still snappy as ever. But my swap partition is on an SSD, and therefore I consider swapping undesirable for the lifetime of the SSD. I know, modern SSDs can take quite a beating in terms of their write cycles, but I'd still rather avoid it if possible.
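For anyone wanting to compare numbers, here's a quick sketch of how to see not just the total swap in use but also which processes the swapped pages belong to ─ nothing exotic, it only reads standard /proc files:

```shell
#!/bin/sh
# Overall swap usage from the kernel's accounting
grep -E '^Swap(Total|Free)' /proc/meminfo

# Per-process swap usage (VmSwap), largest first; processes may
# exit mid-scan, so read errors are silenced
for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {name=$2} /^VmSwap:/ {if ($2+0 > 0) print $2 " kB\t" name}' "$f" 2>/dev/null
done | sort -rn | head
```

If nothing is swapped out, the second part simply prints nothing.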

The weird thing is that I have vm.swappiness set to "0" ─ yes, I know that's recommended against, but on a machine with 8 GiB of RAM, it shouldn't pose any real hazard ─ and I also have vm.vfs_cache_pressure set to "200".
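In case it helps anyone double-check their own box: both knobs live under the `vm.` sysctl prefix, so they can be read straight from procfs (the values shown for setting them below are just the ones discussed in this thread, not recommendations):

```shell
#!/bin/sh
# Read the current values straight from procfs; note the sysctl key
# is vm.vfs_cache_pressure (there is no "vfs." prefix)
echo "swappiness:         $(cat /proc/sys/vm/swappiness)"
echo "vfs_cache_pressure: $(cat /proc/sys/vm/vfs_cache_pressure)"

# Changing them at runtime requires root, e.g.:
#   sysctl -w vm.swappiness=0
#   sysctl -w vm.vfs_cache_pressure=200
# To persist across reboots, put them in a file under /etc/sysctl.d/:
#   vm.swappiness = 0
#   vm.vfs_cache_pressure = 200
```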

Did anything change between kernel 5.2 and 5.3 regarding the settings that reduce the tendency to swap, or is there perhaps another player involved ─ (cough) systemd (cough) ─ that decides whether things should be swapped out?

All I can say in that regard is that my previous machine ran PCLinuxOS ─ which of course does not use systemd ─ with only 4 GiB of RAM, and that, with the same applications open as I'm using now, it was extremely rare for me to hit swap on that machine. So why do I now have a quarter of a gigabyte worth of data in my swap, on a machine with twice the amount of RAM that my previous box had? :thinking:

2 Likes

I've noticed the same thing; right now 290 MiB of swap is in use while 12621 MiB of RAM is available (16 GB ThinkPad here).

2 Likes

Is there a specific reason why you use that setting?

On topic: I haven't noticed a difference yet, but my usage has so far been very light. I'll keep an eye on it.

2 Likes

The default is 100; I've simply doubled it. It causes the kernel to release inode/dentry cache data from memory sooner. The lower the setting, the more of that cache it keeps in memory, and thus the more it'll be prone to swapping.

Hmm, well, I prefer a smaller value like 50. :wink:
Isn't the setting more like a ratio? A lower value simply favours the inode/dentry cache over the page cache, and vice versa for a higher value:

From the kernel's vm sysctl documentation:

At the default value of vfs_cache_pressure=100 the kernel will attempt to reclaim dentries and inodes at a "fair" rate with respect to pagecache and swapcache reclaim.
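For the curious, the dentry/inode caches the documentation talks about can actually be watched ─ a small sketch, using only standard procfs files:

```shell
#!/bin/sh
# fs.dentry-state: total dentries, unused (reclaimable) dentries,
# age limit, and internal counters
cat /proc/sys/fs/dentry-state

# Slab caches backing dentries and inodes (reading /proc/slabinfo
# usually requires root, hence the fallback message)
grep -E '^(dentry|inode_cache)' /proc/slabinfo 2>/dev/null \
    || echo "/proc/slabinfo not readable (try as root)"
```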

1 Like

To be honest, I have no idea. I sometimes get the impression that certain documentation was not only written by computer scientists and engineers, but also for computer scientists and engineers. :man_shrugging:

2 Likes

I've noticed different behavior too. In the past the disk cache would shrink much further before swapping started. Now, with the disk cache at 40% on 16 GB, it started swapping like crazy (only putting a few hundred MB in there, but constantly growing and shrinking). Adjusting vm.swappiness didn't have any observable effect either. I'm running without swap for now.
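For anyone wanting to try the same, running without swap is just a session-level toggle ─ a sketch (the actual disabling needs root, so it's shown as a comment):

```shell
#!/bin/sh
# List active swap areas (works unprivileged; /proc/swaps is world-readable)
cat /proc/swaps

# Disable all swap for the current session (root required):
#   swapoff -a
# Re-enable everything listed in /etc/fstab:
#   swapon -a
```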

1 Like

It could be these swap-related patches that were introduced in July, though I'm not sure about it:

mm, swap: fix race between swapoff and some swap operations
mm/swap_state.c: simplify total_swapcache_pages() with get_swap_device()
mm, swap: use rbtree for swap_extent
mm/mincore.c: fix race between swapoff and mincore

Plus many changes to the mm infrastructure.

2 Likes

I've gone back to 5.2 for now. In 5.3, I just have to boot the machine and start copying a large amount of data from A to B; the disk cache fills up and spills over into swap, with swap constantly working and making the whole machine feel very janky. From my testing, 5.4 doesn't look any better so far.
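A rough sketch of that kind of reproduction, for anyone who wants to test their own kernel ─ the file size here is deliberately scaled down, and whether swap actually moves depends entirely on your RAM and workload:

```shell
#!/bin/sh
# Push a burst of data through the page cache, then check whether
# swap usage moved; size is deliberately modest for illustration
dd if=/dev/zero of=/tmp/cachefill bs=1M count=128 status=none
sync
grep -E '^Swap(Total|Free)' /proc/meminfo
rm -f /tmp/cachefill
```

On an actual test you'd use a copy several times larger than RAM and watch SwapFree shrink while it runs.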

1 Like

Try kernel 4.19 LTS and if your hardware works fine with it you can stick with it until at least 2023.

Well, I've gone the other route. I've just upgraded my computer from 8 GiB to 16 GiB of RAM, and now it doesn't swap erratically anymore. :man_shrugging:

I have 16 GB ─ but it™ eats all of it. :neutral_face:

I have considered upgrading to 32 GB, but it feels like overkill. 16 GB should be enough; I usually need swap just for the random, sudden and devastating memory leak once or twice a week that would otherwise kill the system. If the issue doesn't settle, I'll probably have to go down the "double the RAM" route and hope for the best.

1 Like

For anyone still affected by this issue, I started to use zswap. The swap is still working rather continuously on a few hundred MB, but so far it doesn't seem to hit the physical storage as before. I'm cautiously optimistic with this solution.
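In case others want to try the same, a sketch of checking and enabling zswap ─ the parameter names are the standard zswap module parameters, but the compressor and pool-size values below are just illustrative:

```shell
#!/bin/sh
# Show current zswap parameters if the module is present
grep -H . /sys/module/zswap/parameters/* 2>/dev/null \
    || echo "zswap parameters not exposed on this kernel"

# Enable at runtime (root):
#   echo 1 > /sys/module/zswap/parameters/enabled
# Or enable at boot via the kernel command line, e.g.:
#   zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20
```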

1 Like
