Manjaro offers RT kernels, so I’m wondering why people use RT kernels on a desktop. I was always under the impression that these RT kernels were meant mainly for embedded systems, which need to finish tasks and jobs on time, on the order of milliseconds or maybe microseconds, with minimal jitter. But this doesn’t really apply to desktop usage, does it?
Is there any particular advantage to using RT kernels for certain applications that people need a desktop for? The only thing I can think of is multimedia and audio, but I personally found the regular kernel good enough for these things.
RT kernels are specifically intended for professional audio and video production, where any kind of latency is undesirable. For anything else, stick to a regular kernel, and unless you’ve got bleeding-edge hardware requiring the latest stable kernel, it is recommended to use one of the LTS kernels.
Note: The upcoming 6.12 kernel will become the newest addition to the LTS range as soon as 6.13 is declared stable, and as of 6.12, real-time functionality is part of all kernels, although it must be enabled by way of a boot parameter. In other words, from 6.12 onward, upstream will no longer be issuing separate RT kernels.
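If you’re curious whether the kernel you’re currently booted into has real-time preemption active, a quick check like the one below should do it. It just inspects the same version string that `uname -v` prints, which on PREEMPT_RT kernels usually contains a `PREEMPT_RT` marker; take it as illustrative rather than authoritative, since distro kernels may label themselves differently:

```c
/* Illustrative sketch: check for a PREEMPT_RT marker in the kernel
 * version string (the same string `uname -v` reports).
 * Compile with: gcc -o rtcheck rtcheck.c
 */
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }
    printf("kernel: %s %s\n", u.release, u.version);
    if (strstr(u.version, "PREEMPT_RT"))
        printf("this kernel is running with real-time preemption\n");
    else
        printf("no PREEMPT_RT marker found\n");
    return 0;
}
```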
No, because too many things in the kernel are timing-sensitive. The RT kernels disrupt the kernel’s normal scheduling strategies, and as such they may introduce instability.
You really need to understand a lot about Linux scheduling to make it worthwhile. Whatever you gain by tuning one application for performance has to be taken away from everything else.
All the regular kernels use CFS (the Completely Fair Scheduler). With the RT kernel and its preemptible scheduling, you can have cores do very different things, handle interrupts the way you want, and much more. But all of this has to be configured by you.
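To give an idea of what “configured by you” means in practice, here is a minimal userspace sketch, assuming Linux and root (or CAP_SYS_NICE), that pins a process to one core and asks for a SCHED_FIFO real-time priority. The core number and priority are arbitrary choices for the example:

```c
/* Minimal sketch: pin the current process to one core and give it a
 * real-time FIFO priority. Needs root or CAP_SYS_NICE.
 * Compile with: gcc -o rtpin rtpin.c
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Pin this process to CPU core 2 (arbitrary choice for the example). */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* Request the SCHED_FIFO policy with a mid-range RT priority. */
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 50;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");   /* fails without privileges */
        return 1;
    }

    /* Lock memory so page faults can't add unbounded latency. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    printf("running SCHED_FIFO prio 50 on core 2\n");
    return 0;
}
```

Note that a SCHED_FIFO task will starve everything else on that core until it yields, which is exactly the kind of trade-off mentioned above.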
Audio and video production was used as an example, but I think telecommunications was the first primary driver. It extends into many other industries as well: manufacturing, aerospace, military, and more.
In a lot of real-life scenarios, an application might see 100 μs of latency where an RT kernel could get it under 2 μs. That one application is now 50x more responsive, but the rest of the OS may run a little differently, and on a desktop it could introduce massive stuttering.
For a regular user without a specific use case, it will probably be the opposite: despite the name suggesting that “real time” means “faster”, some operations can actually become slower, because the kernel trades overall throughput for predictable latency.
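For context, latency figures like the ones above are usually measured with a cyclictest-style loop: sleep until an absolute deadline, then record how late the wakeup actually was. Here is a rough sketch of that idea; the numbers it prints depend entirely on your kernel and hardware:

```c
/* Rough sketch of a cyclictest-style wakeup-latency measurement:
 * sleep until an absolute deadline, then see how late we actually
 * woke up. Assumes Linux; link with -lrt on older glibc.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 1000000L  /* 1 ms cycle */
#define CYCLES    1000

static int64_t ts_diff_ns(struct timespec a, struct timespec b)
{
    return (int64_t)(a.tv_sec - b.tv_sec) * 1000000000L
         + (a.tv_nsec - b.tv_nsec);
}

int main(void)
{
    struct timespec next, now;
    int64_t worst = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < CYCLES; i++) {
        /* Advance the absolute deadline by one period. */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);

        int64_t lat = ts_diff_ns(now, next);  /* how late we woke up */
        if (lat > worst)
            worst = lat;
    }
    printf("worst-case wakeup latency: %" PRId64 " ns\n", worst);
    return 0;
}
```

Run it once on a stock kernel and once on an RT kernel (ideally combined with a SCHED_FIFO priority, as in the earlier sketch), and the difference in the worst-case number is exactly what the RT kernel is buying you.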