How to fix screen tearing without a performance hit?

I've been looking up how to fix system-wide screen tearing lately, and the solution I've seen most often is to force full composition pipeline from nvidia-settings. That isn't usable for me: while it does fix the tearing, it creates a lot of stuttering and roughly halves my performance in games. I've only ever seen this tearing in program windows and games; videos are unaffected as far as I can tell.

Here's my xorg config

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings:  version 430.40

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
    Option         "Xinerama" "0"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Samsung T24D391"
    HorizSync       15.0 - 81.0
    VertRefresh     24.0 - 75.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 970"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "DFP-1"
    Option         "metamodes" "nvidia-auto-select +0+0"
    Option         "SLI" "Off"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection
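
For reference, when I enable force full composition pipeline in nvidia-settings and save the configuration, the metamodes line turns into something like the following (I've since reverted it because of the performance hit):

    Option         "metamodes" "nvidia-auto-select +0+0 {ForceFullCompositionPipeline=On}"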

Please also post the output of inxi -Fxxxza --no-host.

Have you tried the boot option
nvidia-drm.modeset=1
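
One way to set that, assuming you boot with GRUB (the Manjaro default), is to append it to the kernel command line in /etc/default/grub and regenerate the config:

# /etc/default/grub (keep the parameters you already have, just append the new one)
GRUB_CMDLINE_LINUX_DEFAULT="... nvidia-drm.modeset=1"

sudo update-grub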

What about putting
MODULES=(nvidia nvidia_modeset nvidia_uvm nvidia_drm)
in
/etc/mkinitcpio.conf
then running
mkinitcpio -P
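
After a reboot you can verify that modesetting actually got enabled; this should print Y:

cat /sys/module/nvidia_drm/parameters/modeset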

Unfortunately neither of those fixed it. Here's my inxi output anyway:

System:    Kernel: 4.19.66-1-MANJARO x86_64 bits: 64 compiler: gcc v: 9.1.0 
           parameters: BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=5f400f09-16d6-4854-9430-b55c174cf831 rw 
           resume=UUID=914c4ddb-80b9-4486-ad3e-7a32c2ec785b irqpoll 
           Desktop: KDE Plasma 5.16.4 tk: Qt 5.13.0 wm: kwin_x11 dm: SDDM Distro: Manjaro Linux 
Machine:   Type: Desktop System: Gigabyte product: AB350M-HD3 v: N/A serial: <filter> 
           Mobo: Gigabyte model: AB350M-HD3-CF v: se1 serial: <filter> UEFI: American Megatrends v: F4 date: 08/24/2017 
CPU:       Topology: 8-Core model: AMD Ryzen 7 1700 bits: 64 type: MT MCP arch: Zen family: 17 (23) model-id: 1 stepping: 1 
           microcode: 8001126 L2 cache: 4096 KiB 
           flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 svm bogomips: 95860 
           Speed: 1425 MHz min/max: 1550/3000 MHz boost: enabled Core speeds (MHz): 1: 1346 2: 1342 3: 2250 4: 2255 5: 1345 
           6: 1346 7: 1346 8: 1346 9: 1346 10: 1347 11: 1345 12: 1347 13: 2291 14: 2446 15: 1347 16: 1346 
           Vulnerabilities: Type: l1tf status: Not affected 
           Type: mds status: Not affected 
           Type: meltdown status: Not affected 
           Type: spec_store_bypass mitigation: Speculative Store Bypass disabled via prctl and seccomp 
           Type: spectre_v1 mitigation: usercopy/swapgs barriers and __user pointer sanitization 
           Type: spectre_v2 mitigation: Full AMD retpoline, STIBP: disabled, RSB filling 
Graphics:  Device-1: NVIDIA GM204 [GeForce GTX 970] vendor: Gigabyte driver: nvidia v: 430.40 bus ID: 09:00.0 
           chip ID: 10de:13c2 
           Display: x11 server: X.Org 1.20.5 driver: nvidia compositor: kwin_x11 resolution: 1920x1080~60Hz 
           OpenGL: renderer: GeForce GTX 970/PCIe/SSE2 v: 4.6.0 NVIDIA 430.40 direct render: Yes 
Audio:     Device-1: NVIDIA GM204 High Definition Audio vendor: Gigabyte driver: snd_hda_intel v: kernel bus ID: 09:00.1 
           chip ID: 10de:0fbb 
           Device-2: Advanced Micro Devices [AMD] Family 17h HD Audio vendor: Gigabyte driver: snd_hda_intel v: kernel 
           bus ID: 12:00.3 chip ID: 1022:1457 
           Device-3: Logitech [G533 Wireless Headset Dongle] type: USB driver: hid-generic,snd-usb-audio,usbhid bus ID: 1-2:5 
           chip ID: 046d:0a66 
           Sound Server: ALSA v: k4.19.66-1-MANJARO 
Network:   Device-1: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet vendor: Gigabyte driver: r8168 v: 8.047.02-NAPI 
           port: f000 bus ID: 05:00.0 chip ID: 10ec:8168 
           IF: enp5s0 state: up speed: 100 Mbps duplex: full mac: <filter> 
           IF-ID-1: tun0 state: unknown speed: 10 Mbps duplex: full mac: N/A 
Drives:    Local Storage: total: 4.09 TiB used: 3.37 TiB (82.4%) 
           ID-1: /dev/sda vendor: Seagate model: ST2000DM006-2DM164 size: 1.82 TiB block size: physical: 4096 B logical: 512 B 
           speed: 6.0 Gb/s rotation: 7200 rpm serial: <filter> rev: CC26 scheme: GPT 
           ID-2: /dev/sdb vendor: Western Digital model: WD20EZRX-00DC0B0 size: 1.82 TiB block size: physical: 4096 B 
           logical: 512 B speed: 6.0 Gb/s serial: <filter> rev: 0A80 scheme: MBR 
           ID-3: /dev/sdc vendor: Western Digital model: WDS500G2B0B-00YS70 size: 465.76 GiB block size: physical: 512 B 
           logical: 512 B speed: 6.0 Gb/s serial: <filter> rev: 00WD scheme: GPT 
Partition: ID-1: / raw size: 94.46 GiB size: 92.48 GiB (97.90%) used: 28.84 GiB (31.2%) fs: ext4 dev: /dev/sdc7 
           ID-2: swap-1 size: 5.00 GiB used: 0 KiB (0.0%) fs: swap swappiness: 60 (default) cache pressure: 100 (default) 
           dev: /dev/sdc6 
Sensors:   System Temperatures: cpu: 47.8 C mobo: N/A gpu: nvidia temp: 49 C 
           Fan Speeds (RPM): N/A gpu: nvidia fan: 45% 
Info:      Processes: 347 Uptime: 5m Memory: 15.68 GiB used: 1.57 GiB (10.0%) Init: systemd v: 242 Compilers: gcc: 9.1.0 
           Shell: bash v: 5.0.7 running in: konsole inxi: 3.0.35 

Bump, any ideas? Is there no way other than taking a performance hit and fixing the tearing via the nvidia settings?

Please do not bump your own threads. It is against forum rules.

What have you tried so far? Have you done more research since first posting? Searched this forum? There are countless threads about this topic on this forum alone, dedicated to it or otherwise; whenever anyone mentions Nvidia screen tearing in a thread, someone is usually offering advice. And yes, there are other ways to get rid of screen tearing on KDE without forcing full composition pipeline. KDE is actually one of the few desktops where that's possible. Try searching for "nvidia tearing kde".

First off, try updating to kernel 5.2.8.
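On Manjaro you can install the 5.2 series alongside your current kernel with mhwd-kernel (assuming the package is still named linux52) and pick it from the GRUB menu on the next boot:

sudo mhwd-kernel -i linux52
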
If that doesn't work:

  1. Create a file called /etc/profile.d/kwin.sh and add the following lines.

#!/bin/sh

export KWIN_TRIPLE_BUFFER=1
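
Since /etc/profile.d scripts are only read at login, this won't take effect until you log out and back in. Afterwards you can sanity-check that it's set with:

env | grep KWIN_TRIPLE_BUFFER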

  2. If you have a high refresh rate monitor, you need to set the KDE compositor to use that refresh rate. For some reason the KDE compositor can't properly detect the refresh rate with the Nvidia drivers. Add the following lines to ~/.config/kwinrc in the [Compositing] section; change 144 to whatever refresh rate you've set in Display settings.

[Compositing]
MaxFPS=144
RefreshRate=144

This is what fixed the issue for me.


Believe it or not this did actually help me a bit, particularly the suggestion to search the forum for "nvidia tearing kde". I never found any of the results that search provides for some reason, even though I thought my own keywords weren't that unreasonable.

I also tried the KWIN_TRIPLE_BUFFER suggestion, which for some reason sometimes worked and sometimes didn't. Right now I'm testing a combination of solutions because I think it might give better performance: I've used nvidia-settings to force full composition pipeline and saved it to my config file, and I've also disabled vblank and put export KWIN_TRIPLE_BUFFER=1 in one of my boot scripts. The composition pipelining obviously eliminates the tearing; what I'm really trying to see is whether the other two changes remove its performance impact.

It is better to test one at a time. KWIN_TRIPLE_BUFFER=1 should make ffcp (force full composition pipeline) unnecessary. But Nvidia vblank should be enabled.
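
If you'd rather set vblank from a terminal than from the nvidia-settings GUI, the attribute for it is SyncToVBlank (put the command in an autostart script if you want it applied every session):

nvidia-settings --assign SyncToVBlank=1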

KDE compositor scale method should be "Smooth".

Also try (as a pair, but not in combination with KWIN_TRIPLE_BUFFER=1):
export __GL_YIELD=USLEEP
export __GL_MaxFramesAllowed=1
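
Those go into the environment the same way as KWIN_TRIPLE_BUFFER above, for example with a small profile.d script (the file name below is just an example):

#!/bin/sh
# /etc/profile.d/nvidia-gl.sh (example name, any profile.d script works)
export __GL_YIELD=USLEEP
export __GL_MaxFramesAllowed=1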

Try kwin-lowlatency from AUR.
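
If you haven't built anything from the AUR before, pamac (Manjaro's package manager, with AUR support enabled) can handle it; be aware the build compiles kwin, so it takes a while:

pamac build kwin-lowlatency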

AFAIK that hasn't been recommended for a few years now.
I also think KWIN_TRIPLE_BUFFER is deprecated at this point; I don't need it anymore for a tear-free experience.

If you use FFCP, do not use any of the above.
Ideally though, you shouldn't use FFCP due to its drawbacks.

I would do the exact opposite: throw out FFCP, enable sync to vblank, and try triple buffer (see above, though).

IMHO nothing will fix that.

Right now my compositor is set to OpenGL 3.1 with the Smooth scale method and triple buffer, but vblank is off and FFCP is on in its place, because otherwise the tearing still appears. It needs more testing, but in the game I'm currently playing there doesn't seem to be any performance difference between FFCP on and off with these settings.

Did you try kernel 5.2.8?

Yeah, I forgot to mention that's the one thing I haven't tried, because I don't want to stray from the latest recommended LTS kernel.

Give kwin-lowlatency a try. For some people it helps a lot.


Just do it. It's stable, and if it doesn't work you can always go back.

Is there something I need to configure for it? I installed it from the AUR (the build was extremely slow), but on its default settings the tearing only gets worse.

As a whole, I still haven't found a perfect fix for this. So far I've found that with triple buffering and vblank turned on, and without pipelining, the tearing seemingly disappears on the desktop, while in my current game I need to enable full vsync in its settings. Adaptive vsync there helps a bit, but it's not perfect.
Meanwhile, after switching to kwin-lowlatency with the same settings, the tearing only got worse in-game. Should I go back, or is there something I need to configure in it?

Update: OK, I've played around with it, and with my current settings it definitely helps a lot, probably with better results than normal kwin, but it still doesn't eliminate tearing completely. This is where I get a little confused though. Right now my game's vsync is set to adaptive; if I turn it off, the tearing gets as bad as it can get, whereas if I turn it fully on, all forms of tearing disappear... at the expense of massive performance drops in some areas. So at the moment I'm wondering whether the last tiny bit of tearing in my game is actually coming from the game itself.

If you use kwin-lowlatency, disable every tweak you had previously enabled. No FFCP, no triple buffer, no nothing, except sync to vblank.

Then try experimenting with the compositor settings, especially the last one (Vsync mechanism).
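
For the in-game side, you can also force the driver's own vsync per game instead of relying on the game's setting. For a Steam title that would be a launch option along these lines, using the driver's __GL_SYNC_TO_VBLANK override:

__GL_SYNC_TO_VBLANK=1 %command%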
