Single external HDMI display setup unusable (extremely slow)

I have a laptop with two GPUs (Intel and Nvidia). I recently installed Manjaro (GNOME) and the hybrid Nvidia video drivers. I want to use my external monitor via HDMI as the primary display and switch to the laptop’s internal one when the external monitor is disconnected. I also want some kind of ‘hybrid mode’: the Nvidia card used only for games and such.
The standard GNOME Displays settings panel correctly detects both monitors and offers three modes (join, mirror, single), all of which work. The problem is that only the modes which use the internal laptop monitor work normally: join, mirror, and single (laptop monitor). In join and mirror modes the external monitor also works normally. But when I use the single external monitor mode, it is extremely slow: every graphical change takes several seconds, whether selecting interface elements, switching active windows, changing focus, and so on. It’s completely unusable. Only the cursor moves normally.
Another thing: in mirror mode everything works well, including on the external monitor, but if I close the laptop lid the internal monitor switches off and I get the same problem on the external monitor.
Also, when I ran ‘nvidia-xconfig’, it created an xorg config which looks very generic:

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 450.66

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection                              

After restarting, my internal monitor didn’t work at all in ‘graphical mode’: text terminals only, no GNOME. But when I plugged in the external monitor, it worked well, with no graphics slowdown. Since this is not what I want, I deleted that ‘xorg.conf’ and everything went back to normal after restarting.
Could you help me make this work?
System information:

System:    Kernel: 5.8.6-1-MANJARO x86_64 bits: 64 compiler: N/A 
           parameters: BOOT_IMAGE=/boot/vmlinuz-5.8-x86_64 
           root=UUID=c3e3a799-be9f-4af7-a545-b6a9233bf340 rw quiet apparmor=1 security=apparmor 
           resume=UUID=9cf56e05-a59a-4620-8329-e41661d8b0dd udev.log_priority=3 
           Desktop: GNOME 3.36.6 tk: GTK 3.24.23 wm: gnome-shell dm: GDM 3.36.3 
           Distro: Manjaro Linux 
Machine:   Type: Laptop System: Dell product: Inspiron 15 7000 Gaming v: N/A serial: <filter> 
           Chassis: type: 10 serial: <filter> 
           Mobo: Dell model: 065C71 v: A00 serial: <filter> UEFI: Dell v: 1.11.0 date: 12/04/2019 
Battery:   ID-1: BAT0 charge: 65.2 Wh condition: 65.2/74.0 Wh (88%) volts: 12.6/11.4 
           model: SMP DELL 71JF452 type: Li-poly serial: <filter> status: Full 
CPU:       Topology: Quad Core model: Intel Core i7-7700HQ bits: 64 type: MT MCP arch: Kaby Lake 
           family: 6 model-id: 9E (158) stepping: 9 microcode: D6 L2 cache: 6144 KiB 
           flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 44817 
           Speed: 1000 MHz min/max: 800/3800 MHz Core speeds (MHz): 1: 1025 2: 1022 3: 1004 4: 1009 
           5: 1009 6: 1002 7: 1055 8: 1003 
           Vulnerabilities: Type: itlb_multihit status: KVM: VMX disabled 
           Type: l1tf mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable 
           Type: mds mitigation: Clear CPU buffers; SMT vulnerable 
           Type: meltdown mitigation: PTI 
           Type: spec_store_bypass 
           mitigation: Speculative Store Bypass disabled via prctl and seccomp 
           Type: spectre_v1 mitigation: usercopy/swapgs barriers and __user pointer sanitization 
           Type: spectre_v2 mitigation: Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: 
           conditional, RSB filling 
           Type: srbds mitigation: Microcode 
           Type: tsx_async_abort status: Not affected 
Graphics:  Device-1: Intel HD Graphics 630 vendor: Dell driver: i915 v: kernel bus ID: 00:02.0 
           chip ID: 8086:591b 
           Device-2: NVIDIA GP107M [GeForce GTX 1050 Ti Mobile] vendor: Dell driver: nvidia 
           v: 450.66 alternate: nouveau,nvidia_drm bus ID: 01:00.0 chip ID: 10de:1c8c 
           Device-3: Sunplus Innovation Integrated Webcam type: USB driver: uvcvideo bus ID: 1-12:7 
           chip ID: 1bcf:2c01 
           Display: x11 server: X.Org 1.20.8 compositor: gnome-shell driver: modesetting,nvidia 
           unloaded: intel,nouveau alternate: fbdev,nv,vesa display ID: :1 screens: 1 
           Screen-1: 0 s-res: 1920x1080 s-dpi: 96 s-size: 508x285mm (20.0x11.2") 
           s-diag: 582mm (22.9") 
           Monitor-1: eDP-1 res: 1920x1080 hz: 60 dpi: 142 size: 344x193mm (13.5x7.6") 
           diag: 394mm (15.5") 
           Monitor-2: HDMI-1-0 res: 1920x1080 hz: 60 dpi: 102 size: 476x268mm (18.7x10.6") 
           diag: 546mm (21.5") 
           OpenGL: renderer: Mesa Intel HD Graphics 630 (KBL GT2) v: 4.6 Mesa 20.1.7 
           direct render: Yes 
Audio:     Device-1: Intel CM238 HD Audio vendor: Dell driver: snd_hda_intel v: kernel 
           bus ID: 00:1f.3 chip ID: 8086:a171 
           Sound Server: ALSA v: k5.8.6-1-MANJARO 
Network:   Device-1: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet vendor: Dell 
           driver: r8168 v: 8.048.03-NAPI modules: r8169 port: d000 bus ID: 02:00.0 
           chip ID: 10ec:8168 
           IF: enp2s0 state: up speed: 1000 Mbps duplex: full mac: <filter> 
           Device-2: Intel Wireless 3165 driver: iwlwifi v: kernel port: d000 bus ID: 03:00.0 
           chip ID: 8086:3165 
           IF: wlp3s0 state: down mac: <filter> 
Drives:    Local Storage: total: 1.82 TiB used: 582.64 GiB (31.3%) 
           SMART Message: Unable to run smartctl. Root privileges required. 
           ID-1: /dev/nvme0n1 vendor: Western Digital model: WDS100T2B0C-00PXH0 size: 931.51 GiB 
           block size: physical: 512 B logical: 512 B speed: 31.6 Gb/s lanes: 4 serial: <filter> 
           rev: 211070WD scheme: GPT 
           ID-2: /dev/sda vendor: Toshiba model: MQ02ABD100H size: 931.51 GiB block size: 
           physical: 4096 B logical: 512 B speed: 6.0 Gb/s rotation: 5400 rpm serial: <filter> 
           rev: 1D scheme: GPT 
Partition: ID-1: / raw size: 285.66 GiB size: 280.18 GiB (98.08%) used: 14.37 GiB (5.1%) fs: ext4 
           dev: /dev/nvme0n1p5 
           ID-2: /home raw size: 97.66 GiB size: 95.62 GiB (97.92%) used: 16.49 GiB (17.2%) fs: ext4 
           dev: /dev/sda5 
Swap:      Kernel: swappiness: 60 (default) cache pressure: 100 (default) 
           ID-1: swap-1 type: partition size: 19.53 GiB used: 3.21 GiB (16.4%) priority: -2 
           dev: /dev/nvme0n1p4 
Sensors:   System Temperatures: cpu: 58.0 C mobo: 44.0 C sodimm: 48.0 C 
           Fan Speeds (RPM): cpu: 0 
Info:      Processes: 330 Uptime: 4d 13h 39m Memory: 15.51 GiB used: 7.24 GiB (46.7%) Init: systemd 
           v: 246 Compilers: gcc: 10.2.0 Packages: pacman: 1417 lib: 456 flatpak: 0 snap: 0 
           Shell: Zsh v: 5.8 running in: gnome-terminal inxi: 3.1.05 
> Installed PCI configs:
--------------------------------------------------------------------------------
                  NAME               VERSION          FREEDRIVER           TYPE
--------------------------------------------------------------------------------
     video-modesetting            2020.01.13                true            PCI
         network-r8168            2016.04.20                true            PCI
video-hybrid-intel-nvidia-450xx-prime            2019.10.25               false            PCI
           video-linux            2018.05.04                true            PCI


Warning: No installed USB configs!
    cat /etc/X11/mhwd.d/nvidia.conf                                                             [130]
    ##
    ## Generated by mhwd - Manjaro Hardware Detection
    ##

You’ll experience enormous compositing lag if you want to use the intel GPU to render the GUI and use only external monitor(s). NVIDIA is aware of this; it is caused by Xorg, please read here.

For now, the only feasible way to use only external monitors is to render everything on the nvidia GPU. I suggest you try optimus-manager to switch between GPUs.
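Roughly, the workflow looks like this; treat it as a sketch, since the exact package names and modes may differ on your install:

# install optimus-manager plus the Qt tray applet (both should be in the Manjaro repos)
sudo pacman -S optimus-manager optimus-manager-qt

# check which mode is currently active
optimus-manager --print-mode

# switch to nvidia-only mode (you’ll be asked to log out and back in)
optimus-manager --switch nvidia

# switch back to hybrid when you don’t need the external monitor
optimus-manager --switch hybrid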


I wanted to write “but I don’t want ‘reverse PRIME’, I just want this ‘offloading’”, but then I read more.
Do I understand correctly that the issue is that the HDMI port is wired to the Nvidia GPU? So instead of ‘render everything on intel, add a little something from nvidia, then send it to the monitor’, which is PRIME offloading, we have ‘render a little or nothing on nvidia, get the rest from intel and send that to the monitor’, which is reverse PRIME?

I don’t think that’s good terminology. There are three separate things: output sink, output source, and PRIME render offload.

As far as I see, you want to imitate the Windows behaviour, which is: the intel gpu is the output source, the nvidia gpu is the output sink, and there is PRIME render offload available (PRIME is Linux-specific, and that’s not what it’s called on Windows, but anyway). In this configuration, the intel gpu renders (almost) everything, the nvidia gpu just provides “access” to the external monitors for the intel gpu, but parts of the screen may be rendered by the nvidia gpu via PRIME render offload (games, etc.). This is usually referred to as “reverse PRIME”. Unfortunately, this mode is unusable when the internal screen is turned off and only external display(s) are used. As of yet, on Linux.

What is usually referred to as “PRIME” is when the nvidia gpu is the output source, the intel gpu is the output sink - so everything is rendered on the nvidia gpu, and the intel gpu just provides access to the integrated display. PRIME render offload is mainly pointless in this configuration, since why would you offload rendering anything to a weaker card? Nonetheless, it should be technically possible (I guess it could be useful if you have two nvidia gpus or something similar). This works pretty well on Linux.
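If you want to see how this is wired up on your system, xrandr exposes it. Just as an illustration, and with the caveat that the provider names below (modesetting, NVIDIA-0) are only typical values, take the real ones from the --listproviders output:

# list the providers X knows about and their source/sink capabilities
xrandr --listproviders

# generic form: tell <provider> to display what <source> renders
#   xrandr --setprovideroutputsource <provider> <source>

# e.g. classic PRIME: the intel/modesetting provider shows what the nvidia GPU renders
xrandr --setprovideroutputsource modesetting NVIDIA-0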

There is also PRIME render offload, of course, which is mostly unrelated to the output sink/source configuration.
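For completeness, render offload on the proprietary driver is just a couple of environment variables, and the hybrid setup on Manjaro provides a prime-run wrapper that sets them for you. A quick sketch (glxinfo comes from the mesa demos package):

# run one application on the nvidia GPU while the desktop stays on intel
prime-run glxinfo | grep "OpenGL renderer"

# prime-run is essentially equivalent to setting these variables yourself
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"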

Yes, thank you for the explanation. And to be honest, I just want this to work, however possible: to use the external monitor (the laptop monitor is rather small) and render games on the nvidia GPU and the rest on intel. I had absolutely no idea about all these internal details, because on Windows it just works.
Also, I read a little about optimus-manager, and I’m not sure it will help me get the configuration I want: not only would I need to manually switch GPUs per session, I also don’t see how it would solve the problem with the external monitor.

If you want, you can set up mirrored displays and lower the brightness of the internal display. If you are using KDE you can even turn the display off entirely via the brightness control; at least that’s the case for me, I don’t know if it works that way on every system. That way you sort of mimic an external-only setup. That’s my workaround until the nvidia issue is fixed.

Or, if you really want to use the external display only, then yes, you need optimus-manager to change from hybrid (the mode you have now) to nvidia-only (everything renders on nvidia); that way you can use only the external display.


Yes, that is exactly what I did. Only in my case zero brightness doesn’t turn the backlight off. I guess it depends on the monitor.

Well, I don’t think I want to do that. I’ll just stay on Windows until they fix everything. Thanks, though.

Or rather, it’s a conscious design choice not to let the user turn the brightness all the way down. You could try

echo 0 | sudo tee /sys/class/backlight/???/brightness

but first run ls -l /sys/class/backlight/ to see what you need to replace ??? with.
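On many Intel laptops the device is called intel_backlight, for example, but the name below is only an illustration; use whatever the listing shows on your machine:

# see which backlight device(s) exist
ls -l /sys/class/backlight/

# turn the panel backlight fully off (adjust the device name to what you found)
echo 0 | sudo tee /sys/class/backlight/intel_backlight/brightness

# restore it later, e.g. to the maximum the device reports
cat /sys/class/backlight/intel_backlight/max_brightness | sudo tee /sys/class/backlight/intel_backlight/brightness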


Just learned the tee command from the Missing Semester course, and already found a use case :)

Thanks @pobrn, now I can just mirror screens using intel without lag.

Hi @pobrn, I’m kind of stalking your posts through the Manjaro forums, and I wanted to ask whether you think my dream scenario is possible before I spend another few days trying to make it work.

TLDR: is it possible to use the Intel onboard GPU for everyday desktop rendering but still use only the outputs of the dedicated NVIDIA GPU?

You wrote here:

As far as I see, you want to imitate the Windows behaviour, which is: the intel gpu is the output source, the nvidia gpu is the output sink, and there is PRIME render offload available (PRIME is Linux-specific, and that’s not what it’s called on Windows, but anyway). In this configuration, the intel gpu renders (almost) everything, the nvidia gpu just provides “access” to the external monitors for the intel gpu, but parts of the screen may be rendered by the nvidia gpu via PRIME render offload (games, etc.). This is usually referred to as “reverse PRIME”. Unfortunately, this mode is unusable when the internal screen is turned off and only external display(s) are used. As of yet, on Linux.

I have a workstation, not a laptop, and I want my onboard Intel GPU to be used for the everyday desktop and my dedicated NVIDIA GPU for more GPU-intensive tasks (e.g. used exclusively in a VM). So a classic prime-run scenario, afaik. The twist is that my monitor needs to be connected via the dedicated GPU’s DisplayPort output (because the onboard GPU only provides HDMI 1.2 and my 4K monitor doesn’t work with that).

this is my system config:

Graphics:
  Device-1: Intel UHD Graphics 630 vendor: ASRock driver: i915 v: kernel
  bus ID: 00:02.0 chip ID: 8086:3e92
  Device-2: NVIDIA TU104 [GeForce RTX 2080 Rev. A] vendor: ASUSTeK
  driver: nvidia v: 455.45.01 alternate: nouveau,nvidia_drm bus ID: 01:00.0
  chip ID: 10de:1e87
  Device-3: Blackmagic Design DeckLink Mini Monitor 4K driver: N/A
  bus ID: 03:00.0 chip ID: bdbd:a144
  Display: x11 server: X.Org 1.20.10 compositor: kwin_x11 driver: nvidia
  display ID: :0 screens: 1
  Screen-1: 0 s-res: 2560x1440 s-dpi: 192 s-size: 339x191mm (13.3x7.5")
  s-diag: 389mm (15.3")
  Monitor-1: DP-2 res: 2560x1440 hz: 60 dpi: 93 size: 698x393mm (27.5x15.5")
  diag: 801mm (31.5")
  OpenGL: renderer: N/A v: N/A direct render: N/A

That should be possible. Have you installed the video-hybrid-intel-nvidia-455xx-prime mhwd configuration?
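If not, something along these lines should do it; the exact config names come from mhwd’s own listing, so check that first:

# list available and installed graphics configs
mhwd -l
mhwd -li

# remove any nvidia-only config first (the name here is just an example),
# then install the hybrid one and reboot
sudo mhwd -r pci video-nvidia-455xx
sudo mhwd -i pci video-hybrid-intel-nvidia-455xx-prime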

I just had. I played around with it the whole day today, but with video-hybrid-intel-nvidia-455xx-prime loaded I could only get the onboard GPU’s HDMI output working. optimus-manager-qt didn’t switch from intel to nvidia, even after logout/login. I panicked (I have important work to do tomorrow) and went back to video-nvidia-455xx (which is what I spent most of my time on, tbh). X is just so darn confusing…

I think I’m gonna sell my nvidia and get an amd next

I then remembered that this is also what happened last time: I get stuck at this point and can’t get the nvidia DisplayPort outputs to work.

Do you have any tips or recommendations for what I should look into? I’m aware my information is quite vague, so please don’t hesitate to ask for more if that helps. Thanks for your time.