Handling performance and bug issues with the NVIDIA PRIME Optimus offload feature

Hi everyone! I have a little question about configuring my laptop graphics.
So after following this great tutorial on the Manjaro Wiki, Configure_Graphics_Card,

I ended up with the right drivers installed for the NVIDIA PRIME switch, and everything looks good with this hardware:
inxi -G

Graphics:
Device-1: Intel TigerLake-H GT1 [UHD Graphics] driver: i915 v: kernel
Device-2: NVIDIA GA107M [GeForce RTX 3050 Mobile] driver: nvidia
v: 510.54
Display: server: X.Org v: 1.21.1.3 driver: X: loaded: modesetting,nvidia
gpu: i915 resolution: 1920x1080~60Hz
OpenGL: renderer: Mesa Intel UHD Graphics (TGL GT1) v: 4.6 Mesa 21.3.7
and mhwd -li -d
NAME: video-modesetting
ATTACHED: PCI
VERSION: 2020.01.13
PRIORITY: 1
FREEDRIVER: true
CLASSIDS: 0300

NAME: video-hybrid-intel-nvidia-prime
ATTACHED: PCI
VERSION: 2021.12.18
INFO: Hybrid prime solution for NVIDIA Optimus Technology - Closed source NVIDIA driver & open source intel driver.
PRIORITY: 8
FREEDRIVER: false
DEPENDS: video-modesetting
CONFLICTS: video-nvidia

I noticed strange behavior when testing rendering with glxspheres64.
I tried two different commands for PRIME Optimus offloading.

The first is DRI_PRIME= and the other is prime-run, but they give quite different rendering performance, and I would like to know why, if possible, of course, lol.

Below you can see the output for the various command combinations I tested:

DRI_PRIME=0 glxspheres64  :heavy_check_mark:

Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
GLX FB config ID of window: 0x375 (8/8/8/0)
Visual ID of window: 0x734
Context is Direct
OpenGL Renderer: Mesa Intel(R) UHD Graphics (TGL GT1)
61.696234 frames/sec - 68.852997 Mpixels/sec

DRI_PRIME=1 glxspheres64  :heavy_check_mark:  14s 

Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
libGL error: failed to create dri screen
libGL error: failed to load driver: nouveau
GLX FB config ID of window: 0x375 (8/8/8/0)
Visual ID of window: 0x734
Context is Direct
OpenGL Renderer: Mesa Intel(R) UHD Graphics (TGL GT1)
1113.364755 frames/sec - 1242.515066 Mpixels/sec

prime-run glxspheres64  :heavy_check_mark:  12s 

Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
GLX FB config ID of window: 0x138 (8/8/8/0)
Visual ID of window: 0x6e0
Context is Direct
OpenGL Renderer: NVIDIA GeForce RTX 3050 Laptop GPU/PCIe/SSE2
61.091732 frames/sec - 68.178373 Mpixels/sec

Well, after these three commands you can see that the DRI_PRIME=1 output shows errors plus an unbelievable FPS figure; this is not fully clear to me.
(So with PRIME offloading I assume the CUDA units run while the Intel UHD does the rendering, remember the 3dfx VGA pass-through era? lol)

Well, I thought this was so great, but in fact it is buggy and unusable with games on Steam, for example.
When I pass the DRI_PRIME=1 variable to the Steam game “Black Mesa” (or launch it without any variable), the game crashes randomly at every cutscene/loading transition. If instead I launch the game with prime-run, it runs fine, but then I assume there is no offloading from the Intel UHD, only direct NVIDIA rendering as glxspheres64 reports, and performance is still “normal”, so no huge FPS numbers.
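
(For reference, this is how I pass them in the Steam launch options for the game, just to show the syntax I am using; %command% is the placeholder Steam replaces with the game’s own command line.)

DRI_PRIME=1 %command%
prime-run %command%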

Now, if anyone knows why this happens, I would really appreciate a deeper explanation of how PRIME Optimus works, and whether it is possible to handle DRI_PRIME=1 without errors. For example, maybe it is better to go to the LTS kernel? :eyes:
(Edit: or maybe it is unusable because it only works with the free nouveau driver? According to the Manjaro wiki Configure_Graphics_Card page, this is my best guess.)

Thanks in advance and best regards.

DRI_PRIME only applies if you have the free drivers (nouveau) or an AMD card; you need to use prime-run to use the NVIDIA card with the proprietary driver.
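
If it helps, on Manjaro prime-run is basically a tiny wrapper script (from memory, so check /usr/bin/prime-run on your system for the exact contents) that sets the NVIDIA render-offload variables and then runs whatever command you give it:

#!/bin/bash
__NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only __GLX_VENDOR_LIBRARY_NAME=nvidia "$@"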

I think it is better to measure with a game rather than glxspheres64. I don’t know exactly how it measures, but maybe you get this result because of VSync. I can be wrong though.
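
If you want to quickly rule VSync out, you could try forcing it off for the test (if I remember the variable names right, vblank_mode is the Mesa/Intel knob and __GL_SYNC_TO_VBLANK is the NVIDIA driver one):

$ vblank_mode=0 glxspheres64
$ __GL_SYNC_TO_VBLANK=0 prime-run glxspheres64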

Also, if you type nvidia-smi in the terminal, you can see which processes are using the NVIDIA card; there should always be an Xorg process in there (if you are in hybrid mode):

$ nvidia-smi
Wed Mar 23 18:38:35 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.54       Driver Version: 510.54       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   39C    P0    N/A /  N/A |      0MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2965      G   /usr/lib/Xorg                       4MiB |
+-----------------------------------------------------------------------------+

If you run, for example, prime-run glxspheres64, you should see the process show up in nvidia-smi.
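
Something like this in two terminals should show glxspheres64 appear in the process list while it is running:

$ prime-run glxspheres64    # terminal 1: render on the NVIDIA GPU
$ watch -n 1 nvidia-smi     # terminal 2: refresh the process list every second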

Thank you for the kind reply, you helped me better understand today how my new laptop and its amazing technology work :smiley:
