NVIDIA 435 driver looks exciting

I tried using the intel driver instead of the modesetting driver; the desktop loads fine, but entirely on the intel driver, and I couldn't get offloading to work (nvidia gpu on, but useless).

Have you been able to do this without losing the offload capability? I'm not sure it supports that, though.

I am using this config now and offload works like it used to, with those __bla-bla-bla environment variables.
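
For reference, here's a minimal sketch of the kind of config being discussed, based on NVIDIA's documented render offload layout but with the intel driver instead of modesetting; the commented BusID values are from this machine and will differ elsewhere:

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "iGPU"
    Option "AllowNVIDIAGPUScreens"
EndSection

Section "Device"
    Identifier "iGPU"
    Driver "intel"
#   BusID "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "iGPU"
    Device "iGPU"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
#   BusID "PCI:1:0:0"
EndSection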


Never said otherwise.
I actually forgot that I had kwin-lowlatency installed (it was for testing purposes only) until I double-checked everything again.

kwin-lowlatency was created because the default kwin presented issues with tearing and stuttering for some users, which kwin-lowlatency seemingly fixed.

Anyway, we're back to normal.

EDIT: it's much better now, but I still think that 430 + kwin-lowlatency was quicker. Now with 435 + default kwin things like scrolling are not quite as good as they were before (but acceptable).
Let's see what the dev will say.

Switching to intel's i915 driver with the TearFree option seems to have fixed the tearing problem on my end. Many thanks!
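
For reference, TearFree is an option of the xf86-video-intel X driver, and it usually lives in a small snippet along these lines (a sketch; the file name is just a convention):

$ cat /etc/X11/xorg.conf.d/20-intel.conf
Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    Option "TearFree" "true"
EndSection
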
Why is it that the 2 BusID lines are commented out? Do we only need the identifiers? Also, what about glamor?
I've only recently looked at using Linux on my laptop, since I just got vaapi working on chromium (finally), and just in time for this and for Proton reaching a somewhat usable state too.

Can you post these for reference? If this works I'll have to give it another try :+1:

inxi -Gxxz
nvidia-smi

Why is it that the 2 BusID lines are commented out? Do we only need the identifiers?

Those were initially introduced to fix issues observed by the author of the original config. As it turned out I had no need for them, so I just commented them out but left them in, so that anyone who sees the config knows it's possible to set them explicitly if necessary.
So, some people may need them; some (like you and me) don't.
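
If someone does need them, the values can be read from lspci, which on this machine prints something along these lines (note lspci uses hex, while the X config wants decimal PCI:bus:device:function):

$ lspci | grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 620
01:00.0 3D controller: NVIDIA Corporation GP108M [GeForce MX150]

So 00:02.0 becomes BusID "PCI:0:2:0" and 01:00.0 becomes BusID "PCI:1:0:0".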

Also, what about glamor?

That's the thing I know nothing about.

@dglt,

┬─[openm@reiwa:~/Devel/bios/ACPI]─[23:18:20]
╰─>$ inxi -Gxxz
Graphics:  Device-1: Intel UHD Graphics 620 vendor: Xiaomi driver: i915 v: kernel 
          bus ID: 00:02.0 chip ID: 8086:5917 
          Device-2: NVIDIA GP108M [GeForce MX150] vendor: Xiaomi Mi Notebook Pro driver: nvidia 
          v: 435.17 bus ID: 01:00.0 chip ID: 10de:1d12 
          Display: x11 server: X.Org 1.20.5 driver: intel,nvidia compositor: kwin_x11 
          resolution: 1920x1080~60Hz, 1920x1080~60Hz 
          OpenGL: renderer: Mesa DRI Intel UHD Graphics 620 (Kabylake GT2) v: 4.5 Mesa 19.1.4 
          compat-v: 3.0 direct render: Yes 
┬─[openm@reiwa:~/Devel/bios/ACPI]─[23:18:21]
╰─>$ nvidia-smi
Sun Aug 18 23:18:26 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 435.17       Driver Version: 435.17       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce MX150       Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   43C    P8    N/A /  N/A |     14MiB /  2002MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                              
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1311      G   /usr/lib/Xorg                                 14MiB |
+-----------------------------------------------------------------------------+
┬─[openm@reiwa:~/Devel/bios/ACPI]─[23:22:19]
╰─>$ env __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia inxi -Gxxz
Graphics:  Device-1: Intel UHD Graphics 620 vendor: Xiaomi driver: i915 v: kernel 
          bus ID: 00:02.0 chip ID: 8086:5917 
          Device-2: NVIDIA GP108M [GeForce MX150] vendor: Xiaomi Mi Notebook Pro driver: nvidia 
          v: 435.17 bus ID: 01:00.0 chip ID: 10de:1d12 
          Display: x11 server: X.Org 1.20.5 driver: intel,nvidia compositor: kwin_x11 
          resolution: 1920x1080~60Hz, 1920x1080~60Hz 
          OpenGL: renderer: GeForce MX150/PCIe/SSE2 v: 4.6.0 NVIDIA 435.17 direct render: Yes

well done :star_struck:

Your config works perfectly with the intel driver, no more modesetting nightmare. I obviously wrote my config wrong when I tried it. No more tearing. :+1:


@dglt
Hey man, you are welcome :hugs:
I dropped it yesterday but you might have missed that (this thread is a mess).

By the way, "glamoregl" loads if using modesetting as Driver, and not if using intel:
$ cat /var/log/Xorg.0.log | grep -E "glamoregl"
[ 22.677] (II) Loading sub module "glamoregl"
[ 22.677] (II) LoadModule: "glamoregl"
[ 22.677] (II) Loading /usr/lib/xorg/modules/libglamoregl.so
[ 22.686] (II) Module glamoregl: vendor="X.Org Foundation"

But this fact has no impact on the actual performance of offloading. Weird.
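
(Probably because glamor is the generic GL-based 2D acceleration layer the modesetting driver relies on, while the intel DDX ships its own SNA/UXA acceleration; the offload path itself doesn't depend on it.)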

That's not a friendly command name; I think we need an alias:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia


Followed your procedure and everything seems to work fine here. KDE, intel/nvidia (obviously) and linux 5.2.
Fans are a little noisy, but that's to be expected with the nvidia card always on; hoping that in the future they will add the possibility to fully power off the card on non-Turing hardware too.

Thanks for sharing. I'll continue testing and report if something changes.

Most definitely, I already added vkr for vulkan and nvr for opengl to my .zshrc:

alias nvr="__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia"
alias vkr="__NV_PRIME_RENDER_OFFLOAD=1"
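
(GLX applications need both variables, while Vulkan only reads __NV_PRIME_RENDER_OFFLOAD, which is why vkr can be shorter.)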

vkr vkcube and nvr glxgears are both working, running on render offload with the intel driver. No more tearing now that @openminded figured out using the intel driver instead of modesetting, for which I previously forgot to say thank you, btw.


What's with the input lag?
I'm using nvidia-xrun right now.
For most games I use drm_modeset=1, but when it comes to shooters where every microsecond counts I have to disable it, because it brings a noticeable input delay that makes them basically unplayable.
So modeset off means no input delay but tearing, and modeset on means smooth frames without being able to play competitively.
On Windows I could offload and have zero delay and butter-smooth frames.
If this new method doesn't bring that to the table but introduces new problems, I'll stick to nvidia-xrun.
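
For anyone following along: the switch being discussed is the nvidia-drm kernel module's modeset parameter, normally toggled with a modprobe option along these lines (a sketch; the file name is arbitrary):

$ cat /etc/modprobe.d/nvidia-drm.conf
options nvidia-drm modeset=1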


So far, any time I've tried to use drm modeset I wasn't able to get xorg to start. I've only tried loading one game on the install I have this setup on, Borderlands 2, and its performance on render offload was not so great; but to be fair I didn't put much effort into fixing it either, it was just a quick functionality test.

@openminded, one thing I did notice when offloading with the intel driver instead of modesetting: glxgears runs fine, but glxgears_pixmap shows its usual thousands of FPS while the window is black, no image at all. Can you confirm this when you get a chance?

Yeah, same here. Is it critical? I've only tried 2 games: a native Vulkan one (Dota 2) and a Proton-powered OpenGL one (the old Warhammer 40K: Dark Crusade). Both were working fine.

I'm not exactly sure what the difference is between glxgears and glxgears_pixmap, but the only game I've tested (briefly) was native Borderlands 2 and it was struggling on medium/low settings; on my prime setup it runs great on high settings. I'll have to try different intel driver options. "TearFree" comes at a cost on the intel driver, and prime uses modesetting but is able to use prime sync, which fixes the tearing issues I have with the modesetting driver. I was not able to get prime sync to work with render offload; enabling it always ended in xorg failing to start.

If anyone is using these new drivers, could you try running a vulkan game with vsync enabled and see if it's working properly?

I'm using a PRIME setup, not this new "render offload" feature, but I think it has something to do with the combination of the new 435.* drivers + vulkan + vsync, which results in an unusable setup. Further explanation

I've got 435 + vulkan + vsync. I'm on gnome with the modesetting driver and nvidia offload. Everything is fine here ^^

The problem happens with the combination of prime + vulkan + vsync on the 435 drivers. As for render offload: while it's good that nvidia is finally doing something for hybrid graphics on linux, it's not very practical, because even when you're not using the nvidia gpu it has to remain powered on, since the xorg process needs to run on it for offload to work.

For everyday use you end up paying the power cost of 2 GPUs while only getting the benefit of 1, unless you run the program with the launch variables.
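
For what it's worth, a quick way to check which gpu a given launch actually renders on (assuming glxinfo is installed) is to compare:

$ glxinfo | grep "OpenGL renderer"
$ __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"

The first should report the intel gpu, the second the nvidia card.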

I don't have concrete benchmarks, but comparing performance between prime and offload also didn't fare well for render offload.

Red Faction: Re-Mars-tered on render offload with no vsync struggles to get 50+ fps, while on prime with vsync disabled it easily hits 70+ fps (identical settings, stock clock/mem).

Offload also will not work with coolbits, so no fan control or clocking capabilities. Aside from the novelty of being the latest shiny new thing available, it's really nothing to get excited over and was more disappointing than anything else in its current state, so I'll be sticking to a prime setup, and possibly add a third mode to optimus-switch for render offload so I can easily switch back and forth to check on its progress, if any.


I would also like to test how this works but my hardware is old (nvidia390xx).

I thought it was supposed to replace the "bumblebee" method, without the auto-select feature, like using optirun/primusrun. Similar... I think I've read some nvidia notes about Power Saving configuration for the new driver feature. Maybe the power consumption is better controlled? Wishful thinking, I know... :disappointed_relieved:


Turing cards are supposed to have better power management so they use less power, but the card can never be powered off (using offload) because xorg needs to run on it.

Don't worry, you're not missing much. I'm hoping for a true on-demand optimus setup like on windows, but this is definitely not it.

If the card remaining on were the only downside I wouldn't care, but losing performance and losing coolbits, on top of having the nvidia gpu powered on doing nothing but wasting power until you launch with a parameter, is silly. With prime both cards remain on, performance is better, and no launch variables are needed.

On the upside, with this new render offload it seems the black screen issues that come with fumblebee are a non-issue, so making it the default setup for 435.xx-compatible installs makes perfect sense. Even after removing every video .conf file from /etc/X11/xorg.conf.d I was still met with a desktop, and that's much better than I can say for crumblebee :wink:

