NVIDIA 435 driver looks exciting

getting close... got the patched xorg and nvidia drivers installed, now to set up the xorg configuration. all good so far

[dglt@dglt-bsp ~]$ inxi -SMGxxz
System:    Host: dglt-bsp Kernel: 5.2.8-1-MANJARO x86_64 bits: 64 compiler: gcc v: 9.1.0 Desktop: i3 4.17 dm: LightDM 
           Distro: Manjaro Linux 
Machine:   Type: Laptop System: Dell product: Inspiron 7559 v: 1.3.0 serial: <filter> Chassis: type: 10 serial: <filter> 
           Mobo: Dell model: 0H87XC v: A00 serial: <filter> UEFI: Dell v: 1.3.0 date: 12/01/2018 
Graphics:  Device-1: Intel HD Graphics 530 vendor: Dell driver: i915 v: kernel bus ID: 00:02.0 chip ID: 8086:191b 
           Device-2: NVIDIA GM107M [GeForce GTX 960M] vendor: Dell driver: nvidia v: 435.17 bus ID: 02:00.0 chip ID: 10de:139b 
           Display: x11 server: X.org 1.20.5 driver: modesetting,nvidia resolution: <xdpyinfo missing> 
           OpenGL: renderer: GeForce GTX 960M/PCIe/SSE2 v: 4.6.0 NVIDIA 435.17 direct render: Yes 
[dglt@dglt-bsp ~]$ pacman -Qs xorg-server
local/xorg-server 1.20.5-2.2 (xorg)
    Xorg X server
local/xorg-server-common 1.20.5-2.2 (xorg)
    Xorg server common files
local/xorg-server-devel 1.20.5-2.2 (xorg)
    Development files for the X.Org X server
local/xorg-server-xdmx 1.20.5-2.2 (xorg)
    Distributed Multihead X Server and utilities
local/xorg-server-xephyr 1.20.5-2.2 (xorg)
    A nested X server that runs as an X application
local/xorg-server-xnest 1.20.5-2.2 (xorg)
    A nested X server that runs as an X application
local/xorg-server-xvfb 1.20.5-2.2 (xorg)
    Virtual framebuffer X server
local/xorg-server-xwayland 1.20.5-2.2 (xorg)
    run X clients under wayland

3 Likes

I guess this Nvidia driver and Xorg patch do not support reverse PRIME for HDMI output, right?

I got the precise instructions from Nvidia and will post them once we have established all the needed parts.

10 Likes

I have been waiting for this for so many years; finally, laptop support will feel complete on Linux. I hope support for this arrives soon. I will switch back to unstable to help test it once it's available.
Hopefully the situation with the patched xorg will be figured out soon.

4 Likes

i created a small partition just to test this. so far, starting xorg with no configuration automatically defaults to prime without further intervention :+1:. xorg also works fine with the modesetting configuration they provided in the readme

modesetting per nvidia's readme
 Section "ServerLayout"
      Identifier "layout"
      Screen 0 "iGPU"
      Option "AllowNVIDIAGPUScreens"
    EndSection

    Section "Device"
      Identifier "iGPU"
      Driver "modesetting"
    EndSection

    Section "Screen"
      Identifier "iGPU"
      Device "iGPU"
    EndSection

im still trying to get xrandr to list the nvidia gpu as a provider

[dglt@dglt-bsp ~]$ xrandr --listproviders
Providers: number : 1
Provider 0: id: 0x44 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 2 associated providers: 0 name:modesetting
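
in the meantime, a quick way to check whether offload itself is wired up, using the env vars from nvidia's render offload readme (just a sketch; glxinfo is from mesa-demos here):

    # default: should report the intel/modesetting renderer
    glxinfo | grep "OpenGL renderer"

    # offloaded: should report the 960M once the nvidia gpu screen exists
    __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia \
        glxinfo | grep "OpenGL renderer"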

if you need anything tested on real hardware, let me know. this is a throwaway installation, so breakage is of little concern

3 Likes

I documented what I did to make it work here: Nvidia render offloading
Look at the logs of Xorg, it should give you an idea of what went wrong.

1 Like

This is killer.

Finally, no need for bumblebee or hacky workarounds (no matter how nicely coded the tools may be, switching configuration files and restarting X is hacky).

2 Likes

That will still be needed for 340 and 390 cards.

Also, rumor is the cards are never completely powered down (in generations before Turing). This might change in the future though.

2 Likes

that should be easy enough to make work with pre-turing cards, even if it means using a script to load/unload the modules when starting/stopping nvidia-specific tasks.
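
something along these lines is what i have in mind (just a sketch, the wrapper name and flow are made up, and it only works if nothing like a running xorg is holding the modules):

    #!/bin/sh
    # hypothetical wrapper: load the nvidia modules around a task, then
    # unload them so the card can power down again. fails if anything
    # (e.g. an xorg with a gpu screen on nvidia) still holds the modules.
    set -e
    sudo modprobe nvidia_drm        # pulls in nvidia_modeset and nvidia
    "$@"                            # run the nvidia-specific task
    sudo modprobe -r nvidia_drm nvidia_modeset nvidia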

nice, looking it over now. :+1:

This happens because (allegedly, again) you run the X server on the nvidia driver.

They finally managed to allow it to accept more than a single vendor driver per server.
But I believe that you still cannot "pass the bucket" back and forth between the intel and nvidia drivers, and you can't unload the modules gracefully while the server is fixed on them.

It's all still super new though, and for all I know, yes, it may even be that a stupid script could do the magic.
After all, the folks over at bumblebee-project and optimus-manager have certainly seen worse ■■■■■

I pushed the needed packages to our unstable branch.


For render offload, I think a few pieces are needed:

  • Install xorg-server 1.20.5-2.1
  • Update to nvidia 435.17 packages
  • Don't put Option "PrimaryGPU" in the nvidia-drm OutputClass rule in /usr/share/X11/xorg.conf.d

That last one might be a bit controversial: what that flag does is force the X server to use a traditional display offload configuration when an NVIDIA GPU is present, but having that flag set gets in the way of using the NVIDIA GPU as a secondary GPU for render offload.

For render offload to actually work in the beta drivers, users need to set the "AllowNVIDIAGPUScreens" option. Nvidia recommends against enabling it by default for now, but plans to turn it on by default after some soak time in public drivers. See also here.
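
Roughly, checking those pieces looks like this (a sketch; `nvidia-utils` stands in for whatever driver package variant your install uses):

    # verify the pieces listed above are in place
    pacman -Q xorg-server nvidia-utils                  # expect 1.20.5-2.x and 435.17
    grep -r "PrimaryGPU" /usr/share/X11/xorg.conf.d/    # should return nothing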

11 Likes

Very good job, Manjaro! So glad you listen to the community!

Right now it's like this by default:

Section "OutputClass"
Identifier "nvidia"
MatchDriver "nvidia-drm"
Driver "nvidia"
Option "PrimaryGPU" "yes"
Option "AllowEmptyInitialConfiguration"
ModulePath "/usr/lib/nvidia/xorg"
ModulePath "/usr/lib/xorg/modules"
EndSection

So all I have to do is change PrimaryGPU to

Option "PrimaryGPU" "no"

and it will work?

Edit: I think adding this is also needed:

Option "AllowNVIDIAGPUScreens"

it's early days, but so far it functions ok with some caveats.

so far, due to the reasons in that post, the only way im able to power down the gpu is to kill xorg, unload the nvidia modules, and start xorg again on the intel gpu only; then the nvidia card powers down.
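
for anyone who wants to reproduce it, the sequence is roughly this (a sketch; this box runs lightdm, swap in your own display manager):

    sudo systemctl stop lightdm                          # kill xorg
    sudo modprobe -r nvidia_drm nvidia_modeset nvidia    # free the gpu (add nvidia_uvm if cuda was used)
    sudo systemctl start lightdm                         # xorg comes back on the intel gpu only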

i'll do some more playing around. can anyone else confirm or deny this behavior?

i'll say this though: this setup seems very forgiving. i've been throwing silly configurations at it that would certainly have ended in a black screen with previous versions. even with no xorg configuration at all, it still gets to the desktop.

This is really exciting. I didn't think this would actually happen. Will this be an easy transition if Prime is already installed?

1 Like

Today's nvidia and xorg updates are in, but I will just wait for a new guide on switching from bumblebee to PRIME.

1 Like

I think that's because they want Linus to hold his middle finger back.:metal:

3 Likes

so far, from what i can tell, you would just remove the prime xrandr script and the xorg configuration you created and it just works. at the very least, it's easier for newcomers to install the proprietary drivers and boot without having to deal with the bumblebee/bbswitch nonsense.
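
for example, on a typical manual prime setup that means deleting something like this (file names are whatever the guide you followed used; these are just placeholders):

    sudo rm /etc/X11/xorg.conf.d/optimus.conf    # the intel+nvidia xorg config
    sudo rm /etc/lightdm/display_setup.sh        # the script that ran:
    #   xrandr --setprovideroutputsource modesetting NVIDIA-0 && xrandr --auto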

im currently focused on finding a way to power down the nvidia gpu when it's not in use, which is easily done as long as the nvidia modules can be unloaded. right now im only able to do that with kill xorg --> unload modules --> start xorg. with the modules unloaded and power management set to "Auto", the nvidia card shuts off.
the turing cards are supposed to do this for you, if i understand correctly. pre-turing cards like mine may need some more intervention.
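
for reference, this is where that power state lives in sysfs (0000:02:00.0 is the 960M's bus id from my inxi output above; yours may differ):

    cat /sys/bus/pci/devices/0000:02:00.0/power/control            # "auto" or "on"
    echo auto | sudo tee /sys/bus/pci/devices/0000:02:00.0/power/control
    cat /sys/bus/pci/devices/0000:02:00.0/power/runtime_status     # "suspended" once it's off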

1 Like

can somebody share what /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf looks like after making this work? especially in an intel + nvidia configuration

i guess with this setup, the best possible outcome is that the nvidia gpu can go into a low power state but is never powered off completely. :disappointed:
https://devtalk.nvidia.com/default/topic/1061158/linux/render-offload-power-management-on-435-17-linux-drivers/post/5373881/?offset=3#5373936

here's where i see this as not so great:

  • only turing cards and newer will have built-in power management; the documentation says as much, so no surprise there.
  • the xorg process runs on the nvidia card and the modesetting driver does all the rendering unless an app is launched with the variable that specifies using the nvidia card. regardless, there is no way to power off the gpu without exiting xorg, unloading the modules, and starting xorg exclusively on the intel gpu.
  • modesetting is good for carrying the art, not creating it, and it shows quite a bit. :nauseated_face:
  • with PRIME (as in the old way), the nvidia did all the rendering and modesetting/intel only displayed it, so it wasn't relying on the crap iGPU to do any of the work and make a mess of it.
  • having the gpu powered on all the time, regardless of what power state it's in, will use a fair amount of power, all while modesetting is doing most of the work. so for everyday tasks that are not graphics intensive, you're paying the extra cost in battery life while getting the graphical performance of an etch-a-sketch. apps that are run on the nvidia, though, do seem to work perfectly fine.

it's early days though. i guess i'll focus more on getting the nvidia card to use as little power as possible rather than powering it off, at least on this install anyway. i'll be sticking to prime/optimus-switch on my primary for now.

2 Likes
