Nvidia render offloading

this is my current setup when using prime:
https://github.com/dglt1/optimus-switch/blob/master/switch/nvidia/nvidia-xorg.conf

with prime, you don't set the AccelMethod because the nvidia GPU is doing all the work and the intel GPU is only used to display it. with render offload it's the other way around: compositing is done on the intel GPU via the modesetting driver, and the nvidia GPU only renders what you offload to it.
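For reference, here is roughly what the render offload side of that difference looks like in xorg.conf terms. This is a minimal sketch based on the NVIDIA 435.17 README, not the exact file from the link above, and the Identifier names are just placeholders:

Section "ServerLayout"
    Identifier "layout"
    Option "AllowNVIDIAGPUScreens"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
EndSection

The intel GPU stays the primary screen on the modesetting driver, and the nvidia GPU is only brought in as a GPU screen for applications you explicitly offload.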

both options use both the nvidia and modesetting drivers, just in different ways. here are some comparison outputs from prime and render offload on my laptop:

PRIME setup: https://pastebin.com/e4SU9A62

Render Offload: https://pastebin.com/ECs23UjM
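If you want to gather comparable output on your own machine, commands along these lines are a reasonable starting point (this is an assumption about what's in those pastes, not a quote from them):

glxinfo | grep "OpenGL renderer"
xrandr --listproviders
inxi -G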

but to answer your question, no changes are needed to continue using prime.


Very good, I appreciate the comparison outputs also.

Hmm, with this config, my external monitor (connected over DisplayPort over USB-C) doesn't work, which isn't surprising if I'm on the Intel card. Wondering if I can tweak something to make it work with this config. I don't know much about X.org config and am not sure what to learn.

...oh. Seems that the env vars don't even work, so perhaps I don't have the correct packages (sorry for the fish):

 ~  glxinfo | grep vendor                                                                                                                          14:39:28
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: Intel Open Source Technology Center
 ~  env __NV_PRIME_RENDER_OFFLOAD=1 glxinfo | grep vendor                                                                                          14:39:39
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: Intel Open Source Technology Center
 ~  env __NV_PRIME_RENDER_OFFLOAD=1 glxinfo | grep vendor                                                                                          14:40:16
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: Intel Open Source Technology Center
 ~  env __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor                                                         14:40:19
X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  152 (GLX)
  Minor opcode of failed request:  24 (X_GLXCreateNewContext)
  Value in failed request:  0x0
  Serial number of failed request:  39
  Current serial number in output stream:  40

I'm on the testing branch, which seems to have gotten nvidia-435.17-2 and xorg-server-1.20.5-2, but perhaps I have to wait a bit longer. I will switch back to optimus-manager for now and see what they wind up implementing (or if a new mhwd-nvidia config comes along that does this for me). I'm willing to test something else if it would help, though. At least I can still get into KDE, even if on the wrong screen...
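A quick way to check which versions actually landed, assuming the standard Manjaro package names:

pacman -Q xorg-server
pacman -Qs nvidia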

it's likely due to a conflicting configuration, or a conflict with optimus-manager.

you need to disable optimus-manager and run its cleanup script:

systemctl disable optimus-manager --now
optimus-manager --cleanup

and then uninstall it. also uninstall bumblebee via mhwd if it's installed. for this setup to work, you should have only video-nvidia installed.
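The removal and install steps look roughly like this; the exact bumblebee config name varies per system, so check the list first:

mhwd -li                                                # list installed mhwd configs
sudo mhwd -r pci video-hybrid-intel-nvidia-bumblebee    # name may differ; see mhwd -li
sudo mhwd -i pci video-nvidia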

if any configurations are left behind in these directories, they will conflict with the render offload configuration (a quick check is sketched after the list):
/etc/X11/xorg.conf.d
/etc/modprobe.d/
/etc/modules-load.d/
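A quick way to spot leftovers in those directories (a sketch; the grep pattern is just a guess at the usual file contents):

grep -ril 'nvidia\|optimus\|bbswitch' /etc/X11/xorg.conf.d /etc/modprobe.d /etc/modules-load.d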

Much appreciation for the fast reply. I had not run optimus-manager's cleanup (I knew about it, but I was following the OP strictly). Will do that, and I will switch to video-nvidia, then try again.

bumblebeed is already disabled, so that shouldn't be interfering in any case, but I will get it uninstalled explicitly.


you should still get rid of it; even with it disabled, the bbswitch packages remain installed and they can be problematic. uninstalling bumblebee and installing video-nvidia is a good idea
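Two quick sanity checks that bbswitch is really gone after the uninstall:

lsmod | grep bbswitch      # should print nothing
pacman -Qs bbswitch        # should list no packages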

Yep, I removed the MHWD config with mhwd -r pci <long bumblebee package name> and saw it get removed. Then I installed video-nvidia. I'm getting closer:

 ~  env __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor                 15:06:24
server glx vendor string: NVIDIA Corporation
client glx vendor string: NVIDIA Corporation
OpenGL vendor string: NVIDIA Corporation
 ~  glxinfo | grep vendor                                                                          128ms  15:06:30
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: Intel Open Source Technology Center

I'm still not sure how to tell it to use the dGPU to power my external monitor. It seems I have offloading configured, but the monitor's still black.

(As an aside, looks like Turing cards really can be turned off, if I am reading this output correctly:

 ~  nvidia-smi                                                                                     125ms  15:08:43
Sun Aug 18 15:08:44 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 435.17       Driver Version: 435.17       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   48C    P8     3W /  N/A |     17MiB /  7982MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1006      G   /usr/lib/Xorg                                 15MiB |
+-----------------------------------------------------------------------------+

)
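For anyone following along: once glxinfo reports NVIDIA with those two variables set, individual applications are offloaded the same way. The Vulkan variable below also comes from the NVIDIA 435.17 README and applies to Vulkan apps (vkcube is from vulkan-tools):

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxgears
__NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only vkcube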

Uninstall all mhwd configs first (via mhwd, of course). Then install only linux52-nvidia (where 52 is the version of your current kernel). You might also want to install xf86-video-intel. Also make sure to disable nouveau, because it often interferes with the reboot process. You can disable it completely by adding modprobe.blacklist=nouveau to the bootloader's kernel options, disable only its modesetting with nouveau.modeset=0, or use nouveau.noaccel=1 (the most non-intrusive way). Check the MODULES section of /etc/mkinitcpio.conf for the presence of i915. Regenerate the initramfs with sudo mkinitcpio -p linux52 (where 52 is your linux version).
Now you are ready to copy-paste the OP's xorg conf and reboot.
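For reference, the mechanical side of those steps looks roughly like this on Manjaro; the linux52 suffix is an example, match your kernel:

# in /etc/default/grub, extend the kernel options, e.g.:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet modprobe.blacklist=nouveau"
sudo update-grub

# in /etc/mkinitcpio.conf, make sure i915 is listed:
# MODULES=(i915)
sudo mkinitcpio -p linux52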

ATTENTION! CRITICAL ERROR! YOUR PC IS GOING TO BURST!
I'm kidding, just REMOVE IT.

no, don't remove it. mhwd does all that blacklisting for you; besides, he has it working

OK, did this.

Added i915 to MODULES section in /etc/mkinitcpio.conf.

Added modprobe.blacklist=nouveau to GRUB_CMDLINE_LINUX_DEFAULT as described on the Arch wiki page on setting kernel parameters. Regenerated the initramfs (it built fine).

Attempting reboot now.

Yeah, that was partially a joke. But since we're early adopters, we should first set this thing up properly without pre-configured Manjaro settings, which may be the culprit if something goes wrong.

you're reading it correctly, it's just not providing the right info. mine also displays 0% when sitting idle, and it's a 960M

looks like render offload is working like it should. if you can't get the external monitor working, you should start a separate thread about it so this one isn't hijacked any more than it already is (I'm guilty of this as well)

@openminded OK, gonna break this out into a separate thread then.


add not being able to use "CoolBits" at all to the list of render offload limitations. tried, failed, then read this:
https://devtalk.nvidia.com/default/topic/1061165/cannot-enable-coolbits-with-prime-render-offload/

Hi, I followed these instructions and got nvidia render offloading working, along with this guide for power management:
http://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/dynamicpowermanagement.html
I'm using Turing and Coffee Lake, but the nvidia GPU still seems to keep running without ever turning off automatically.
Also, sometimes my screen won't come back on, or I get stuck at the login screen after closing and reopening my laptop, and I have to force a shutdown. Has anyone had the same issue?

that's because this render offload "feature" does not allow you to power off the nvidia GPU, since the xorg process itself runs on the nvidia card. check nvidia-smi and you'll see the xorg process. turing GPUs have better power saving ability, which is why this render offload setup is meant to be used with turing GPUs. if you want to be able to power down the card when you don't need it, you can use optimus-switch or optimus-manager, or even fumblebee if you can get it to work, but i don't recommend it.
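To see what the card is actually doing power-wise, the dynamic power management README linked above documents a status file under /proc; the bus id here is taken from the nvidia-smi output earlier in the thread:

cat /proc/driver/nvidia/gpus/0000:01:00.0/power
cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status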

According to this answer, PRIME Render Offload shouldn't be working right. Does xorg-server-git support this nvidia feature?

render offload does work; the needed drivers/patches have been set up on manjaro for a few weeks now.
