Call for testing: optimus-switch

  • ok, so over the past few days I've been working on an alternative to bumblebee or optimus-manager for those of us who have Optimus-enabled laptops.

  • my initial goal was just to write a small bash script that would let me easily jump back and forth between the PRIME setup I have thanks to @jonathon 's tutorial and an intel-only setup that completely powers down the nvidia GPU in a way that doesn't cause lockups or break sleep/suspend cycles. that gives me the very best performance when I want it and the very best battery life when it's needed (not often in my case, but whatever, why not?).

  • yes, I'm aware optimus-manager does this already (I think — is the GPU low power, or no power, when using intel?) but I was never able to get it working properly: scaling would be off, lockups between switching, among other things. bumblebee is installed by default with the non-free drivers and works well for most people, but in my case, among others', not so much.

  • I did not make this because the other options were not good enough; I made it because I felt like it at the time, and it was a great learning experience. this is now how I currently have 3 separate installs set up (xfce/openbox, kde, gnome), and they are all working great.

after setup, all that's needed to switch default modes is:

  • sudo (for intel/nvidia PRIME)

  • sudo (for intel only mode with nvidia gpu disabled, removed from sight)

  • When in intel/nvidia (PRIME) mode it is set up the same way as @jonathon 's PRIME tutorial, and allows for the best possible performance on an Optimus laptop running linux — AFAIK anyway.

  • When in intel-only mode it works as it should on a non-Optimus laptop, saving a decent amount of power and extending battery life for the times you're away from AC power. the nvidia GPU is disabled/powered down and removed from sight 8 seconds after startup; this is implemented using a systemd service unit in combination with a bash script, since it has to happen as root. LightDM and SDDM do not require a service to do this — the display setup script is capable of doing it without one. GDM, however, does need the service to run a script as root AFAIK, since .desktop files would not allow for it. additionally, the intel driver is used by default, but I also include the configuration file to use the modesetting driver instead if you so choose; it requires editing only one small line, and this can be done at any time — it does not need to be done before installing, but you can if you'd like.
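To make the service-plus-script mechanism concrete, here is a minimal dry-run sketch of what such a pair could look like. All file names (nvidia-off.sh, nvidia-off.service), the ACPI method string, and the PREFIX variable are illustrative assumptions, not the actual optimus-switch files; PREFIX defaults to a temp dir so this stages files safely instead of writing to /etc.

```shell
#!/usr/bin/env bash
# Sketch only: stage a systemd oneshot unit plus the root script it runs.
# A real install would use PREFIX=/ (as root); names are illustrative.
PREFIX="${PREFIX:-$(mktemp -d)}"
mkdir -p "$PREFIX/etc/systemd/system" "$PREFIX/usr/local/bin"

cat > "$PREFIX/usr/local/bin/nvidia-off.sh" <<'EOF'
#!/bin/bash
# wait for boot to settle, then power the dGPU down
# (the method varies by hardware; acpi_call shown as one example)
sleep 8
echo '\_SB.PCI0.PEG0.PEGP._OFF' > /proc/acpi/call
EOF
chmod +x "$PREFIX/usr/local/bin/nvidia-off.sh"

cat > "$PREFIX/etc/systemd/system/nvidia-off.service" <<'EOF'
[Unit]
Description=Power down the NVIDIA dGPU (intel-only mode)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nvidia-off.sh

[Install]
WantedBy=multi-user.target
EOF

echo "staged under $PREFIX"
```

On a real system the unit would then be enabled with `systemctl enable nvidia-off.service`; a oneshot unit fits here because the script runs once per boot and exits.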

please follow the linked github pages for instructions; they will be more up to date and accurate.

@linesma was my first tester besides myself; it's set up and running well, and switching works without a hitch every time.

  • these commands do not need to be run before each reboot, only when you want to change the default boot mode. yes, changing modes requires a reboot, but if you're using your laptop for gaming most of the time anyway, you're going to be using PRIME for best performance.

  • and if your needs involve travel, or anything that requires the very best battery life, then you'll likely be using intel-only mode most of the time. when this mode is set, the nvidia dGPU is completely powered down and removed from sight. mhwd, inxi, and lspci won't even see it.

  • the whole point of this setup is that the user can quickly and easily switch modes based on personal needs, as the user sees fit. I'm hoping some others may feel this is a tool they need, and possibly provide some actual comparative data on power usage in each mode. I've seen many opinions on the difference in battery life but not much real data, so that would be great.

Thanks for reading, the github repo links are below. please feel free to ask any questions or make suggestions. again, Thanks. @dglt

for GDM:

for LightDM:

for SDDM:

Update: the LightDM install script is finished and ready to go; instructions are on the linked github repo. please do let me know how it works out. thanks.

update: the SDDM and GDM installers are now ready. all three optimus-switch variants are ready to go, with updated install scripts and updated instructions on the github pages.

and special thanks to @vetzki and @petsam for their input on a few things I would have struggled to fully understand, @jonathon for a great, easy to follow tutorial, and @tbg for his help/input in the past on writing systemd service units. thanks.


This is a #manjaro-development thread. Please keep discussion limited to the topic. I will be culling off-topic posts with great prejudice.


This script works as advertised. I have had no problems switching between the two modes using it. To test the stability and reliability of the switch, I have changed graphics modes a number of times. Each time that I have executed the change, the graphics card in use was stable.

System Information:

inxi -Fxxz
  Host: FX504D-Manjaro Kernel: 4.19.20-1-MANJARO x86_64 bits: 64 
  compiler: gcc v: 8.2.1 Desktop: Gnome 3.30.2 wm: gnome-shell dm: GDM 
  Distro: Manjaro Linux 
  Type: Laptop System: ASUSTeK product: TUF GAMING FX504GD_FX80GD v: 1.0 
  serial: <filter> 
  Mobo: ASUSTeK model: FX504GD v: 1.0 serial: <filter> 
  UEFI: American Megatrends v: FX504GD.312 date: 07/13/2018 
  ID-1: BAT1 charge: 45.1 Wh condition: 45.1/48.1 Wh (94%) volts: 4.0/11.7 
  model: ASUS A32-K55 serial: <filter> status: Full 
  Topology: 6-Core model: Intel Core i7-8750H bits: 64 type: MT MCP 
  arch: Kaby Lake rev: A L2 cache: 9216 KiB 
  flags: lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 53004 
  Speed: 900 MHz min/max: 800/4100 MHz Core speeds (MHz): 1: 899 2: 900 
  3: 900 4: 900 5: 900 6: 901 7: 901 8: 900 9: 900 10: 900 11: 901 12: 901 
  Device-1: Intel UHD Graphics 630 vendor: ASUSTeK driver: i915 v: kernel 
  bus ID: 00:02.0 chip ID: 8086:3e9b 
  Display: x11 server: X.Org 1.20.3 driver: modesetting,nvidia 
  compositor: gnome-shell resolution: 1920x1080~60Hz 
  OpenGL: renderer: Mesa DRI Intel UHD Graphics 630 (Coffeelake 3x8 GT2) 
  v: 4.5 Mesa 18.3.2 compat-v: 3.0 direct render: Yes 
  Device-1: Intel Cannon Lake PCH cAVS vendor: ASUSTeK driver: snd_hda_intel 
  v: kernel bus ID: 00:1f.3 chip ID: 8086:a348 
  Sound Server: ALSA v: k4.19.20-1-MANJARO 
  Device-1: Intel Wireless-AC 9560 [Jefferson Peak] driver: iwlwifi 
  v: kernel port: 5000 bus ID: 00:14.3 chip ID: 8086:a370 
  IF: wlo1 state: up mac: <filter> 
  Device-2: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet 
  vendor: ASUSTeK driver: r8169 v: kernel port: 3000 bus ID: 03:00.0 
  chip ID: 10ec:8168 
  IF: enp3s0 state: down mac: <filter> 
  Local Storage: total: 1.14 TiB used: 338.09 GiB (29.0%) 
  ID-1: /dev/nvme0n1 vendor: Samsung model: SSD 970 EVO 250GB 
  size: 232.89 GiB speed: 31.6 Gb/s lanes: 4 serial: <filter> 
  ID-2: /dev/sda vendor: HGST (Hitachi) model: HTS541010B7E610 
  size: 931.51 GiB speed: 6.0 Gb/s serial: <filter> 
  ID-1: / size: 227.74 GiB used: 27.05 GiB (11.9%) fs: ext4 
  dev: /dev/nvme0n1p2 
  System Temperatures: cpu: 57.0 C mobo: 27.8 C 
  Fan Speeds (RPM): cpu: 0 
  Processes: 311 Uptime: 16m Memory: 15.52 GiB used: 1.42 GiB (9.1%) 
  Init: systemd v: 239 Compilers: gcc: 8.2.1 Shell: bash v: 5.0.0 
  running in: gnome-terminal inxi: 3.0.30
inxi -Gxxxz for Intel Mode
  Device-1: Intel UHD Graphics 630 vendor: ASUSTeK driver: i915 v: kernel 
  bus ID: 00:02.0 chip ID: 8086:3e9b 
  Display: x11 server: X.Org 1.20.3 driver: modesetting,nvidia 
  compositor: gnome-shell v: 3.30.2 resolution: 1920x1080~60Hz 
  OpenGL: renderer: Mesa DRI Intel UHD Graphics 630 (Coffeelake 3x8 GT2) 
  v: 4.5 Mesa 18.3.2 compat-v: 3.0 direct render: Yes
inxi -Gxxxz for nVidia Mode
  Device-1: Intel UHD Graphics 630 vendor: ASUSTeK driver: i915 v: kernel 
  bus ID: 00:02.0 chip ID: 8086:3e9b 
  Device-2: NVIDIA GP107M [GeForce GTX 1050 Mobile] vendor: ASUSTeK 
  driver: nvidia v: 415.27 bus ID: 01:00.0 chip ID: 10de:1c8d 
  Display: x11 server: X.Org 1.20.3 driver: modesetting,nvidia 
  compositor: gnome-shell v: 3.30.2 resolution: 1920x1080~60Hz 
  OpenGL: renderer: GeForce GTX 1050/PCIe/SSE2 v: 4.6.0 NVIDIA 415.27 
  direct render: Yes

To test how the script worked and whether it would reliably switch modes, I ran some tests. While I know they are not scientific in nature, and are hardware dependent, I tried to pick tests that would cover a broad spectrum of use cases.

  1. Played back a 1080p video in VLC, Kodi, and MPV. The source video was MPEG-2 and was later encoded with the following codecs: MPEG-4, x264, and x265 (HEVC), using HandBrake. I did not do a DivX/Xvid encode because that codec is no longer in general use. Playback of each encoded file was done multiple times in each media player, in both the nVidia and Intel-only modes. Playback was smooth in both modes. As expected, there was some slight screen tearing in VLC with the x265-encoded video in both modes. Kodi and MPV played back all the videos without any issues.

  2. Ran the Rise of the Tomb Raider in-game benchmark at 1080p using the low preset, 4 times in each mode, alternating modes for each run (i.e. nVidia-Intel-nVidia-Intel…). Since I was going for stability and not frames per second (fps), I left vsync turned on. This limited the fps to my laptop’s screen refresh rate of 60 Hz, i.e. 60 fps. In Intel mode it averaged 20 fps with dips into the single digits. In nVidia mode the average was a stable 60 fps. For completeness’ sake, I then ran the same benchmarks again at 1080p using the high preset. As expected, in Intel mode I was averaging 4 fps, and in nVidia mode I was getting 60 fps but with dips into the low 50s and upper 40s.

  3. Operated the laptop on battery for an hour in each mode, making sure the battery was fully charged between mode switches. While I know this test is subjective, and I did not drain the battery fully, I was mainly testing that the nVidia card was truly unpowered when running in Intel mode. I did my best to do the same tasks in both modes. After an hour of usage in nVidia mode, my battery was at about 40%. In Intel mode, after an hour the battery was at 60%. This tells me that the nVidia card really is turned off in Intel mode, confirming the results of inxi -Gxxxz.

Overall, I have to say that I am pleased with how this script works. The ability to reliably switch back and forth between the graphics options as needed is awesome.

Thank you @dglt for letting me help with testing on Gnome. I really enjoyed it and learned even more about Linux in the process.


thanks for taking the time to test all of that, I'm glad you're happy with it.


How did you apply vsync?

(did you use nvidia-drm.modeset = 1 module option or something different?)

Yes, I used the nvidia-drm.modeset=1 option. It is set automatically by the optimus-switch script for nVidia; nVidia-modprobe.conf sets this value to one. To my limited knowledge, with a Prime setup, that is the only way to enable vSync at the driver level. I also had the vSync option enabled in the graphics settings of Rise of the Tomb Raider.
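For reference, the KMS setting being discussed boils down to a single modprobe directive; a drop-in file in /etc/modprobe.d/ (the exact file name optimus-switch uses may differ) would contain:

```
options nvidia-drm modeset=1
```

With that in place, the nvidia-drm module is loaded with kernel mode setting enabled on the next boot, which is what makes tear-free (vsynced) output possible on a PRIME setup.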

On a side note: I find that I leave the laptop mostly in Intel mode. For my daily usage needs, Intel mode works great. When I need some more horsepower, I move over to nVidia mode.


you could try, if it helps (for the x265 video), running without vsync (it probably won't help). You can disable vsync temporarily for one application using an “application profile” in nvidia-settings (or try one of the other settings).


yes, nvidia-drm.modeset=1 is enabled, along with the various blacklists and other configuration needed for nvidia to function properly.
when is run, the intel configuration is removed and the needed nvidia configurations (xorg.conf.d, mhwd.d, modprobe.d, modules-load.d) get copied to the needed directories. for example:

cp /etc/switch/nvidia/nvidia-modprobe.conf /etc/modprobe.d/99-nvidia.conf

when using nvidia, all the .conf files get copied/renamed to 99-nvidia.conf; on intel they will all be 99-intel.conf.
so if you have any existing custom configurations for intel or nvidia that you would rather use instead of the ones provided, you would just:

cp /path/to/custom/xorg.conf.d/file.conf  /etc/switch/nvidia/nvidia-xorg.conf

and whenever is run, that will be the configuration used. or you could just edit the included .conf files located in those directories to do the same thing, and they will be used whenever the switch is made.
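The copy/rename step described above can be sketched roughly as follows. This is an illustration, not the actual optimus-switch code: the stored-config layout under /etc/switch/ follows the post, but the staging of example files and the ROOT variable (defaulting to a temp dir so it can be dry-run safely) are assumptions.

```shell
#!/usr/bin/env bash
# Sketch: copy every stored .conf for the chosen mode into place,
# uniformly renamed to 99-<mode>.conf. Use ROOT=/ (as root) for real.
set -e
ROOT="${ROOT:-$(mktemp -d)}"
mode="${1:-nvidia}"            # "nvidia" or "intel"

mkdir -p "$ROOT/etc/modprobe.d" "$ROOT/etc/X11/xorg.conf.d"

# stand-ins for the configs stored under /etc/switch/<mode>/
mkdir -p "$ROOT/etc/switch/$mode"
echo "# $mode modprobe rules" > "$ROOT/etc/switch/$mode/$mode-modprobe.conf"
echo "# $mode xorg config"    > "$ROOT/etc/switch/$mode/$mode-xorg.conf"

# the switch: each stored file lands under a uniform 99-<mode>.conf name
cp "$ROOT/etc/switch/$mode/$mode-modprobe.conf" "$ROOT/etc/modprobe.d/99-$mode.conf"
cp "$ROOT/etc/switch/$mode/$mode-xorg.conf"     "$ROOT/etc/X11/xorg.conf.d/99-$mode.conf"
echo "switched default mode to: $mode"
```

The uniform 99-* naming is what makes the scheme reversible: switching modes only ever has to remove and replace files matching one known pattern.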


note: the gdm and lightdm installers are ready to go; instructions are on the linked git pages. I should have the sddm installer ready in an hour or two, after I test it.

in addition, in the past I noticed that disabling the nvidia gpu with acpi_call or bbswitch would sometimes cause lockups when trying to sleep/suspend/reboot/shutdown. I found that switching the nvidia gpu’s power “mode” from “on” to “auto” right before disabling it prevents any of this behavior, so I included that in the script that disables the nvidia gpu — this is done for you.

by “auto” I don't mean the PowerMizer mode in nvidia-settings; I'm referring to the power management setting in sysfs.
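A minimal sketch of that pre-poweroff step, assuming the usual dGPU PCI address (01:00.0 — check yours with `lspci`) and acpi_call as the disable mechanism; the ACPI method string is hardware-specific and shown only as an example:

```shell
#!/usr/bin/env bash
# Sketch: flip the dGPU's runtime power-management control to "auto"
# in sysfs before powering it off, to avoid sleep/shutdown lockups.
gpu_pm_auto() {
  local ctl="$1"   # e.g. /sys/bus/pci/devices/0000:01:00.0/power/control
  echo auto > "$ctl"
}

# on a real system (as root), roughly:
#   gpu_pm_auto /sys/bus/pci/devices/0000:01:00.0/power/control
#   echo '\_SB.PCI0.PEG0.PEGP._OFF' > /proc/acpi/call
```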


Unfortunately, turning off vSync will actually make the tearing worse. I am not sure about video, but with gaming — in a nontechnical explanation — screen tearing is caused by the graphics card putting out more frames per second (fps) than the monitor’s refresh rate can handle. vSync essentially limits the GPU to outputting the same number of fps as the monitor’s refresh rate. Examples:

60 Hz monitor = 60 fps
120 Hz monitor = 120 fps
144 Hz monitor = 144 fps

Your fps can go below your monitor’s refresh rate without issues, as long as the frame timing remains consistent. Frame timing is not a simple topic and is outside the scope of your question. Here is a good article that goes into extreme detail if you are interested, link

The problem with VLC and x265 video is the rendering engine VLC uses. It has issues rendering the video properly depending on the profile that was used to encode it. This can be mitigated somewhat by changing the “Hardware-Accelerated Decoding” option from “Automatic” to VDPAU if you are using an nVidia card. It is something they have been working on for a while now; support for H.265 in VLC has gotten better, but there are still improvements to be made. For the most part, I use MPV for random video playback and Kodi for videos that I want to keep in my library.


update: the SDDM installer is now ready. all three optimus-switch variants are ready to go, with updated install scripts and updated instructions on the github pages. (links are in the original post)

there was an issue with the installer that has been fixed. when the script was run with sudo, as it is required to be, the $USER variable produced “root” instead of the user name, so when the script tried to copy the contents over to /etc/switch/ it was trying to copy the files like this:
cp /home/$USER/optimus-switch/* /etc/

/home/root/optimus-switch/* <-- directory does not exist…

which led to “cannot stat” errors. anyhow, it is now fixed and all instructions are updated.


I am currently set up to run Prime but want to try and test this, if for no other reason than to give feedback for all the help I got. I have Nvidia installed. Do I still need to do the following?: * sudo pacman -S linux-headers acpi_call-dkms xf86-video-intel git

  • sudo modprobe acpi_call

if you are already using bumblebee, or a prime setup following jonathon's prime tutorial, the install script will automatically remove those configuration files and replace them with essentially the same ones, but with the filenames needed for each script to work properly.

if you're already using prime, you can skip installing video-nvidia, but if you don't have any of those listed packages installed, you will need them.

  • acpi_call-dkms (or the acpi_call package for each of your installed kernels; dkms is just easier)

  • xf86-video-intel is needed for the intel drivers used in intel-only mode. alternatively you can set it to use the modesetting driver instead; instructions are on the git page. in my case I found modesetting to cause tearing depending on which desktop I was using, and the intel drivers to work on all of them, so I made the intel drivers the default but included configuration to use modesetting as well. it requires editing one line in /usr/local/bin/ to change intel-xorg.conf to modeset-xorg.conf, and it will use modesetting the next time sudo is run.

  • linux-headers are needed for many things, and a lot of people already have them installed for other reasons; in this case they're needed for dkms.

  • git, for git clone.

sudo modprobe acpi_call may be unnecessary, but I'm not sure — it may be loaded by default during install. again, I'm not certain, so better to be safe and modprobe it as instructed.
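For what it's worth, the standard way to guarantee a module is loaded on every boot (rather than modprobing it by hand) is a modules-load.d drop-in; the file name below is illustrative, and optimus-switch's own modules-load.d configs may already cover this:

```
# /etc/modules-load.d/acpi_call.conf
acpi_call
```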

thank you, I appreciate your willingness to help test this; I think you'll find it works really well. until a few days ago when I made this, I used only prime because it just works. at the same time, why not have a quick, easy way to switch to intel-only and get the battery-saving benefit of your nvidia gpu being disabled when you want to prioritize battery life while off AC power.


what makes more sense?

what it does now: during install it includes an nvidia configuration that gets put in /etc/X11/mhwd.d/ when is run?

having nvidia-xconfig run during install and set the output to the included nvidia config, like this?
sudo nvidia-xconfig --xconfig=/etc/switch/nvidia/nvidia-mhwd.conf
that way it's configured to match each user?

or instruct the user to do this when changes are needed to the xconfig? the directory mentioned is where the nvidia-mode configurations are stored, and transferred from, when switching.

sudo nvidia-xconfig <desired options> --xconfig=/etc/switch/nvidia/nvidia-mhwd.conf

or leave it the way it is and instruct the user to replace/edit the configurations stored in /etc/switch/nvidia/* with the desired changes, so they will be properly applied at every switch? (I already instruct this in the readme, and also in ##commented-out lines in certain .conf files)

your opinions would be appreciated, thanks.

Yes. Each user/HW needs a native conf, created live.
As I explained, with your current instructions/scripts, the (supposed) nvidia config (nvidia-mhwd.conf) is not active, because it is NOT seen by Xorg.
To be properly read/parsed, it needs to either be moved into /etc/X11/xorg.conf.d/ (the simplest solution) or symlinked into /etc/X11/xorg.conf.d/ (the way mhwd does it).
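A dry-run sketch of the symlink route (file names assumed; the 90-nvidia.conf target name is illustrative). ROOT defaults to a temp dir so this can be tried safely; a real setup would use ROOT=/ as root:

```shell
#!/usr/bin/env bash
# Sketch: make the stored nvidia conf visible to Xorg by symlinking it
# into xorg.conf.d, the way mhwd does, instead of copying to mhwd.d.
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/etc/switch/nvidia" "$ROOT/etc/X11/xorg.conf.d"
: > "$ROOT/etc/switch/nvidia/nvidia-mhwd.conf"   # stand-in for the stored conf
ln -sf "$ROOT/etc/switch/nvidia/nvidia-mhwd.conf" \
       "$ROOT/etc/X11/xorg.conf.d/90-nvidia.conf"
```

The advantage of the symlink over a copy is that edits to the stored conf under /etc/switch/nvidia/ take effect without re-running the switch.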

it does — when switched to nvidia prime it places the needed files in the necessary directories; that all works fine. I was asking whether I should let nvidia-xconfig create the file to be used, or instruct the user to make their own and overwrite /etc/switch/nvidia/nvidia-mhwd.conf, because that's where changes need to be made for them to be persistent. if you had a custom xorg.conf you wanted to use instead of the default, you would just replace the nvidia-mhwd.conf file I include by default, and then it would be set each time you switched to prime mode.

the other reason I'm unsure is that using nvidia-xconfig has caused me issues in the past with my prime setup.

cp /etc/switch/nvidia/nvidia-mhwd.conf /etc/X11/mhwd.d/99-nvidia.conf

This is not the proper directory.

If used blindly, it writes the conf file to /etc/X11/xorg.conf (which is the last “user by-pass” file that Xorg reads). It does not precede the xorg.conf.d/*.conf confs, but it might interfere because it includes settings not present in those files.
The proposed action is to just “generate” a proper basic (starting-point) conf file and save it where you instruct it to. The user may add/edit this to match HW/SW needs. But that file must go in the proper folder (see the previous paragraphs).
And this is



ok, I think I get what you're saying: including an nvidia config for both /xorg.conf.d and /mhwd.d is not necessary — only the xorg.conf.d one is needed, and any additional options/settings made by the user could go in /mhwd.d if need be?
I wondered about this when I was making optimus-switch, because IIRC there is no mention of needing to create a new .conf in /mhwd.d in the prime tutorial.


This is great, thank you for your contribution. I’ll continue to test it and report back if I encounter any issues.


please let me know either way — whether there are issues or not — so I know how it goes. thanks for testing.

I hope your solution uses systemd :joy:

Now seriously… my laptop is set up with prime. I can be your lab rat if you want me to test your solution @dglt :+1: