Enabling Coolbits on a GTX 1650 causes the network card to crash and the system to not boot

Following NVIDIA/Tips and tricks - ArchWiki, I enabled Coolbits; this created an xorg.conf file in /etc/X11. After a reboot, the boot hangs: it shows the network card crashing for some reason and keeps dumping the last line repeatedly until I force a reboot:

rtw_8822ce 0000:04:00.0: sta 04:33:89:19:2c:8c with macid 0 left
[ 5162.453798] ieee80211 phy0: Hardware restart was requested
[ 5162.669555] rtw_8822ce 0000:04:00.0: start vif d8:c0:a6:58:69:1b on port 0
[ 5162.886611] rtw_8822ce 0000:04:00.0: sta 04:33:89:19:2c:8c joined with macid 0
[ 5162.889840] rtw_8822ce 0000:04:00.0: failed to send h2c command

After some research, I read that Manjaro uses /etc/X11/mhwd.d/nvidia.conf instead. I tried copying the contents of xorg.conf into nvidia.conf, but I end up with the same error.
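For reference, this is roughly what the Coolbits entry looks like in the Device section of the config file (a minimal sketch only; the Identifier and the Coolbits value are examples, and 28 here would enable fan, clock, and voltage control bits, which may differ from what you want):

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    Option "Coolbits" "28"
EndSection

On a plain Arch-style setup this is typically generated with nvidia-xconfig (e.g. sudo nvidia-xconfig --cool-bits=28), which writes /etc/X11/xorg.conf; on Manjaro the same Option line would instead go into the Device section of /etc/X11/mhwd.d/nvidia.conf.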

Here are some specs that might be relevant.

System:
Host: PZ0-ASUS-TUF-FX505DT-HN657T Kernel: 5.13.5-1-MANJARO x86_64 bits: 64
compiler: gcc v: 11.1.0
parameters: BOOT_IMAGE=/boot/vmlinuz-5.13-x86_64 root=/dev/nvme0n1p1 rw
udev.log_priority=3
Desktop: Xfce 4.16.0 tk: Gtk 3.24.29 info: xfce4-panel wm: xfwm 4.16.1
vt: 7 dm: LightDM 1.30.0 Distro: Manjaro Linux base: Arch Linux

CPU:
Info: Quad Core model: AMD Ryzen 5 3550H with Radeon Vega Mobile Gfx
bits: 64 type: MT MCP arch: Zen family: 17 (23) model-id: 18 (24)
stepping: 1 microcode: 8108102 cache: L2: 2 MiB
flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 svm
bogomips: 33550
Speed: 2635 MHz min/max: 1400/2100 MHz boost: enabled Core speeds (MHz):
1: 2635 2: 2586 3: 1534 4: 1570 5: 2171 6: 2485 7: 1948 8: 1299
Vulnerabilities: Type: itlb_multihit status: Not affected
Type: l1tf status: Not affected
Type: mds status: Not affected
Type: meltdown status: Not affected
Type: spec_store_bypass
mitigation: Speculative Store Bypass disabled via prctl and seccomp
Type: spectre_v1
mitigation: usercopy/swapgs barriers and __user pointer sanitization
Type: spectre_v2 mitigation: Full AMD retpoline, IBPB: conditional, STIBP:
disabled, RSB filling
Type: srbds status: Not affected
Type: tsx_async_abort status: Not affected

Graphics:
Device-1: NVIDIA TU117M [GeForce GTX 1650 Mobile / Max-Q] vendor: ASUSTeK
driver: nvidia v: 470.57.02 alternate: nouveau,nvidia_drm bus-ID: 01:00.0
chip-ID: 10de:1f91 class-ID: 0300
Device-2: AMD Picasso vendor: ASUSTeK driver: amdgpu v: kernel
bus-ID: 05:00.0 chip-ID: 1002:15d8 class-ID: 0300
Device-3: IMC Networks USB2.0 HD UVC WebCam type: USB driver: uvcvideo
bus-ID: 3-1:2 chip-ID: 13d3:56a2 class-ID: 0e02 serial: 0x0001
Display: x11 server: X.Org 1.20.11 compositor: xfwm4 v: 4.16.1 driver:
loaded: amdgpu,ati,nvidia unloaded: modesetting,nouveau
alternate: fbdev,nv,vesa display-ID: :0.0 screens: 1
Screen-1: 0 s-res: 1920x1080 s-dpi: 96 s-size: 508x285mm (20.0x11.2")
s-diag: 582mm (22.9")
Monitor-1: eDP res: 1920x1080 hz: 144 dpi: 142 size: 344x193mm (13.5x7.6")
diag: 394mm (15.5")
OpenGL: renderer: AMD Radeon Vega 8 Graphics (RAVEN DRM 3.41.0
5.13.5-1-MANJARO LLVM 12.0.1)
v: 4.6 Mesa 21.1.5 direct render: Yes

Not sure if I am doing something wrong, missing a step, or have accidentally uncovered a bug.

Since it is a hybrid GPU (AMD and NVIDIA), only PRIME, optimus-switch (nvidia mode), and optimus-manager (nvidia mode) are capable of using Coolbits. It will not work with render offload.
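Once you are in nvidia mode and Coolbits is actually active on the X screen, clock offsets can be applied through nvidia-settings. A rough sketch, assuming the overclocking bit (8) is set and that the highest performance level on this card is index 3 (the index and the offset values here are only examples and vary per GPU):

nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=50"
nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=200"

If Coolbits is not active for the GPU that X is driving, these attributes simply won't be assignable.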

Geez, I didn’t know that. Are there any alternative ways to overclock the GPU?

Even with GreenWithEnvy (the gwe package) you will still need Coolbits to be enabled. I'm not sure how that works on a hybrid GPU, but if you use optimus-switch or optimus-manager it should work, and you can set up some custom profiles.
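If you want to try it, a quick sketch of getting GreenWithEnvy installed, assuming gwe is available in your repositories (it is packaged for Arch; the Flatpak ID below is its Flathub package):

sudo pacman -S gwe
# or, via Flatpak:
flatpak install flathub com.leinardi.gwe

The overclocking controls in its UI will stay greyed out unless Coolbits is active for the NVIDIA GPU.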
