How to remove the 80W power cap on the RTX 3060 laptop GPU

I use MangoHud to monitor my hardware, and I have been watching the readings for several days since I got this new machine, in all sorts of different scenarios.

Now I know that on this machine the RTX 3060 is rated for a maximum of 130W, but it seems to be capped at 80W. No matter which power mode I use, it always stays around 79-80W.

Any idea how I can get it to run at its full potential?
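
For reference, one way to see the limit the driver itself is enforcing (rather than what MangoHud overlays) would be to ask nvidia-smi directly; note that on some mobile cards these fields simply report N/A:

# power limits as reported by the driver
nvidia-smi -q -d POWER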

System:
  Kernel: 5.19.8-arch1-1 arch: x86_64 bits: 64 compiler: gcc v: 12.2.0
    parameters: BOOT_IMAGE=/vmlinuz-linux
    root=UUID=881825ad-972b-491a-b2ce-6d2722e5e85e rw rootfstype=ext4
    loglevel=3 ibt=off
  Desktop: GNOME v: 42.4 tk: GTK v: 3.24.34 wm: gnome-shell dm: GDM v: 42.0
    Distro: Arch Linux
Machine:
  Type: Laptop System: LENOVO product: 82JM v: Legion 5 17ITH6H
    serial: <superuser required> Chassis: type: 10 v: Legion 5 17ITH6H
    serial: <superuser required>
  Mobo: LENOVO model: LNVNB161216 v: NO DPK serial: <superuser required>
    UEFI: LENOVO v: H1CN49WW date: 08/16/2022
Battery:
  ID-1: BAT0 charge: 83.5 Wh (100.0%) condition: 83.5/80.0 Wh (104.4%)
    volts: 17.5 min: 15.4 model: Celxpert L20C4PC2 type: Li-poly
    serial: <filter> status: full
CPU:
  Info: model: 11th Gen Intel Core i7-11800H bits: 64 type: MT MCP
    arch: Tiger Lake gen: core 11 level: v4 built: 2020 process: Intel 10nm
    family: 6 model-id: 0x8D (141) stepping: 1 microcode: 0x40
  Topology: cpus: 1x cores: 8 tpc: 2 threads: 16 smt: enabled cache:
    L1: 640 KiB desc: d-8x48 KiB; i-8x32 KiB L2: 10 MiB desc: 8x1.2 MiB
    L3: 24 MiB desc: 1x24 MiB
  Speed (MHz): avg: 2345 high: 3696 min/max: 800/4600 scaling:
    driver: intel_pstate governor: performance cores: 1: 2300 2: 2300 3: 2300
    4: 2300 5: 1639 6: 2300 7: 2300 8: 2300 9: 2300 10: 2300 11: 2300
    12: 2300 13: 2300 14: 2300 15: 3696 16: 2300 bogomips: 73744
  Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
  Vulnerabilities:
  Type: itlb_multihit status: Not affected
  Type: l1tf status: Not affected
  Type: mds status: Not affected
  Type: meltdown status: Not affected
  Type: mmio_stale_data status: Not affected
  Type: retbleed status: Not affected
  Type: spec_store_bypass mitigation: Speculative Store Bypass disabled via
    prctl
  Type: spectre_v1 mitigation: usercopy/swapgs barriers and __user pointer
    sanitization
  Type: spectre_v2 mitigation: Enhanced IBRS, IBPB: conditional, RSB
    filling, PBRSB-eIBRS: SW sequence
  Type: srbds status: Not affected
  Type: tsx_async_abort status: Not affected
Graphics:
  Device-1: NVIDIA GA106M [GeForce RTX 3060 Mobile / Max-Q] vendor: Lenovo
    driver: nvidia v: 515.65.01 non-free: 515.xx+ status: current (as of
    2022-08) arch: Ampere code: GAxxx process: TSMC n7 (7nm) built: 2020-22
    pcie: gen: 1 speed: 2.5 GT/s lanes: 16 link-max: gen: 4 speed: 16 GT/s
    bus-ID: 01:00.0 chip-ID: 10de:2560 class-ID: 0300
  Device-2: Syntek Integrated Camera type: USB driver: uvcvideo
    bus-ID: 3-6:2 chip-ID: 174f:2459 class-ID: fe01 serial: <filter>
  Display: x11 server: X.org v: 1.21.1.4 with: Xwayland v: 22.1.3
    compositor: gnome-shell driver: X: loaded: nvidia unloaded: modesetting
    alternate: fbdev,nouveau,nv,vesa gpu: nvidia display-ID: :1 screens: 1
  Screen-1: 0 s-res: 1920x1080 s-size: <missing: xdpyinfo>
  Monitor-1: DP-4 res: 1920x1080 hz: 144 dpi: 128
    size: 382x215mm (15.04x8.46") diag: 438mm (17.26") modes: N/A
  Message: Unable to show GL data. Required tool glxinfo missing.
Audio:
  Device-1: Intel Tiger Lake-H HD Audio vendor: Lenovo driver: snd_hda_intel
    v: kernel bus-ID: 00:1f.3 chip-ID: 8086:43c8 class-ID: 0403
  Device-2: NVIDIA GA106 High Definition Audio driver: snd_hda_intel
    v: kernel pcie: gen: 1 speed: 2.5 GT/s lanes: 16 link-max: gen: 4
    speed: 16 GT/s bus-ID: 01:00.1 chip-ID: 10de:228e class-ID: 0403
  Sound Server-1: ALSA v: k5.19.8-arch1-1 running: yes
  Sound Server-2: JACK v: 1.9.21 running: no
  Sound Server-3: PulseAudio v: 16.1 running: yes
  Sound Server-4: PipeWire v: 0.3.58 running: yes
Network:
  Device-1: Intel Tiger Lake PCH CNVi WiFi driver: iwlwifi v: kernel
    bus-ID: 00:14.3 chip-ID: 8086:43f0 class-ID: 0280
  IF: wlan0 state: down mac: <filter>
  Device-2: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet
    vendor: Lenovo driver: r8169 v: kernel pcie: gen: 1 speed: 2.5 GT/s
    lanes: 1 port: 3000 bus-ID: 58:00.0 chip-ID: 10ec:8168 class-ID: 0200
  IF: enp88s0 state: up speed: 1000 Mbps duplex: full mac: <filter>
Bluetooth:
  Device-1: Intel AX201 Bluetooth type: USB driver: btusb v: 0.8
    bus-ID: 3-14:4 chip-ID: 8087:0026 class-ID: e001
  Report: rfkill ID: hci0 rfk-id: 3 state: up address: see --recommends
Drives:
  Local Storage: total: 1.16 TiB used: 585.47 GiB (49.3%)
  SMART Message: Unable to run smartctl. Root privileges required.
  ID-1: /dev/nvme0n1 maj-min: 259:0 vendor: Samsung model: SSD 970 EVO Plus
    250GB size: 232.89 GiB block-size: physical: 512 B logical: 512 B
    speed: 31.6 Gb/s lanes: 4 type: SSD serial: <filter> rev: 2B2QEXM7
    temp: 35.9 C scheme: GPT
  ID-2: /dev/nvme1n1 maj-min: 259:3 vendor: SK Hynix model: HFS001TDE9X084N
    size: 953.87 GiB block-size: physical: 512 B logical: 512 B
    speed: 31.6 Gb/s lanes: 4 type: SSD serial: <filter> rev: 41010C22
    temp: 37.9 C scheme: GPT
Partition:
  ID-1: / raw-size: 232.38 GiB size: 227.68 GiB (97.97%) used: 160.42 GiB
    (70.5%) fs: ext4 dev: /dev/nvme0n1p2 maj-min: 259:2
  ID-2: /boot raw-size: 511 MiB size: 510 MiB (99.80%) used: 87.6 MiB
    (17.2%) fs: vfat dev: /dev/nvme0n1p1 maj-min: 259:1
Swap:
  Kernel: swappiness: 60 (default) cache-pressure: 100 (default)
  ID-1: swap-1 type: zram size: 4 GiB used: 0 KiB (0.0%) priority: 100
    dev: /dev/zram0
Sensors:
  System Temperatures: cpu: 39.0 C mobo: N/A
  Fan Speeds (RPM): N/A
Info:
  Processes: 324 Uptime: 8m wakeups: 11 Memory: 15.48 GiB used: 2.85 GiB
  (18.4%) Init: systemd v: 251 default: graphical tool: systemctl
  Compilers: gcc: 12.2.0 clang: 14.0.6 Packages: pm: pacman pkgs: 1241
  libs: 421 tools: gnome-software,pamac,yay pm: flatpak pkgs: 0 Shell: Bash
  v: 5.1.16 running-in: gnome-terminal inxi: 3.3.21

What’s the output of nvidia-smi?

//EDIT: I guess you’re bottlenecked by the CPU, which appears to be locked at 2.3GHz with the PERFORMANCE governor. Is the CPU never going higher than 2.3GHz?
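
If you want to double-check that while a game is running, the cpufreq sysfs interface shows the live per-core clocks; something along these lines should work on any recent kernel:

# per-core frequency in kHz, refreshed every second
watch -n1 "cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"
# governor currently in use per core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor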

No wonder: you are using the RTX 3060 Max-Q (RTX 3060 Mobile) variant made for laptops, and its power consumption is limited to a maximum of 80 watts, unlike the desktop RTX 3060.

A desktop GPU has more power and consumes more than a mobile GPU, but in a laptop that would drain the battery quickly.

RTX 3060 Max-Q < RTX 3060

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   64C    P3    25W /  N/A |   2948MiB /  6144MiB |     40%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1332      G   /usr/lib/Xorg                      44MiB |
|    0   N/A  N/A      9448    C+G   ...ine\game\client\eso64.exe     2897MiB |
+-----------------------------------------------------------------------------+

During high-intensity tasks like gaming or video rendering, the CPU is often, if not always, at 4.2GHz.

That’s what I thought initially, but it is supposedly a 115W card, whereas the real (desktop) RTX 3060 is 170W.

I see you have a game running, yet the GPU is in the P3 state, which is “wrong” for a high-performance workload.

The GPU performance state APIs are used to get and set various performance levels on a per-GPU basis. P-States are GPU active/executing performance capability and power consumption states.

P-States range from P0 to P15, with P0 being the highest performance/power state, and P15 being the lowest performance/power state. Each P-State maps to a performance level. Not all P-States are available on a given system. The definition of each P-State is currently as follows:

    P0/P1 - Maximum 3D performance
    P2/P3 - Balanced 3D performance-power
    P8 - Basic HD video playback
    P10 - DVD playback
    P12 - Minimum idle power consumption

You want P0.
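
If you want to watch the state change under load, nvidia-smi can report it directly; a simple loop like this, refreshing once per second, should do:

# performance state, power draw and GPU utilization, once per second
nvidia-smi --query-gpu=pstate,power.draw,utilization.gpu --format=csv -l 1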

Are you plugged in when you play, or are you trying to game on battery?

No, that is the TDP; it is not actual power consumption.

TDP != Power

Yes, I am always plugged in.

Mind you, I am able to use Fn+Q to switch performance modes, and when using the red High Performance mode, nvidia-smi does list N/A 53C P0 79W / N/A

Then maybe that is the maximum it is allowed to draw. Not all cards are equal, especially in the laptop world.

Which performance mode and program does your key binding enable/use? Is it something in your system, or something integrated into the laptop?

Admittedly, when I say I’ve seen it run at 120W and higher, that was only on Windows… Might it be that this is the most it is able to do on Linux?

I’m not sure; there is no GUI. I just use Fn+Q, and the LED indicator inside the power button cycles between blue = quiet mode, white = balanced mode, and red = performance mode. I am able to verify this through nvidia-smi, which lists P0 when in red, P3 when in white, and P8 when in blue.

What I presume is that this is handled somewhere inside the kernel itself, allowing the power modes to be switched.
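
If it is handled by the kernel, my guess would be the ACPI platform-profile interface; assuming your kernel and the Lenovo platform driver actually expose it (not every Legion model does), it can be inspected like this:

# profiles the firmware offers, and the one currently active
cat /sys/firmware/acpi/platform_profile_choices
cat /sys/firmware/acpi/platform_profile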

I don’t know; from what you describe, this is something internal to the laptop, but how it works is a mystery to me.

Is that the unofficial overclock that you enabled manually on Windows?
It causes very high temperatures and would shorten the lifespan of a laptop GPU. (If you don’t care about the temperature, then fine.)

The NVIDIA driver is proprietary software and is never built into the kernel (unlike AMD’s); it is installed through DKMS or as extra driver packages instead.
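
On Arch you can see exactly which driver packages are installed and which module version is actually loaded; something like this should confirm it:

# installed NVIDIA driver packages (nvidia, nvidia-dkms, nvidia-utils, …)
pacman -Qs '^nvidia'
# version of the kernel module currently in use
modinfo -F version nvidia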

Now that you put it that way, and seeing how the games I care about run at high-to-ultra settings, capped at the 144Hz refresh rate of the display… I think I’m fine like this.

This is apparently a known issue and it also appears to have been addressed (sort of).

While reading this discussion I came upon this article.

TLDR:

systemctl enable nvidia-powerd.service
systemctl start nvidia-powerd.service

One reboot later and my GPU can now use up to 100W of power. Whether it can reach the Lenovo-advertised 115W + 15W (Dynamic Boost) remains to be seen, but this is still progress and I’ll take it.
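
In case it helps anyone else, the quickest check I’ve found that the change actually took effect is to confirm the daemon is running and then watch the reported draw and limit under load:

# confirm the Dynamic Boost daemon is active
systemctl status nvidia-powerd.service
# reported draw vs. limit, refreshed every couple of seconds
nvidia-smi --query-gpu=power.draw,power.limit --format=csv -l 2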

EDIT: This only appears to work on Intel-based systems.
