An error message flashed up on the screen when I shut down the PC. The message pointed to the script /usr/lib/systemd/system-shutdown/nvidia.shutdown, so I removed it, and the message was gone. However, every time I did an update, this script was recreated automatically and the message appeared again. How can I prevent this script from being recreated after an update?
I originally created the script following this post, but I deleted it some time ago when it was no longer necessary. I also followed this post to find out that the script is owned by nvidia-utils 510.73.05-1. Do I have to remove the script after every update to fix this? Why was the script included in nvidia-utils even though it was no longer needed? Please share your thoughts if you have any ideas on this issue. Thank you!
In your first link, there is an explanation of why this script was proposed.
If it is indeed no longer necessary (I must admit that I do not get any error messages on shutdown here) please kindly contact the nvidia-utils 510.73.05-1 maintainer (for example, by writing an issue report at Issues · Packages / Extra / nvidia-utils · GitLab) to have that script removed again.
Since /usr/lib/systemd/ is overwritten on every update, the directory for custom overrides would be /etc/systemd/, which does not get overwritten. By creating the same directory path there and placing an empty file with the same name, the one under /etc should take priority over the one under /usr/lib.
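For example, the idea would amount to something like this (purely a sketch; whether systemd honours such an override for shutdown hooks is an assumption here):

# Mirror the path under /etc and drop in an empty file with the same name,
# hoping it shadows the copy under /usr/lib
sudo mkdir -p /etc/systemd/system-shutdown
sudo touch /etc/systemd/system-shutdown/nvidia.shutdown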
Yes, as others have asked: what is the exact error message it produces on your computer (but apparently not on all the other Nvidia users' computers)?
If OP has the same error on shutdown as I do (I am pretty sure they do), then the exact error message would be: /usr/lib/systemd/system-shutdown/nvidia.shutdown failed with exit status 1.
This always happens before shutdown, but sometimes the PC takes 2 seconds and other times 2 minutes until the message is displayed for 2 seconds and the machine then shuts down completely.
The message was /usr/lib/systemd/system-shutdown/nvidia.shutdown failed with exit status 1. After I updated to the latest version (Linux 5.17.4-1), two more lines were displayed above:
Broadcast message from xxx@xxx-pc on pts/0 (date & time):
The system is going down for poweroff NOW!
[some numbers][some numbers]/usr/lib/systemd/system-shutdown/nvidia.shutdown failed with exit status 1
Thanks for your reply. I tried what you suggested, but it didn’t work. It seems /usr/lib/systemd/system-shutdown/nvidia.shutdown would still be run on shutdown.
That’s because I removed the script, and doing so didn’t produce anything undesired.
Thank you. How can I create or output the journal logs for shutdown?
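For reference, one way to get at those shutdown messages afterwards, assuming persistent journaling is enabled, would be something like:

# make the journal persist across reboots, if it does not already
sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald

# after the next reboot, show the end of the previous boot's journal
journalctl -b -1 -e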
Consequently, the error it was supposed to prevent is back:
[35509.312925] sd-umoun[56020]: Failed to unmount /oldroot: Device or resource busy
[35509.317505] sd-umoun[56021]: Failed to unmount /oldroot/sys: Device or resource busy
[35509.322349] shutdown[1]: Failed to finalize file systems, ignoring.
Looks like it is not needed on all machines and produces an error when it is not needed (i.e. when /oldroot is already unmounted).
Hence, it might make sense to check in the script if /oldroot is still mounted before trying to unload the modules.
You got the story completely wrong: you have the error because the file that was unloading the Nvidia modules at shutdown is no longer there. Personally, I will recreate the file so this error stops coming back:
#!/bin/sh
#
# Remove all Nvidia modules on shutdown to avoid errors like
#
# sd-umoun: Failed to unmount /oldroot: Device or resource busy
# sd-umoun: Failed to unmount /oldroot/sys: Device or resource busy
# shutdown: Failed to finalize file systems, ignoring.
#
for MODULE in nvidia_drm nvidia_modeset nvidia_uvm nvidia
do
    rmmod "$MODULE"
done
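If you recreate it by hand, the file goes back to the location named above and must be executable, for example (a sketch, assuming the script is saved as nvidia.shutdown in the current directory):

sudo install -m 755 nvidia.shutdown /usr/lib/systemd/system-shutdown/nvidia.shutdown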
I guess you also got at least me wrong.
In a nutshell: after the introduction of that script in the driver package, there were some user complaints about the script exiting with exit status 1 (meaning that there is either no /oldroot or the modules are already unloaded).
As a consequence, the package maintainer decided to withdraw the script - which, of course, causes that error with /oldroot being busy to recur on my machine.
Hence, for wider use of that script, it might make sense to introduce a check of whether unloading the modules is necessary at all, for example like this:
for MODULE in nvidia_drm nvidia_modeset nvidia_uvm nvidia
do
    # only try to unload modules that are actually loaded
    if lsmod | grep -q "^${MODULE} " ; then
        rmmod "$MODULE"
    fi
done
Checking for the loaded modules one by one may look a bit cumbersome, but unfortunately it does not seem possible to check for the presence of /oldroot, as this mountpoint is not visible to commands like mount or mountpoint.