LightDM won't restart?

Hello, I have a few quick questions.
I saw a guide on single-GPU passthrough to a VM, and I need to write a script that does these things:

  1. stop display manager
  2. unbind vconsoles
  3. unload amdgpu drivers
  4. reload amdgpu drivers
  5. bind vconsoles
  6. start display manager
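The six steps above could be sketched roughly like the script below. This is a hedged sketch, not a tested implementation: the service name, the number of virtual consoles (vtcon0/vtcon1), and the module name are assumptions that vary per machine. A DRY_RUN guard is included so the sequence can be previewed without root or real hardware:

```shell
#!/usr/bin/env bash
# Sketch of the release/restore sequence. Service, console, and module
# names are assumptions; adjust for your own machine before using.
set -euo pipefail

run() {
    # With DRY_RUN=1, print each command instead of executing it,
    # so the sequence can be checked without touching the system.
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

release_gpu() {
    run systemctl stop lightdm.service                     # 1. stop display manager
    run sh -c 'echo 0 > /sys/class/vtconsole/vtcon0/bind'  # 2. unbind vconsoles
    run sh -c 'echo 0 > /sys/class/vtconsole/vtcon1/bind'
    run modprobe -r amdgpu                                 # 3. unload amdgpu
}

restore_gpu() {
    run modprobe amdgpu                                    # 4. reload amdgpu
    run sh -c 'echo 1 > /sys/class/vtconsole/vtcon0/bind'  # 5. bind vconsoles
    run sh -c 'echo 1 > /sys/class/vtconsole/vtcon1/bind'
    run systemctl start lightdm.service                    # 6. start display manager
}

# release_gpu   # run as root before starting the VM
# restore_gpu   # run as root after shutting it down
```

Note that a script like this must not be run from a terminal inside the graphical session it is about to stop, or it will be killed along with the session.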

I am using Manjaro XFCE, so I have LightDM installed. After wondering why my VM doesn't output anything, I tried to run the scripts without starting the VM. What I noticed:
When I type "systemctl stop lightdm.service" and try to start it again later, it won't work. "systemctl restart lightdm.service" works as it should, though. What is the problem? I get a black screen when stopping LightDM, and after that my script should start it again, but it doesn't. The black screen persists and I cannot do anything except reboot my machine.
My second question: when I try to unload the amdgpu driver with "modprobe -r", I get the message that it cannot be done because the driver is in use. lsmod doesn't list which modules it is being used by, though. How do I unload the amdgpu driver then?
You can see the script I mean at the end of this video. He uses another distro with KDE, though, so I had to change a few things, like stopping the lightdm service instead of the sddm service, etc.
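On the lsmod point: the "Used by" column only lists other kernel modules. Userspace processes (Xorg, the display manager) that hold /dev/dri open raise the use count without appearing there, which is why the column looks empty while modprobe -r still refuses. A small helper like this (a sketch; the function name is made up for illustration) shows the refcount, any module holders, and which processes hold the DRM nodes:

```shell
#!/usr/bin/env bash
# Hedged sketch: inspect why a module such as amdgpu refuses to unload.
# lsmod's "Used by" only shows *modules*; processes holding /dev/dri
# open keep the refcount up without being listed there.
set -euo pipefail

module_users() {
    local mod="$1"
    if [ ! -d "/sys/module/$mod" ]; then
        echo "$mod: not loaded"
        return 0
    fi
    echo "$mod refcnt: $(cat "/sys/module/$mod/refcnt" 2>/dev/null || echo '?')"
    # Other modules that depend on this one:
    ls "/sys/module/$mod/holders" 2>/dev/null || true
    # Processes keeping the DRM device nodes open (needs fuser from psmisc):
    fuser -v /dev/dri/* 2>/dev/null || true
}

# module_users amdgpu   # uncomment on the affected host
```

Those processes have to exit (session fully stopped) before modprobe -r amdgpu can succeed.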

I hope someone can help :slight_smile:

From what you describe, I deduce you have only one GPU … is that correct? If yes, then please read the documentation: PCI passthrough via OVMF - ArchWiki

The one in the video has more than one GPU … and one is used for passthrough to the guest machine, while the main GPU is used for the host.


So I have been going at it for a while now, and my start script works… It stops all the graphical services and unloads the amdgpu module, loading the vfio module and detaching and resetting the GPU via virsh nodedev. Now I can boot into my VM and my GPU gets passed through. But when I try to shut down the VM, I get a continuous black screen.
I believe it is the Navi reset bug. I can unload the vfio drivers, and the amdgpu kernel module is then listed for my graphics card, but it is not in use. I don't know how to make the graphics card use the driver.
“virsh nodedev-reattach” also does not enable them.
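For reference, the reattach sequence after VM shutdown would roughly mirror the start script. Again a hedged sketch: the PCI device name below is a placeholder (not from this thread), and on Navi cards the reset bug may still prevent amdgpu from reinitializing the card even if every step succeeds:

```shell
#!/usr/bin/env bash
# Hedged sketch of the teardown after the VM has shut down. The PCI
# device name is a placeholder; on Navi the reset bug may still force
# a host reboot even when every command returns success.
set -euo pipefail

GPU="pci_0000_0c_00_0"   # placeholder: find yours with 'virsh nodedev-list'

run() {
    # DRY_RUN=1 prints commands instead of executing them.
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

reattach_gpu() {
    run modprobe -r vfio_pci vfio_iommu_type1 vfio   # unload the VFIO stack
    run virsh nodedev-reattach "$GPU"                # give the device back to the host
    run modprobe amdgpu                              # rebind amdgpu
    run systemctl start lightdm.service              # bring the session back
}

# reattach_gpu   # run as root after the VM has shut down
```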
I tried to patch my kernel with a patch I found, but it won't compile because of an error.

Yup. Sounds exactly like the problem @bogdancovaciu is describing: only one GPU.

So please read this:

and post some more information so we can see what’s really going on. Now we know the symptom of the disease, but we need some more probing to know where the origin lies…

An inxi --admin --verbosity=7 --filter --no-host --width would be the minimum required information… (Personally Identifiable Information like serial numbers and MAC addresses will be filtered out by the above command)

:+1:

P.S. If you enter a bit more details in your profile, we can also see which Desktop Environment you’re using, which CPU/GPU you have, …


Because your host, which starts before the VM and from which you run the VM, is still on. When you turn off your VM, the host remains on until you shut it down, but because it is running headless it only keeps the screen on without any image, i.e. a black screen.

To be honest, so far this topic is at least strange. I didn't watch the video, but its title might be misleading for some people, as they might understand that a PC with a single GPU can have output on the host and the VM through passthrough, while in fact it is what most people who read the Arch wiki know it to be …
