Systemd 240 update breaks graphical login



I wonder if that is forcing the issue. However, I’d reverted the dropping of the DBUS_SESSION_BUS_ADDRESS variable in the current systemd packages …

[Unstable Update] 2018-12-14 - Cinnamon, Deepin, QT5, Systemd, KDE-Apps, KDE-Framework, Mesa

I recompiled without that flag and the issue persists. /var/run is a symlink to /run so it shouldn’t have any major effect anyway.
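For reference, the /var/run equivalence is easy to verify from a shell. A small sketch that mimics the symlink with a throwaway directory, so it works anywhere (on a real systemd system, `readlink /var/run` shows the same relationship):

```shell
# On a typical systemd system, /var/run is a symlink to /run, so paths
# under either location resolve to the same files. Demonstrate the idea
# with a temporary symlink:
tmp=$(mktemp -d)
mkdir "$tmp/run"
ln -s run "$tmp/var_run"             # mimic /var/run -> /run
echo hello > "$tmp/run/socketfile"
via_symlink=$(cat "$tmp/var_run/socketfile")
echo "$via_symlink"                  # prints: hello (same file, via the symlink)
rm -r "$tmp"
```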


Just tested another install that has KDE Plasma and Enlightenment, and I switch between the two. Same thing as I get in the i3 install: the echo ${DBUS_SESSION_BUS_ADDRESS} command prints nothing, but all is OK …


It looks like this is DE-dependent. Some will be looking for the DBUS_SESSION_BUS_ADDRESS variable (which is empty), but anything using dbus should fall back to the dbus default. However, for whatever reason, they don’t. This may be a “premature optimisation” on the part of systemd or, more likely, it’s a bug in those DE sessions.
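The fallback in question can be checked by hand. A quick sketch, assuming the usual per-user bus socket location used by dbus and systemd (the exact path is a convention, not guaranteed on every setup):

```shell
# If DBUS_SESSION_BUS_ADDRESS is unset or empty, libdbus clients are
# expected to fall back to the well-known per-user socket:
fallback="unix:path=/run/user/$(id -u)/bus"

# What the DE actually sees at login (empty on affected systemd 240 setups):
echo "session var: '${DBUS_SESSION_BUS_ADDRESS:-<empty>}'"
echo "fallback:    ${fallback}"
```

A DE that handles the empty variable gracefully should end up using the fallback path; the broken sessions apparently give up instead.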

I can log in without issue with systemd=240.1-3 .

systemd 240 has also just hit Debian Sid so it will be interesting to see what happens over there.


They opened an issue for it upstream. It’s the issue already posted :wink: The revert helped for me too. So we will see what happens with the LVM regression.


I confirm!
Just updated Budgie and now it works, and the unix path is correct :slight_smile:


Whereabouts is it? I don’t see one on the systemd GitHub issue tracker… :see_no_evil:


Right, I mixed it up with the LVM regression :man_facepalming:


Heh… reports coming in… :rofl:

Systemd-240: unexpected udevd errors

Installing dbus-x11 and then upgrading systemd worked: the ACPI, amdgpu, and hwmon errors I noted yesterday are gone, but the mdadm errors came back. This concerns me, as sda/sdb are the underlying devices for my RAID arrays.

And, a couple of new ones:

Dec 23 11:10:10 Jammin1 NetworkManager[764]: Failed to get connection to xfconfd: Cannot autolaunch D-Bus without X11 $DISPLAY

Also for: lightdm, accounts-daemon, pamac-system-daemon, and udisksd.
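A possible stopgap for the autolaunch errors (an assumption on my part, not a confirmed fix) is to export the address explicitly early in the session, e.g. from ~/.xprofile, so clients never attempt X11 autolaunch at all:

```shell
# Hypothetical workaround: point clients at the systemd user bus socket
# directly, instead of relying on autolaunch (which needs $DISPLAY):
export DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/$(id -u)/bus"
echo "$DBUS_SESSION_BUS_ADDRESS"
```

Note this only helps processes started after the export, and only if the systemd user instance has actually created that socket.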

And this:

Dec 23 11:10:26 Jammin1 systemd[1032]: gvfs-daemon.service: Main process exited, code=killed, status=15/TERM


[merell@Jammin1 ~]$ sudo systemctl status gvfs-daemon.service
Unit gvfs-daemon.service could not be found.
[merell@Jammin1 ~]$ systemctl --user status gvfs-daemon.service
● gvfs-daemon.service - Virtual filesystem service
   Loaded: loaded (/usr/lib/systemd/user/gvfs-daemon.service; static; vendor preset: enabled)
   Active: active (running) since Sun 2018-12-23 11:10:16 EST; 1h 42min ago
 Main PID: 1317 (gvfsd)
   CGroup: /user.slice/user-1000.slice/user@1000.service/gvfs-daemon.service
           ├─1317 /usr/lib/gvfsd
           ├─1325 /usr/lib/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
           └─1612 /usr/lib/gvfsd-trash --spawner :1.4 /org/gtk/gvfs/exec_spaw/0


But did you do a new release of systemd?

Got systemd (240.0-1 -> 240.1-3) today
and I’m back to normal - as far as I can see after login.


Yes - if you read the thread you’ll see a 240.1-3 package release.


overlooked :dizzy_face:


Not sure if it’s fully related, but systemd-nspawn is due to be reverted by Linus shortly:


It’s not related, and it’s not reverting systemd-nspawn. :wink:


It is more related to a server issue. I’ve now reverted it on our end, as it actually had broken user space.


Thanks for the clarification.


With Manjaro now building its own systemd packages, these types of issues will become more prevalent IMO; previously, something like this would have been caught in Arch Testing first.

FWIW systemd 240 has only now hit Arch Testing.

The more core packages Manjaro builds itself, the more the unstable branch will become equivalent to Arch Testing rather than Arch Stable.


We follow upstream, which is Fedora for systemd, and since we have our own package pools, we need to build systemd on our own.


Same here. Arch Linux + XFCE + lightdm + systemd 240. Downgrading to 239 works for now.
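For anyone wanting to do the same, downgrading from the local pacman cache is one way. A sketch only; the cached filenames and versions on your machine will differ, so check the directory first:

```shell
cache=/var/cache/pacman/pkg
# See which systemd 239 packages are still in the cache (if any):
found=$(ls "$cache"/systemd-239* 2>/dev/null || echo "no cached 239 packages")
echo "$found"
# Then reinstall them explicitly, e.g. (filename is an example, not exact):
#   sudo pacman -U "$cache"/systemd-239.<rev>-x86_64.pkg.tar.xz
# And optionally pin it in /etc/pacman.conf until a fix lands:
#   IgnorePkg = systemd systemd-libs
```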