SDDM very slow to show up

sddm

#1

[EDIT]
This issue started on a manjaro32 install with SDDM and LXQt, but then I checked an x86_64 install of Manjaro KDE and it happens there, too.

The SDDM greeter is very slow to start up, taking more than a minute on both machines. After the normal boot messages I get a blinking cursor for a long time, then the greeter appears. It appears faster (verified on the 64-bit system) when I press Ctrl + Alt + F7, then Ctrl + Alt + F2, or the other way round.
[/EDIT]

Slow startup

~ >>> systemd-analyze blame                                                                                                                                                               [1]
           688ms tlp.service
           577ms lvm2-monitor.service
           571ms dev-sdb2.device
           379ms NetworkManager.service
           303ms systemd-logind.service
           284ms ldconfig.service
           230ms systemd-journal-flush.service
           132ms systemd-udevd.service
           123ms polkit.service
           108ms systemd-journald.service
           106ms user@974.service
           104ms systemd-udev-trigger.service
            92ms systemd-remount-fs.service
            92ms user@1000.service
            77ms systemd-sysusers.service
            75ms systemd-fsck@dev-disk-by\x2duuid-e3e935ea\x2d979f\x2d4e88\x2da76b\x2d5fabfa5c6887.service
            74ms boot.mount
            72ms home.mount
            69ms systemd-modules-load.service
            56ms var-cache-pacman-pkg.mount
            42ms systemd-user-sessions.service
            42ms maia-console@tty1.service
            41ms systemd-tmpfiles-setup.service
            31ms systemd-journal-catalog-update.service
            23ms systemd-sysctl.service
            22ms systemd-update-done.service
            20ms sys-kernel-debug.mount
            18ms dev-mqueue.mount
            16ms systemd-update-utmp.service
            16ms systemd-tmpfiles-setup-dev.service
            14ms sys-kernel-config.mount
            12ms kmod-static-nodes.service
            12ms dev-hugepages.mount
            12ms tmp.mount
            10ms systemd-random-seed.service
~ >>>                                       

No idea what causes this.
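
If it helps to narrow this down, the stall itself can be inspected in the journal (just a sketch; the unit name assumes the default sddm.service):

$ journalctl -b -u sddm.service     # timestamps show how long after boot SDDM really started
$ journalctl -b -p warning          # warnings and errors logged while the cursor was blinking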


#2

I can’t understand why it takes so long to start.

To be exact: The slowdown is that it takes the SDDM greeter more than one minute to appear.

systemd-analyze indicates that it is NetworkManager:

eugen@manj Linux 4.14.38-1-MANJARO i686 18.0-alpha-1 Illyria
~ >>> systemd-analyze blame                                                                                                                                                                                                                 
           463ms NetworkManager.service
           377ms dev-sdb2.device
           369ms systemd-logind.service
           128ms polkit.service
           113ms systemd-udev-trigger.service
           108ms user@974.service
           102ms systemd-journald.service
            97ms home.mount
            97ms user@1000.service
            96ms systemd-journal-flush.service
            71ms systemd-fsck@dev-disk-by\x2duuid-e3e935ea\x2d979f\x2d4e88\x2da76b\x2d5fabfa5c6887.service
            61ms var-cache-pacman-pkg.mount
            58ms systemd-modules-load.service
            51ms systemd-udevd.service
            46ms boot.mount
            41ms systemd-tmpfiles-setup.service
            27ms systemd-user-sessions.service
            26ms dev-mqueue.mount
            26ms systemd-tmpfiles-setup-dev.service
            25ms sys-kernel-debug.mount
            21ms systemd-update-utmp.service
            19ms maia-console@tty1.service
            16ms systemd-sysctl.service
            14ms systemd-remount-fs.service
            14ms dev-hugepages.mount
            10ms systemd-random-seed.service
             8ms kmod-static-nodes.service
             4ms sys-kernel-config.mount
             3ms tmp.mount
~ >>> systemd-analyze --user blame                                                                                                                                                                                                          
             5ms xdg-user-dirs-update.service
~ >>> systemd-analyze critical chain                                                                                                                                                                                                        
Unknown operation critical.
~ >>> systemd-analyze critical-chain                                                                                                                                                                                                     [1]
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

graphical.target @1.221s
└─sddm.service @1.221s
  └─systemd-user-sessions.service @1.191s +27ms
    └─network.target @1.190s
      └─NetworkManager.service @726ms +463ms
        └─dbus.service @716ms
          └─basic.target @709ms
            └─sockets.target @709ms
              └─dbus.socket @709ms
                └─sysinit.target @707ms
                  └─systemd-update-utmp.service @686ms +21ms
                    └─systemd-tmpfiles-setup.service @641ms +41ms
                      └─local-fs.target @638ms
                        └─home.mount @539ms +97ms
                          └─dev-disk-by\x2duuid-3986e38c\x2d4f4f\x2d45c9\x2dbe52\x2d78e6720969e3.device @527ms
~ >>>  

#3

How long did it take prior to this update?

My VM hits the login screen within a few seconds, so I’ve never questioned this.

$ systemd-analyze
Startup finished in 1.787s (kernel) + 3.941s (userspace) = 5.728s
graphical.target reached after 2.277s in userspace
$ systemd-analyze blame
          1.010s alsa-restore.service
           868ms lvm2-monitor.service
           839ms tlp.service
           829ms dev-sda1.device
           410ms NetworkManager.service
           370ms systemd-journal-flush.service
           366ms ModemManager.service
           200ms udisks2.service
           182ms avahi-daemon.service
           174ms systemd-logind.service
           148ms polkit.service
           121ms systemd-modules-load.service
            98ms systemd-journald.service
            87ms systemd-udevd.service
            86ms ntpd.service
            81ms systemd-tmpfiles-setup-dev.service
            78ms systemd-udev-trigger.service
            78ms systemd-binfmt.service
            69ms upower.service
            62ms dev-mqueue.mount
            61ms sys-kernel-debug.mount
            60ms kmod-static-nodes.service
            59ms dev-hugepages.mount
            44ms systemd-remount-fs.service
            36ms systemd-tmpfiles-setup.service
            32ms user@1000.service
            25ms systemd-user-sessions.service
            20ms dev-disk-by\x2duuid-b08657a2\x2d2d6a\x2d4bb2\x2d9f5d\x2de093b3c16cfc.swap
            18ms systemd-update-utmp.service
            15ms rtkit-daemon.service
            15ms systemd-sysctl.service
            14ms systemd-random-seed.service
             4ms sys-kernel-config.mount
             3ms proc-sys-fs-binfmt_misc.mount
             2ms tmp.mount

Seems I have a similar sddm “roadblock”: over 2s for me.

$ systemd-analyze critical-chain
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

graphical.target @2.277s
└─sddm.service @2.277s
  └─systemd-user-sessions.service @2.249s +25ms
    └─network.target @2.248s
      └─NetworkManager.service @1.837s +410ms
        └─dbus.service @1.834s
          └─basic.target @1.830s
            └─paths.target @1.830s
              └─org.cups.cupsd.path @1.830s
                └─sysinit.target @1.828s
                  └─systemd-update-utmp.service @1.809s +18ms
                    └─systemd-tmpfiles-setup.service @1.772s +36ms
                      └─local-fs.target @1.772s
                        └─local-fs-pre.target @1.772s
                          └─lvm2-monitor.service @903ms +868ms
                            └─lvm2-lvmetad.service @964ms
                              └─systemd-journald.socket @894ms
                                └─-.mount @873ms
                                  └─system.slice @873ms
                                    └─-.slice @873ms

#4

It is a fresh install; it was like that from the beginning.
I just installed sddm-classic, but it is the same.


#5

It’s hilarious! I disabled sddm, installed and enabled lxdm (gtk2), and the greeter starts instantly.

The only “naughty” thing I did to sddm was that I copied sddm.conf to /etc/sddm.conf and edited it, changed the wallpaper for example.
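
For reference, the switch was roughly the following (a sketch; the gtk2 build is assumed to be packaged simply as lxdm):

$ sudo pacman -S lxdm                  # install the alternative greeter
$ sudo systemctl disable sddm.service  # only one display manager may be enabled
$ sudo systemctl enable lxdm.service
$ sudo systemctl reboot                # the new greeter takes over on the next boot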


#6

It’s not related to this, is it?

Though, people in that thread are having issues with LightDM too…


#7

The symptoms are different: a key press doesn’t make the SDDM greeter appear; it simply appears after a long while.

I ran sudo mv /etc/sddm.conf /etc/sddm.conf.bak, but the delay is the same.

I will try with haveged now.

Aaand, @jonathon, sudo systemctl enable haveged did it! Now the SDDM greeter appeared without any delay.
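
A minimal sketch of that fix, plus a way to confirm the entropy pool is actually being topped up (assuming haveged is already installed from the repos):

$ sudo systemctl enable --now haveged.service   # start it now and on every boot
$ systemctl status haveged.service              # should report active (running)
$ cat /proc/sys/kernel/random/entropy_avail     # should now sit in the thousands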


#8

There’s something very wrong if haveged is needed for DMs… so there’s obviously something within this update set.

SDDM and LightDM haven’t changed, so it’s more likely to be kernel-related. Can you downgrade the kernel and see if that changes anything?


#9

I will first try to verify that it also happens with LightDM.
Update: LightDM works without delay, with haveged disabled.


#10

I downgraded to Stable: https://hastebin.com/efinexozuj.coffeescript
Let’s see how it works.
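
For anyone wanting to reproduce it, the downgrade is just the usual branch switch (a sketch; pacman-mirrors flags differ between versions):

$ sudo pacman-mirrors -b stable   # point the mirror list at the stable branch
$ sudo pacman -Syyuu              # force-refresh the databases and allow downgrades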


#11

Same delay after the downgrade. (I need to go now.)


#12

It happens on a 64-bit system, too! On a Manjaro KDE install with an Intel Core i5-3317U, SDDM starts with the same delay.
I tried to downgrade individual packages like sddm-kcm, mesa and libva-mesa-driver, without success. Can you guess what to try to downgrade? My pacman.log from 29-04-2018 on: https://hastebin.com/lureyenadi.php
This is very weird: what helps is pressing Ctrl+Alt+F7, then Ctrl+Alt+F2, or the other way round. Then the SDDM greeter appears faster, after a second maybe.
It’s not an i686 issue, so we need to split it into a separate topic. [Done.]

I will try to boot into btrfs snapshots on the 64-bit system now: 0205/ 1204/ 2304/ 3004/
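
Since the Ctrl+Alt+Fn trick looks like keyboard input feeding the entropy pool, one thing worth checking (a sketch, not verified here) is when the kernel’s random generator reports that it is ready:

$ dmesg | grep -i crng
# if "random: crng init done" only appears a minute into the boot,
# or right after switching VTs and pressing keys, SDDM was likely waiting on it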


#13

@philm is considering enabling haveged by default on new ISOs. I just did a clean install of KDE 17.1.9 and had the delay with both kernels 4.14.36-1 and 4.16.4-1. haveged was installed but inactive as a service. Enabling and starting it fixed this, as per the instructions jonathon linked to above.


#14

The snapshot 3004, which I created on 2018-04-30, boots without delay. This should help to find the package which needs a downgrade. The candidates are from line 49 onwards of my pacman.log: https://hastebin.com/lureyenadi.php
Branch: Testing
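
One way to bisect that list is to put back the previous version of one suspect at a time from the pacman cache (a sketch; package name, version and architecture are placeholders):

$ ls /var/cache/pacman/pkg/ | grep '^<package>'
$ sudo pacman -U /var/cache/pacman/pkg/<package>-<old-version>-<arch>.pkg.tar.xz
$ sudo systemctl reboot   # retest the SDDM delay after each downgrade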


#15

Just curious if it does the same thing if you enable auto login without enabling haveged.
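
For completeness, autologin is just a couple of lines in /etc/sddm.conf (a sketch; user name and session file are placeholders):

[Autologin]
User=youruser
Session=plasma.desktop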


#16

It does the same thing; you stare at it as if it were the end of a movie :slight_smile: Almost 20 seconds on my machine…


#17

It’s a problem for sure.

https://github.com/manjaro/release-plan/issues/197

But actually it is just a workaround, no? Something seems to exhaust the entropy pool by requesting random numbers like crazy. I’m wondering whether this is normal / for a good reason, or whether it is a bug…
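
A rough way to check that theory is to watch the pool from another VT while the greeter is stuck (just a sketch):

$ watch -n 1 cat /proc/sys/kernel/random/entropy_avail
# near zero during the stall without haveged; in the thousands with it running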


#18

Can somebody please explain to this simple old man what haveged is and does? I tried to read the Arch wiki page, but I still have no idea what it is, what it does, or what it is needed for.
So please, someone, explain it in simple terms so I might be able to understand it.
Thank you so much.


#19

@eugen-b
It may be btrfs, but not exactly sure where the problem is.
Here’s a list of sites about slow boots with btrfs.
link 1
link 2
link 3
link 4

Happy hunting. Good luck.


#20

philm referenced an old issue that might be related to this.

https://github.com/systemd/systemd/issues/4167

poettering commented on Jun 27, 2017
There seems to be a disconnect somewhere… Here’s what I am seeing. The machines below are modern and fully patched. They are x86_64 with 4th and 5th gen Core i5’s and Core i7’s.

I think the data is included but not credited to the entropy, since the source can’t be trusted too much… Anyway, this issue here is about something else. And either way, whether to include the CPU’s generator in /dev/urandom and whether to credit the entropy for it, is really a discussion for the kernel folks, userspace should not be involved. LWN had a couple of stories about this btw, try searching there.
