I have been reviewing the ‘Boot_process’ wiki page and I am not sure what I should do to improve my boot speed.
My current boot times are:
systemd-analyze:
Startup finished in 7.435s (firmware) + 6.234s (loader) + 4.721s (kernel) + 3.122s (userspace) = 21.512s
graphical.target reached after 3.108s in userspace.
systemd-analyze critical-chain:
graphical.target @3.108s
└─multi-user.target @3.108s
└─cups.service @3.022s +86ms
└─network.target @3.021s
└─NetworkManager.service @2.713s +306ms
└─basic.target @2.713s
└─dbus-broker.service @2.692s +19ms
└─dbus.socket @2.690s
└─sysinit.target @2.689s
└─systemd-timesyncd.service @2.609s +79ms
└─systemd-tmpfiles-setup.service @2.371s +234ms
└─run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount @2.406s
systemd-analyze blame:
1.065s dev-nvme0n1p2.device
531ms boot-efi.mount
359ms systemd-udev-trigger.service
330ms systemd-udev-load-credentials.service
306ms NetworkManager.service
300ms systemd-remount-fs.service
299ms systemd-tmpfiles-setup-dev-early.service
278ms user@1000.service
271ms systemd-modules-load.service
234ms systemd-tmpfiles-setup.service
192ms modprobe@fuse.service
155ms lvm2-monitor.service
152ms modprobe@drm.service
151ms systemd-sysctl.service
139ms systemd-fsck@dev-disk-by\x2duuid-9E29\x2dD933.service
125ms systemd-journal-flush.service
110ms NetworkManager-wait-online.service
101ms systemd-tmpfiles-setup-dev.service
90ms upower.service
88ms systemd-vconsole-setup.service
86ms systemd-tmpfiles-clean.service
86ms cups.service
84ms systemd-udevd.service
79ms systemd-timesyncd.service
78ms systemd-user-sessions.service
77ms systemd-update-utmp.service
76ms user-runtime-dir@1000.service
71ms systemd-userdbd.service
66ms kmod-static-nodes.service
63ms modprobe@configfs.service
63ms systemd-random-seed.service
59ms systemd-journald.service
55ms power-profiles-daemon.service
55ms ModemManager.service
53ms sys-fs-fuse-connections.mount
53ms sys-kernel-config.mount
52ms systemd-backlight@backlight:intel_backlight.service
52ms modprobe@loop.service
47ms dev-disk-by\x2duuid-22dd78cc\x2dbef7\x2d4c7e\x2daa88\x2d541ac97a7585.swap
46ms udisks2.service
43ms systemd-rfkill.service
37ms polkit.service
36ms alsa-restore.service
33ms tmp.mount
27ms systemd-logind.service
lsblk -fs:
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
nvme0n1p1 vfat FAT32 9E29-D933 299.1M 0% /boot/efi
└─nvme0n1
nvme0n1p2 ext4 1.0 8eeb8888-4cc1-4e3c-92c2-682098fcdb5f 716.5G 18% /
└─nvme0n1
nvme0n1p3 swap 1 swap 22dd78cc-bef7-4c7e-aa88-541ac97a7585 [SWAP]
└─nvme0n1
Illustriousloop:
Startup finished in 7.435s (firmware) + 6.234s (loader) + 4.721s (kernel) + 3.122s (userspace) = 21.512s
There seems to be nothing wrong with your boot speed.
Why do you feel it should be faster?
cscs
6 October 2024 03:59
Do you need cups started at boot?
Is there a reason you have the service enabled rather than the socket?
Do you use logical volumes?
Firmware and loader take some time. Please provide output of
inxi -Fza
Just for fun haha. Besides, I feel like it’s slow compared to what I remember from the Windows 11 startup.
inxi -Fza:
System:
Kernel: 6.6.52-1-MANJARO arch: x86_64 bits: 64 compiler: gcc v: 14.2.1
clocksource: tsc avail: acpi_pm
parameters: BOOT_IMAGE=/boot/vmlinuz-6.6-x86_64
root=UUID=8eeb8888-4cc1-4e3c-92c2-682098fcdb5f rw quiet acpi=force
apm=power_off resume=UUID=22dd78cc-bef7-4c7e-aa88-541ac97a7585
udev.log_priority=3 nouveau.modeset=0 intel_idle.max_cstate=4
Desktop: KDE Plasma v: 6.1.5 tk: Qt v: N/A info: frameworks v: 6.5.0
wm: kwin_x11 with: krunner vt: 2 dm: SDDM Distro: Manjaro base: Arch Linux
Machine:
Type: Laptop System: HP product: Victus by HP Gaming Laptop 15-fa1xxx v: N/A
serial: <superuser required> Chassis: type: 10 serial: <superuser required>
Mobo: HP model: 8C2D v: 63.33 serial: <superuser required>
part-nu: A14LKLA#ABM uuid: <superuser required> UEFI: AMI v: F.16
date: 03/19/2024
Battery:
ID-1: BAT0 charge: 70.1 Wh (100.0%) condition: 70.1/70.1 Wh (100.0%)
volts: 17.3 min: 15.4 model: HP Primary type: Li-ion serial: <filter>
status: full cycles: 15
CPU:
Info: model: 12th Gen Intel Core i5-12450H bits: 64 type: MST AMCP
arch: Alder Lake gen: core 12 level: v3 note: check built: 2021+
process: Intel 7 (10nm ESF) family: 6 model-id: 0x9A (154) stepping: 3
microcode: 0x434
Topology: cpus: 1x dies: 1 clusters: 5 cores: 8 threads: 12 mt: 4 tpc: 2
st: 4 smt: enabled cache: L1: 704 KiB desc: d-4x32 KiB, 4x48 KiB; i-4x32
KiB, 4x64 KiB L2: 7 MiB desc: 4x1.2 MiB, 1x2 MiB L3: 12 MiB desc: 1x12 MiB
Speed (MHz): avg: 484 min/max: 400/4400:3300 scaling: driver: intel_pstate
governor: powersave cores: 1: 484 2: 484 3: 484 4: 484 5: 484 6: 484 7: 484
8: 484 9: 484 10: 484 11: 484 12: 484 bogomips: 59916
Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
Vulnerabilities:
Type: gather_data_sampling status: Not affected
Type: itlb_multihit status: Not affected
Type: l1tf status: Not affected
Type: mds status: Not affected
Type: meltdown status: Not affected
Type: mmio_stale_data status: Not affected
Type: reg_file_data_sampling mitigation: Clear Register File
Type: retbleed status: Not affected
Type: spec_rstack_overflow status: Not affected
Type: spec_store_bypass mitigation: Speculative Store Bypass disabled via
prctl
Type: spectre_v1 mitigation: usercopy/swapgs barriers and __user pointer
sanitization
Type: spectre_v2 mitigation: Enhanced / Automatic IBRS; IBPB:
conditional; RSB filling; PBRSB-eIBRS: SW sequence; BHI: BHI_DIS_S
Type: srbds status: Not affected
Type: tsx_async_abort status: Not affected
Graphics:
Device-1: Intel Alder Lake-P GT1 [UHD Graphics] vendor: Hewlett-Packard
driver: i915 v: kernel arch: Gen-12.1 process: Intel 10nm built: 2020-21
ports: active: eDP-1 empty: DP-1 bus-ID: 00:02.0 chip-ID: 8086:46a3
class-ID: 0300
Device-2: NVIDIA AD107M [GeForce RTX 4050 Max-Q / Mobile]
vendor: Hewlett-Packard driver: nvidia v: 550.120
alternate: nouveau,nvidia_drm non-free: 550.xx+
status: current (as of 2024-09) arch: Lovelace code: AD1xx
process: TSMC n4 (5nm) built: 2022+ pcie: gen: 1 speed: 2.5 GT/s lanes: 8
link-max: gen: 4 speed: 16 GT/s bus-ID: 01:00.0 chip-ID: 10de:28a1
class-ID: 0300
Device-3: Luxvisions Innotech HP Wide Vision HD Camera driver: uvcvideo
type: USB rev: 2.0 speed: 480 Mb/s lanes: 1 mode: 2.0 bus-ID: 3-6:3
chip-ID: 30c9:0069 class-ID: fe01 serial: <filter>
Display: x11 server: X.Org v: 21.1.13 with: Xwayland v: 24.1.2
compositor: kwin_x11 driver: X: loaded: modesetting,nvidia
alternate: fbdev,nouveau,nv,vesa dri: iris gpu: i915 display-ID: :0
screens: 1
Screen-1: 0 s-res: 1920x1080 s-dpi: 96 s-size: 508x285mm (20.00x11.22")
s-diag: 582mm (22.93")
Monitor-1: eDP-1 model: BOE Display 0x094d built: 2020 res: 1920x1080
hz: 144 dpi: 142 gamma: 1.2 size: 344x194mm (13.54x7.64")
diag: 395mm (15.5") ratio: 16:9 modes: 1920x1080
API: EGL v: 1.5 hw: drv: intel iris drv: nvidia platforms: device: 0
drv: nvidia device: 1 drv: iris device: 3 drv: swrast gbm: drv: iris
surfaceless: drv: nvidia x11: drv: iris inactive: wayland,device-2
API: OpenGL v: 4.6.0 compat-v: 4.5 vendor: intel mesa v: 24.2.2-arch1.1
glx-v: 1.4 direct-render: yes renderer: Mesa Intel Graphics (ADL GT2)
device-ID: 8086:46a3 memory: 7.45 GiB unified: yes
API: Vulkan v: 1.3.295 layers: 1 device: 0 type: discrete-gpu name: NVIDIA
GeForce RTX 4050 Laptop GPU driver: nvidia v: 550.120 device-ID: 10de:28a1
surfaces: xcb,xlib
Audio:
Device-1: Intel Alder Lake PCH-P High Definition Audio
vendor: Hewlett-Packard driver: sof-audio-pci-intel-tgl
alternate: snd_hda_intel,snd_sof_pci_intel_tgl bus-ID: 00:1f.3
chip-ID: 8086:51c8 class-ID: 0401
Device-2: NVIDIA vendor: Hewlett-Packard driver: snd_hda_intel v: kernel
pcie: gen: 4 speed: 16 GT/s lanes: 8 bus-ID: 01:00.1 chip-ID: 10de:22be
class-ID: 0403
API: ALSA v: k6.6.52-1-MANJARO status: kernel-api with: aoss
type: oss-emulator tools: alsactl,alsamixer,amixer
Server-1: JACK v: 1.9.22 status: off tools: N/A
Server-2: PipeWire v: 1.2.3 status: active with: 1: pipewire-pulse
status: active 2: wireplumber status: active 3: pipewire-alsa type: plugin
tools: pactl,pw-cat,pw-cli,wpctl
Network:
Device-1: Intel Alder Lake-P PCH CNVi WiFi driver: iwlwifi v: kernel
bus-ID: 00:14.3 chip-ID: 8086:51f0 class-ID: 0280
IF: wlp0s20f3 state: up mac: <filter>
Device-2: Realtek RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet
vendor: Hewlett-Packard driver: r8169 v: kernel pcie: gen: 1 speed: 2.5 GT/s
lanes: 1 port: 3000 bus-ID: 04:00.0 chip-ID: 10ec:8168 class-ID: 0200
IF: eno1 state: down mac: <filter>
Info: services: NetworkManager, systemd-timesyncd, wpa_supplicant
Bluetooth:
Device-1: Intel AX211 Bluetooth driver: btusb v: 0.8 type: USB rev: 2.0
speed: 12 Mb/s lanes: 1 mode: 1.1 bus-ID: 3-10:5 chip-ID: 8087:0033
class-ID: e001
Report: rfkill ID: hci0 rfk-id: 0 state: down bt-service: enabled,running
rfk-block: hardware: no software: yes address: see --recommends
Drives:
Local Storage: total: 1.84 TiB used: 366.21 GiB (19.4%)
SMART Message: Required tool smartctl not installed. Check --recommends
ID-1: /dev/nvme0n1 maj-min: 259:0 vendor: KIOXIA model: N/A
size: 953.87 GiB block-size: physical: 512 B logical: 512 B speed: 63.2 Gb/s
lanes: 4 tech: SSD serial: <filter> fw-rev: HP02AN00 temp: 32.9 C
scheme: GPT
ID-2: /dev/sda maj-min: 8:0 vendor: Samsung model: SSD 970 EVO Plus 1TB
size: 931.51 GiB block-size: physical: 512 B logical: 512 B type: USB
rev: 3.2 spd: 5 Gb/s lanes: 1 mode: 3.2 gen-1x1 tech: SSD serial: <filter>
fw-rev: 1.00 scheme: GPT
Partition:
ID-1: / raw-size: 944.77 GiB size: 928.86 GiB (98.32%)
used: 190.96 GiB (20.6%) fs: ext4 dev: /dev/nvme0n1p2 maj-min: 259:2
ID-2: /boot/efi raw-size: 300 MiB size: 299.4 MiB (99.80%)
used: 288 KiB (0.1%) fs: vfat dev: /dev/nvme0n1p1 maj-min: 259:1
Swap:
Kernel: swappiness: 60 (default) cache-pressure: 100 (default) zswap: yes
compressor: zstd max-pool: 20%
ID-1: swap-1 type: partition size: 8.8 GiB used: 699.1 MiB (7.8%)
priority: -2 dev: /dev/nvme0n1p3 maj-min: 259:3
Sensors:
System Temperatures: cpu: 38.0 C mobo: N/A
Fan Speeds (rpm): cpu: 2198 fan-2: 1998
Info:
Memory: total: 16 GiB note: est. available: 15.26 GiB used: 6.52 GiB (42.7%)
Processes: 293 Power: uptime: 1h 12m states: freeze,mem,disk
suspend: s2idle wakeups: 0 hibernate: platform avail: shutdown, reboot,
suspend, test_resume image: 6.06 GiB services: org_kde_powerdevil,
power-profiles-daemon, upowerd Init: systemd v: 256 default: graphical
tool: systemctl
Packages: pm: pacman pkgs: 1294 libs: 335 tools: yay Compilers:
clang: 18.1.8 gcc: 14.2.1 Shell: Zsh v: 5.9 running-in: wezterm-gui
inxi: 3.3.36
I didn’t know what ‘cups.service’ was. I researched and found out I don’t need it. How do I disable it correctly?
I don’t know if I use them. What are ‘logical volumes’ used for? Are they important?
Windows 11 likely had Fast Startup enabled, a hibernation variant that Microsoft uses to give the illusion of a faster startup.
In fact, when using Fast Startup, the machine never actually shuts down; it just hibernates and subsequently wakes from hibernation. So it’s not necessarily a good comparison.
Disabling a few services that start during boot might show minor improvements, as already indicated in the posts above.
Switching to Wayland instead of X11 should improve performance, generally, but that probably won’t have much effect on boot speed.
High-performance NVMe or SSD drives might improve read/write speeds, and therefore boot speed, to some extent. So, if you have a few thousand dollars to spare, you might shave off a fraction of a second here and there.
Are we having fun yet?
7-8 seconds seems fine to me.
Cheers.
cscs
6 October 2024 05:11
systemctl disable cups.service --now
If you want the socket enabled instead (it will start the service only when needed), then:
systemctl enable cups.socket --now
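If you want to double-check afterwards, these are plain systemctl queries, nothing specific to your setup:
# confirm the service is disabled and (if chosen) the socket is enabled
systemctl is-enabled cups.service cups.socket
# confirm nothing cups-related is still running
systemctl status cups.service cups.socket --no-pager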
It’s a way of organizing disks.
Then you probably don’t use them. You may wish to retain the ability to work with them, such as an external logical volume.
https://wiki.archlinux.org/title/LVM
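For reference, a quick way to check whether anything on this machine actually uses LVM (vgs and lvs come with the lvm2 package and simply print nothing when no volume groups or logical volumes exist):
# any “lvm” entries in the TYPE column mean logical volumes are in use
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINTS
sudo vgs
sudo lvs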
Do you remember if Windows 11 was using Fast Startup (hybrid hibernation) rather than a full restart?
Boot time depends on a lot of things and will vary from system to system.
Take this system - what is the boot time?
08:05:48 ○ [fh@tiger] ~
$ inxi -SCMm
System:
Host: tiger Kernel: 6.11.2-1-MANJARO arch: x86_64 bits: 64
Desktop: KDE Plasma v: 6.1.5 Distro: Manjaro Linux
Machine:
Type: Desktop System: LENOVO product: 30E000GMMT v: ThinkStation P620
serial: <superuser required>
Mobo: LENOVO model: 1046 v: SBB1C50523 WIN 3556073303264
serial: <superuser required> UEFI: LENOVO v: S07KT5DA date: 04/19/2024
CPU:
Info: 12-core model: AMD Ryzen Threadripper PRO 5945WX s bits: 64
type: MT MCP cache: L2: 6 MiB
Speed (MHz): avg: 1429 min/max: 400/4565 cores: 1: 1429 2: 1429 3: 1429
4: 1429 5: 1429 6: 1429 7: 1429 8: 1429 9: 1429 10: 1429 11: 1429 12: 1429
13: 1429 14: 1429 15: 1429 16: 1429 17: 1429 18: 1429 19: 1429 20: 1429
21: 1429 22: 1429 23: 1429 24: 1429
Memory:
System RAM: total: 64 GiB available: 62.65 GiB used: 3.32 GiB (5.3%)
Message: For most reliable report, use superuser + dmidecode.
Array-1: capacity: 1024 GiB note: check slots: 8 modules: 4
EC: Multi-bit ECC
Device-1: DIMM5 type: no module installed
Device-2: DIMM6 type: no module installed
Device-3: DIMM7 type: DDR4 size: 16 GiB speed: 3200 MT/s
Device-4: DIMM8 type: DDR4 size: 16 GiB speed: 3200 MT/s
Device-5: DIMM4 type: no module installed
Device-6: DIMM3 type: no module installed
Device-7: DIMM2 type: DDR4 size: 16 GiB speed: 3200 MT/s
Device-8: DIMM1 type: DDR4 size: 16 GiB speed: 3200 MT/s
The boot time ...
08:03:07 ○ [fh@tiger] ~
$ systemd-analyze
Startup finished in 48.785s (firmware) + 6.645s (loader) + 1.316s (kernel) + 3.016s (initrd) + 9.294s (userspace) = 1min 9.057s
graphical.target reached after 9.292s in userspace.
08:03:27 ○ [fh@tiger] ~
$ systemd-analyze blame --no-pager
5.399s NetworkManager-wait-online.service
3.426s dev-sdb.device
3.426s dev-disk-by\x2did-ata\x2dSamsung_SSD_840_PRO_Series_S1AXNEAD509881Y.device
3.426s sys-devices-pci0000:00-0000:00:03.1-0000:02:00.0-0000:03:0a.0-0000:06:00.0-ata14-host13-target13:0:0-1…
3.426s dev-disk-by\x2dpath-pci\x2d0000:06:00.0\x2data\x2d6.0.device
3.426s dev-disk-by\x2ddiskseq-2.device
3.426s dev-disk-by\x2did-wwn\x2d0x50025385503607df.device
3.426s dev-disk-by\x2dpath-pci\x2d0000:06:00.0\x2data\x2d6.device
3.423s dev-disk-by\x2ddiskseq-2\x2dpart1.device
3.423s dev-disk-by\x2dlabel-private.device
3.423s dev-disk-by\x2dpath-pci\x2d0000:06:00.0\x2data\x2d6\x2dpart1.device
3.423s dev-disk-by\x2dpath-pci\x2d0000:06:00.0\x2data\x2d6.0\x2dpart-by\x2dpartuuid-0f28e6e1\x2deff8\x2d4fb0\…
3.423s dev-disk-by\x2dpartuuid-0f28e6e1\x2deff8\x2d4fb0\x2d9821\x2d234dd7add7f8.device
3.423s dev-disk-by\x2dpath-pci\x2d0000:06:00.0\x2data\x2d6.0\x2dpart-by\x2dlabel-private.device
3.423s dev-disk-by\x2did-ata\x2dSamsung_SSD_840_PRO_Series_S1AXNEAD509881Y\x2dpart1.device
3.423s dev-sdb1.device
3.423s dev-disk-by\x2dpath-pci\x2d0000:06:00.0\x2data\x2d6.0\x2dpart-by\x2duuid-1f3d1a6e\x2db4b5\x2d46da\x2d9…
3.423s dev-disk-by\x2duuid-1f3d1a6e\x2db4b5\x2d46da\x2d909a\x2d8a87a45dd18b.device
3.423s sys-devices-pci0000:00-0000:00:03.1-0000:02:00.0-0000:03:0a.0-0000:06:00.0-ata14-host13-target13:0:0-1…
3.423s dev-disk-by\x2dpath-pci\x2d0000:06:00.0\x2data\x2d6.0\x2dpart-by\x2dpartlabel-private.device
3.423s dev-disk-by\x2did-wwn\x2d0x50025385503607df\x2dpart1.device
3.423s dev-disk-by\x2dpartlabel-private.device
3.423s dev-disk-by\x2dpath-pci\x2d0000:06:00.0\x2data\x2d6.0\x2dpart-by\x2dpartnum-1.device
3.423s dev-disk-by\x2dpath-pci\x2d0000:06:00.0\x2data\x2d6.0\x2dpart1.device
3.406s dev-disk-by\x2dpath-pci\x2d0000:05:00.0\x2data\x2d4.device
3.406s dev-disk-by\x2did-ata\x2dSamsung_SSD_840_EVO_500GB_mSATA_S1KMNSAFB02237V.device
3.406s dev-sda.device
3.406s dev-disk-by\x2dpath-pci\x2d0000:05:00.0\x2data\x2d4.0.device
3.406s sys-devices-pci0000:00-0000:00:03.1-0000:02:00.0-0000:03:09.0-0000:05:00.0-ata4-host3-target3:0:0-3:0:…
3.406s dev-disk-by\x2ddiskseq-1.device
3.406s dev-disk-by\x2did-wwn\x2d0x5002538844584d30.device
3.397s dev-ttyS3.device
3.397s sys-devices-platform-serial8250-serial8250:0-serial8250:0.3-tty-ttyS3.device
3.396s dev-ttyS2.device
3.396s sys-devices-platform-serial8250-serial8250:0-serial8250:0.2-tty-ttyS2.device
3.396s dev-ttyS1.device
3.396s sys-devices-platform-serial8250-serial8250:0-serial8250:0.1-tty-ttyS1.device
3.395s dev-ttyS0.device
3.395s sys-devices-platform-serial8250-serial8250:0-serial8250:0.0-tty-ttyS0.device
3.392s sys-devices-platform-MSFT0101:00-tpmrm-tpmrm0.device
3.392s dev-tpmrm0.device
3.389s sys-module-fuse.device
3.389s sys-module-configfs.device
3.382s sys-devices-pci0000:00-0000:00:03.1-0000:02:00.0-0000:03:09.0-0000:05:00.0-ata4-host3-target3:0:0-3:0:…
3.382s dev-disk-by\x2dpath-pci\x2d0000:05:00.0\x2data\x2d4.0\x2dpart1.device
3.382s dev-disk-by\x2did-ata\x2dSamsung_SSD_840_EVO_500GB_mSATA_S1KMNSAFB02237V\x2dpart1.device
3.382s dev-sda1.device
3.382s dev-disk-by\x2dpartuuid-83088f4c\x2d115f\x2d463a\x2d8e84\x2dc25b9fa0a772.device
3.382s dev-disk-by\x2dpath-pci\x2d0000:05:00.0\x2data\x2d4.0\x2dpart-by\x2dpartuuid-83088f4c\x2d115f\x2d463a\…
3.382s dev-disk-by\x2dpath-pci\x2d0000:05:00.0\x2data\x2d4\x2dpart1.device
3.382s dev-disk-by\x2did-wwn\x2d0x5002538844584d30\x2dpart1.device
3.382s dev-disk-by\x2dpath-pci\x2d0000:05:00.0\x2data\x2d4.0\x2dpart-by\x2dpartnum-1.device
3.382s dev-disk-by\x2ddiskseq-1\x2dpart1.device
3.382s dev-disk-by\x2dpath-pci\x2d0000:05:00.0\x2data\x2d4.0\x2dpart-by\x2duuid-a0a54066\x2d5053\x2d4aaf\x2db…
3.382s dev-disk-by\x2dpath-pci\x2d0000:05:00.0\x2data\x2d4.0\x2dpart-by\x2dpartlabel-virtualbox.device
3.382s dev-disk-by\x2dpartlabel-virtualbox.device
3.382s dev-disk-by\x2duuid-a0a54066\x2d5053\x2d4aaf\x2db0de\x2d0cf5fefeb260.device
3.373s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart-by\x2dlabel-projects.device
3.373s dev-disk-by\x2did-nvme\x2dSamsung_SSD_990_PRO_2TB_S6Z2NJ0W505466B\x2dpart1.device
3.373s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart-by\x2duuid-04f0ac14\x2d1556\x2d4666\x2d9f…
3.373s dev-nvme0n1p1.device
3.373s dev-disk-by\x2dpartlabel-pro\x2dsrc\x2dtools.device
3.373s dev-disk-by\x2ddiskseq-3\x2dpart1.device
3.373s dev-disk-by\x2did-nvme\x2deui.0025384531408347\x2dpart1.device
3.373s sys-devices-pci0000:20-0000:20:01.1-0000:21:00.0-nvme-nvme0-nvme0n1-nvme0n1p1.device
3.373s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart1.device
3.373s dev-disk-by\x2duuid-04f0ac14\x2d1556\x2d4666\x2d9f12\x2d617ba471502c.device
3.373s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartnum-1.device
3.373s dev-disk-by\x2dpartuuid-14acf004\x2d5615\x2d6646\x2d9f12\x2d617ba471502c.device
3.373s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartuuid-14acf004\x2d5615\x2d6646\x…
3.373s dev-disk-by\x2did-nvme\x2dSamsung_SSD_990_PRO_2TB_S6Z2NJ0W505466B_1\x2dpart1.device
3.373s dev-disk-by\x2dlabel-projects.device
3.373s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartlabel-pro\x2dsrc\x2dtools.device
3.370s dev-disk-by\x2did-nvme\x2dSamsung_SSD_990_PRO_2TB_S6Z2NJ0W505466B.device
3.370s dev-disk-by\x2ddiskseq-3.device
3.370s dev-disk-by\x2did-nvme\x2deui.0025384531408347.device
3.370s sys-devices-pci0000:20-0000:20:01.1-0000:21:00.0-nvme-nvme0-nvme0n1.device
3.370s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1.device
3.370s dev-disk-by\x2did-nvme\x2dSamsung_SSD_990_PRO_2TB_S6Z2NJ0W505466B_1.device
3.370s dev-nvme0n1.device
3.367s dev-disk-by\x2did-nvme\x2dSamsung_SSD_990_PRO_2TB_S6Z2NJ0W505466B\x2dpart2.device
3.367s dev-disk-by\x2ddiskseq-3\x2dpart2.device
3.367s dev-disk-by\x2duuid-db9d1c46\x2d1b32\x2d4942\x2d9bfa\x2d69d76ee44be2.device
3.367s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartnum-2.device
3.367s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart-by\x2duuid-db9d1c46\x2d1b32\x2d4942\x2d9b…
3.367s dev-disk-by\x2dlabel-swap\x2don\x2dproject.device
3.367s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart2.device
3.367s dev-disk-by\x2dpartuuid-82c0829b\x2d2085\x2d482f\x2d9f3d\x2d9c7ef75325ac.device
3.367s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartuuid-82c0829b\x2d2085\x2d482f\x…
3.367s dev-disk-by\x2dpath-pci\x2d0000:21:00.0\x2dnvme\x2d1\x2dpart-by\x2dlabel-swap\x2don\x2dproject.device
3.367s dev-disk-by\x2did-nvme\x2deui.0025384531408347\x2dpart2.device
3.367s dev-disk-by\x2did-nvme\x2dSamsung_SSD_990_PRO_2TB_S6Z2NJ0W505466B_1\x2dpart2.device
3.367s dev-nvme0n1p2.device
3.367s sys-devices-pci0000:20-0000:20:01.1-0000:21:00.0-nvme-nvme0-nvme0n1-nvme0n1p2.device
3.357s dev-nvme1n1.device
3.357s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1.device
3.357s sys-devices-pci0000:20-0000:20:01.2-0000:22:00.0-nvme-nvme1-nvme1n1.device
3.357s dev-disk-by\x2did-nvme\x2dSAMSUNG_MZVL21T0HCLR\x2d00BL7_S64PNX0T519991.device
3.357s dev-disk-by\x2did-nvme\x2dSAMSUNG_MZVL21T0HCLR\x2d00BL7_S64PNX0T519991_1.device
3.357s dev-disk-by\x2ddiskseq-4.device
3.357s dev-disk-by\x2did-nvme\x2deui.002538b521b93bcc.device
3.352s dev-disk-by\x2did-nvme\x2deui.002538b521b93bcc\x2dpart1.device
3.352s sys-devices-pci0000:20-0000:20:01.2-0000:22:00.0-nvme-nvme1-nvme1n1-nvme1n1p1.device
3.352s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart1.device
3.352s dev-disk-by\x2did-nvme\x2dSAMSUNG_MZVL21T0HCLR\x2d00BL7_S64PNX0T519991_1\x2dpart1.device
3.352s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2duuid-AD24\x2dB748.device
3.352s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartnum-1.device
3.352s dev-nvme1n1p1.device
3.352s dev-disk-by\x2did-nvme\x2dSAMSUNG_MZVL21T0HCLR\x2d00BL7_S64PNX0T519991\x2dpart1.device
3.352s dev-disk-by\x2duuid-AD24\x2dB748.device
3.352s dev-disk-by\x2dpartuuid-519fa53d\x2dc862\x2d4efb\x2d948b\x2deee1a5d9c584.device
3.352s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartuuid-519fa53d\x2dc862\x2d4efb\x…
3.352s dev-disk-by\x2ddiskseq-4\x2dpart1.device
3.351s dev-disk-by\x2duuid-3cd93eae\x2d1d0d\x2d4ce6\x2da7e6\x2d7df665e8cede.device
3.351s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart3.device
3.351s dev-disk-by\x2did-nvme\x2dSAMSUNG_MZVL21T0HCLR\x2d00BL7_S64PNX0T519991\x2dpart3.device
3.351s dev-disk-by\x2dpartuuid-392cd4b5\x2d074a\x2d4b93\x2dba74\x2d42eb0bb5048c.device
3.351s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2duuid-3cd93eae\x2d1d0d\x2d4ce6\x2da7…
3.351s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartnum-3.device
3.351s dev-disk-by\x2dlabel-swap.device
3.351s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartuuid-392cd4b5\x2d074a\x2d4b93\x…
3.351s dev-nvme1n1p3.device
3.351s dev-disk-by\x2did-nvme\x2deui.002538b521b93bcc\x2dpart3.device
3.351s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2dlabel-swap.device
3.351s sys-devices-pci0000:20-0000:20:01.2-0000:22:00.0-nvme-nvme1-nvme1n1-nvme1n1p3.device
3.351s dev-disk-by\x2ddiskseq-4\x2dpart3.device
3.351s dev-disk-by\x2did-nvme\x2dSAMSUNG_MZVL21T0HCLR\x2d00BL7_S64PNX0T519991_1\x2dpart3.device
3.345s dev-disk-by\x2duuid-cafce1fc\x2da404\x2d48b9\x2db7e8\x2d00ec59a4e2c0.device
3.345s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart2.device
3.345s sys-devices-pci0000:20-0000:20:01.2-0000:22:00.0-nvme-nvme1-nvme1n1-nvme1n1p2.device
3.345s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartuuid-de0c9749\x2dc372\x2d43f4\x…
3.345s dev-nvme1n1p2.device
3.345s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2duuid-cafce1fc\x2da404\x2d48b9\x2db7…
3.345s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartlabel-root.device
3.345s dev-disk-by\x2did-nvme\x2dSAMSUNG_MZVL21T0HCLR\x2d00BL7_S64PNX0T519991\x2dpart2.device
3.345s dev-disk-by\x2dpartuuid-de0c9749\x2dc372\x2d43f4\x2da952\x2df4d3e0d6eaf4.device
3.345s dev-disk-by\x2dpath-pci\x2d0000:22:00.0\x2dnvme\x2d1\x2dpart-by\x2dpartnum-2.device
3.345s dev-disk-by\x2dpartlabel-root.device
3.345s dev-disk-by\x2did-nvme\x2deui.002538b521b93bcc\x2dpart2.device
3.345s dev-disk-by\x2ddiskseq-4\x2dpart2.device
3.345s dev-disk-by\x2did-nvme\x2dSAMSUNG_MZVL21T0HCLR\x2d00BL7_S64PNX0T519991_1\x2dpart2.device
2.358s plymouth-quit-wait.service
2.358s plymouth-quit.service
796ms vmware-networks.service
754ms NetworkManager.service
688ms systemd-binfmt.service
646ms systemd-resolved.service
636ms systemd-timesyncd.service
615ms docker.service
555ms initrd-switch-root.service
359ms a-private.mount
331ms lvm2-monitor.service
263ms plymouth-switch-root.service
245ms user@1000.service
125ms systemd-udev-trigger.service
113ms containerd.service
96ms udisks2.service
79ms a-virtualbox.mount
76ms upower.service
69ms libvirtd.service
68ms power-profiles-daemon.service
67ms plymouth-start.service
65ms systemd-journal-flush.service
57ms cups.service
57ms systemd-tmpfiles-setup-dev-early.service
56ms modprobe@dm_mod.service
56ms modprobe@loop.service
54ms systemd-tmpfiles-setup.service
49ms a-projects.mount
48ms initrd-cleanup.service
48ms proc-sys-fs-binfmt_misc.mount
48ms sshd.service
44ms initrd-parse-etc.service
43ms systemd-remount-fs.service
42ms user-runtime-dir@1000.service
41ms systemd-vconsole-setup.service
39ms plymouth-read-write.service
37ms docker.socket
36ms systemd-userdbd.service
34ms boot-efi.mount
34ms systemd-udevd.service
30ms ModemManager.service
30ms polkit.service
24ms systemd-journald.service
24ms systemd-hostnamed.service
22ms systemd-random-seed.service
20ms systemd-logind.service
19ms dbus-broker.service
17ms systemd-modules-load.service
17ms nordvpnd.socket
14ms systemd-fsck-root.service
14ms dev-disk-by\x2ddiskseq-4\x2dpart3.swap
9ms systemd-tmpfiles-setup-dev.service
9ms systemd-fsck@dev-disk-by\x2duuid-AD24\x2dB748.service
9ms dev-hugepages.mount
8ms dev-mqueue.mount
8ms sys-kernel-debug.mount
8ms sys-kernel-tracing.mount
7ms systemd-machined.service
7ms kmod-static-nodes.service
6ms rtkit-daemon.service
6ms modprobe@drm.service
6ms avahi-daemon.service
6ms systemd-sysctl.service
5ms systemd-udev-load-credentials.service
5ms systemd-update-utmp.service
5ms initrd-udevadm-cleanup-db.service
4ms alsa-restore.service
3ms systemd-user-sessions.service
3ms modprobe@fuse.service
3ms systemd-hibernate-resume.service
3ms tmp.mount
3ms modprobe@configfs.service
2ms sys-fs-fuse-connections.mount
2ms sys-kernel-config.mount
Zesko
6 October 2024 09:33
If you’re a fan of boot speed: in my experience, booster starts about twice as fast as mkinitcpio or dracut in my specific use case without Nvidia, because booster does not add many kernel modules. However, it requires some technical knowledge to manage, since I need to add the necessary modules manually.
Unlike mkinitcpio and dracut, which may automatically add a lot of unnecessary, bloated modules that I do not need, booster gives me more control but also has some limitations.
If you’re a beginner, it’s better to stick with the default mkinitcpio and accept the boot time you get.
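For anyone curious what that looks like: booster reads /etc/booster.yaml, and the kind of config I mean is roughly the sketch below. The module names are purely illustrative, and I am assuming the packaged pacman hook regenerates the image on kernel updates.
# /etc/booster.yaml -- sketch only, adjust to your own hardware
universal: false        # build a host-specific image rather than a generic one
compression: zstd
modules: i915,ext4,nvme # extra modules to include, comma-separated (example values)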
Teo
6 October 2024 10:03
First, your boot time seems pretty normal and fine.
Second, as said, there is no universe in which Windows BOOTED faster than that. Fast Startup is enabled there by default, so it RESTORED maybe a second or two faster.
Third, you are in experienced-user territory, and you do not sound that way, sorry. The chance of breaking the system if you start messing with it without knowing what you’re doing is high.
Fourth, now on topic: the ways to reduce boot time are disabling some services (ideas above, but yours are fine), blacklisting some modules (though I doubt how much that will bring), and modifying the hooks in the initramfs. That can save several seconds.
https://wiki.archlinux.org/title/Mkinitcpio
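To make the hooks part concrete, the edit would look something like this; a sketch only, since the exact HOOKS line depends on your install (keep keymap, encrypt, lvm2, etc. if your setup needs them):
# /etc/mkinitcpio.conf -- trim HOOKS to what the machine actually needs (example values)
HOOKS=(base udev autodetect modconf kms keyboard block filesystems fsck)
# then regenerate the initramfs images for all installed kernels
sudo mkinitcpio -P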
I think I was using Hybrid Hibernation.
omano
6 October 2024 16:41
You can’t really do anything about the firmware boot time; that is your motherboard initializing (you could enable Fast Boot, but that can break things like the keyboard before you reach the system, so don’t do that). The loader time, however, you can easily reduce in the GRUB config file.
Open /etc/default/grub and edit the GRUB_TIMEOUT line like this: GRUB_TIMEOUT=1
Save the file, then rebuild the GRUB config with sudo update-grub and reboot.
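Spelled out (as far as I know, update-grub on Manjaro is just a thin wrapper around grub-mkconfig -o /boot/grub/grub.cfg):
# /etc/default/grub -- only this line needs to change
GRUB_TIMEOUT=1
# regenerate /boot/grub/grub.cfg so the new timeout takes effect
sudo update-grub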
omano:
GRUB_TIMEOUT=1
Can I put 0? Does it work?
Yes, you can.
However, this may limit your ability to quickly change boot options, if needed; for example, selecting a different kernel to boot.
The reason the timeout exists is to give you that opportunity, before the computer continues to boot.
Usually, touching the Space bar before the timeout expires would allow this.
Three seconds seems a fair compromise; it’s a tradeoff, really.
You might also consider changing the timeout that allows you to boot to the BIOS; it’s typically set at around 5 seconds (by default) but it varies according to hardware used. The timeout would be found in your BIOS (somewhere).
I wouldn’t recommend setting this too short, though.
Teo
6 October 2024 19:10
You can hide the menu completely if you want (a sketch of how is below). In 99% of those topics, however, newbies then complain that they cannot access the menu, obviously unable to press Shift or Esc fast enough.
My general advice is: as a newbie, keep customizations to a minimum. There is a reason the default settings are the way they are. But I somehow have the feeling you want “learning by (un)doing”.
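Just for reference, hiding the menu means setting GRUB_TIMEOUT_STYLE in /etc/default/grub; during the hidden countdown, tapping Esc (or holding Shift) should still bring the menu up. A sketch, not a recommendation:
# /etc/default/grub -- menu hidden, still reachable via Esc/Shift during the 1-second window
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=1
# regenerate the config afterwards
sudo update-grub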
omano
6 October 2024 20:42
It will not work if you have another OS detected by GRUB; in that case it falls back to 10 seconds. Anyway, with 0 it is then very hard or impossible to get the GRUB menu if needed; 1 second still gives you a little time to press the key and show the GRUB menu.
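If you want to verify what actually ended up in the generated config after running update-grub, a plain grep over the generated file shows the effective value(s):
grep -n 'set timeout=' /boot/grub/grub.cfg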