I’m trying to help my son get Docker running on his Raspberry Pi 4 (8 GB). Docker and docker-compose are installed, but running
sudo systemctl enable --now docker
gives
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
Running dockerd directly gives:
INFO[2023-01-19T18:49:40.785471042+01:00] Starting up
dockerd needs to be started with root privileges. To run dockerd in rootless mode as an unprivileged user, see https://docs.docker.com/go/rootless/
[m@torch system]$ sudo dockerd
INFO[2023-01-19T18:49:47.045064464+01:00] Starting up
INFO[2023-01-19T18:49:47.047873570+01:00] libcontainerd: started new containerd process pid=2728
INFO[2023-01-19T18:49:47.048033366+01:00] parsed scheme: "unix" module=grpc
INFO[2023-01-19T18:49:47.048072310+01:00] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2023-01-19T18:49:47.048142569+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2023-01-19T18:49:47.048192366+01:00] ClientConn switching balancer to "pick_first" module=grpc
WARN[0000] containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header
INFO[2023-01-19T18:49:47.096615790+01:00] starting containerd revision=9ba4b250366a5ddde94bb7c9d1def331423aa323.m version=v1.6.14
INFO[2023-01-19T18:49:47.136525824+01:00] loading plugin "io.containerd.content.v1.content"... type=io.containerd.content.v1
INFO[2023-01-19T18:49:47.136726860+01:00] loading plugin "io.containerd.snapshotter.v1.aufs"... type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.141744573+01:00] skip loading plugin "io.containerd.snapshotter.v1.aufs"... error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.141985221+01:00] loading plugin "io.containerd.snapshotter.v1.btrfs"... type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.142488701+01:00] loading plugin "io.containerd.snapshotter.v1.devmapper"... type=io.containerd.snapshotter.v1
WARN[2023-01-19T18:49:47.142558682+01:00] failed to load plugin io.containerd.snapshotter.v1.devmapper error="devmapper not configured"
INFO[2023-01-19T18:49:47.142601460+01:00] loading plugin "io.containerd.snapshotter.v1.native"... type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.142682756+01:00] loading plugin "io.containerd.snapshotter.v1.overlayfs"... type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.143090737+01:00] loading plugin "io.containerd.snapshotter.v1.zfs"... type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.143495607+01:00] skip loading plugin "io.containerd.snapshotter.v1.zfs"... error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.143569958+01:00] loading plugin "io.containerd.metadata.v1.bolt"... type=io.containerd.metadata.v1
WARN[2023-01-19T18:49:47.143647828+01:00] could not use snapshotter devmapper in metadata plugin error="devmapper not configured"
INFO[2023-01-19T18:49:47.143694977+01:00] metadata content store policy set policy=shared
INFO[2023-01-19T18:49:47.143977846+01:00] loading plugin "io.containerd.differ.v1.walking"... type=io.containerd.differ.v1
INFO[2023-01-19T18:49:47.144049476+01:00] loading plugin "io.containerd.event.v1.exchange"... type=io.containerd.event.v1
INFO[2023-01-19T18:49:47.144097568+01:00] loading plugin "io.containerd.gc.v1.scheduler"... type=io.containerd.gc.v1
INFO[2023-01-19T18:49:47.144191105+01:00] loading plugin "io.containerd.service.v1.introspection-service"... type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145009159+01:00] loading plugin "io.containerd.service.v1.containers-service"... type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145115048+01:00] loading plugin "io.containerd.service.v1.content-service"... type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145170974+01:00] loading plugin "io.containerd.service.v1.diff-service"... type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145242455+01:00] loading plugin "io.containerd.service.v1.images-service"... type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145293085+01:00] loading plugin "io.containerd.service.v1.leases-service"... type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145348621+01:00] loading plugin "io.containerd.service.v1.namespaces-service"... type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145395955+01:00] loading plugin "io.containerd.service.v1.snapshots-service"... type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145443232+01:00] loading plugin "io.containerd.runtime.v1.linux"... type=io.containerd.runtime.v1
INFO[2023-01-19T18:49:47.145677325+01:00] loading plugin "io.containerd.runtime.v2.task"... type=io.containerd.runtime.v2
INFO[2023-01-19T18:49:47.145858158+01:00] loading plugin "io.containerd.monitor.v1.cgroups"... type=io.containerd.monitor.v1
INFO[2023-01-19T18:49:47.148006190+01:00] loading plugin "io.containerd.service.v1.tasks-service"... type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.148170727+01:00] loading plugin "io.containerd.grpc.v1.introspection"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148231134+01:00] loading plugin "io.containerd.internal.v1.restart"... type=io.containerd.internal.v1
INFO[2023-01-19T18:49:47.148400597+01:00] loading plugin "io.containerd.grpc.v1.containers"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148454004+01:00] loading plugin "io.containerd.grpc.v1.content"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148511393+01:00] loading plugin "io.containerd.grpc.v1.diff"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148562967+01:00] loading plugin "io.containerd.grpc.v1.events"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148615226+01:00] loading plugin "io.containerd.grpc.v1.healthcheck"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148667837+01:00] loading plugin "io.containerd.grpc.v1.images"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148732411+01:00] loading plugin "io.containerd.grpc.v1.leases"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148787485+01:00] loading plugin "io.containerd.grpc.v1.namespaces"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148847207+01:00] loading plugin "io.containerd.internal.v1.opt"... type=io.containerd.internal.v1
INFO[2023-01-19T18:49:47.148993855+01:00] loading plugin "io.containerd.grpc.v1.snapshots"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.149045892+01:00] loading plugin "io.containerd.grpc.v1.tasks"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.149094929+01:00] loading plugin "io.containerd.grpc.v1.version"... type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.149140559+01:00] loading plugin "io.containerd.tracing.processor.v1.otlp"... type=io.containerd.tracing.processor.v1
INFO[2023-01-19T18:49:47.149198836+01:00] skip loading plugin "io.containerd.tracing.processor.v1.otlp"... error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
INFO[2023-01-19T18:49:47.149255318+01:00] loading plugin "io.containerd.internal.v1.tracing"... type=io.containerd.internal.v1
ERRO[2023-01-19T18:49:47.149316873+01:00] failed to initialize a tracing processor "otlp" error="no OpenTelemetry endpoint: skip plugin"
INFO[2023-01-19T18:49:47.149980538+01:00] serving... address=/var/run/docker/containerd/containerd-debug.sock
INFO[2023-01-19T18:49:47.150171390+01:00] serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2023-01-19T18:49:47.150331538+01:00] serving... address=/var/run/docker/containerd/containerd.sock
INFO[2023-01-19T18:49:47.150447556+01:00] containerd successfully booted in 0.055577s
INFO[2023-01-19T18:49:47.163215031+01:00] parsed scheme: "unix" module=grpc
INFO[2023-01-19T18:49:47.163301846+01:00] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2023-01-19T18:49:47.163381272+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2023-01-19T18:49:47.163428735+01:00] ClientConn switching balancer to "pick_first" module=grpc
INFO[2023-01-19T18:49:47.168828650+01:00] parsed scheme: "unix" module=grpc
INFO[2023-01-19T18:49:47.168921724+01:00] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2023-01-19T18:49:47.168997427+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2023-01-19T18:49:47.169043020+01:00] ClientConn switching balancer to "pick_first" module=grpc
INFO[2023-01-19T18:49:47.171586885+01:00] [graphdriver] using prior storage driver: btrfs
WARN[2023-01-19T18:49:47.191483087+01:00] Unable to find memory controller
INFO[2023-01-19T18:49:47.192027475+01:00] Loading containers: start.
WARN[2023-01-19T18:49:47.221700918+01:00] Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
modprobe: WARNING: Module br_netfilter not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
, error: exit status 1
WARN[2023-01-19T18:49:47.228926367+01:00] Running iptables --wait -t nat -L -n failed with message: `modprobe: FATAL: Module ip_tables not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
iptables v1.8.8 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.`, error: exit status 3
INFO[2023-01-19T18:49:47.441576176+01:00] stopping event stream following graceful shutdown error="<nil>" module=libcontainerd namespace=moby
INFO[2023-01-19T18:49:47.443337098+01:00] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
INFO[2023-01-19T18:49:47.443500913+01:00] stopping healthcheck following graceful shutdown module=libcontainerd
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: modprobe: FATAL: Module ip_tables not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
iptables v1.8.8 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
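The failure above suggests modprobe can't find a module tree for the running kernel at all. One way to confirm that theory, before blaming iptables itself, is to check whether `/lib/modules/$(uname -r)` exists (a quick diagnostic sketch; it only reads standard paths):

```shell
#!/bin/sh
# If /lib/modules/<running kernel> is missing, every modprobe
# (ip_tables, br_netfilter, ...) fails exactly as in the log above,
# regardless of what iptables userspace version is installed.
running="$(uname -r)"
echo "running kernel: $running"
if [ -d "/lib/modules/$running" ]; then
    echo "module tree present for the running kernel"
else
    echo "module tree MISSING - trees actually installed:"
    ls /lib/modules/ 2>/dev/null
fi
```

If the tree is missing, the usual cause is that the kernel package was upgraded (new modules installed, old ones removed) but the machine is still running the old kernel image.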
So it seems to boil down to missing iptables kernel modules, which is strange; aren’t those supposed to ship with the kernel? The kernel also seems to be a version behind. Running
sudo journalctl -xeu docker.service
A start job for unit docker.service has begun execution.
The job identifier is 953.
Jan 19 22:19:33 machinename dockerd[444]: time="2023-01-19T22:19:33.394775708+01:00" level=info msg="Starting up"
Jan 19 22:19:33 machinename dockerd[444]: failed to load listeners: no sockets found via socket activation: make sure the service was started by systemd
Jan 19 22:19:33 machinename systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Subject: Unit process exited
Defined-By: systemd
Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
An ExecStart= process belonging to unit docker.service has exited.
The process' exit code is 'exited' and its exit status is 1.
Jan 19 22:19:33 machinename systemd[1]: docker.service: Failed with result 'exit-code'.
Subject: Unit failed
Defined-By: systemd
Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
The unit docker.service has entered the 'failed' state with result 'exit-code'.
Jan 19 22:19:33 machinename systemd[1]: Failed to start Docker Application Container Engine.
Subject: A start job for unit docker.service has failed
Defined-By: systemd
Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
A start job for unit docker.service has finished with a failure.
The job identifier is 953 and the job result is failed.
Jan 19 22:19:33 machinename systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Subject: Automatic restarting of a unit has been scheduled
Defined-By: systemd
Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
Automatic restarting of the unit docker.service has been scheduled, as the result for
the configured Restart= setting for the unit.
Jan 19 22:19:33 machinename systemd[1]: Stopped Docker Application Container Engine.
Subject: A stop job for unit docker.service has finished
Defined-By: systemd
Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
A stop job for unit docker.service has finished.
The job identifier is 1038 and the job result is done.
Jan 19 22:19:33 machinename systemd[1]: docker.service: Start request repeated too quickly.
Jan 19 22:19:33 machinename systemd[1]: docker.service: Failed with result 'exit-code'.
Subject: Unit failed
Defined-By: systemd
Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
The unit docker.service has entered the 'failed' state with result 'exit-code'.
Jan 19 22:19:33 machinename systemd[1]: Failed to start Docker Application Container Engine.
Subject: A start job for unit docker.service has failed
Defined-By: systemd
Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
A start job for unit docker.service has finished with a failure.
The job identifier is 1038 and the job result is failed.
Did you reboot after the system update? This looks like an old kernel. Reboot, then start Docker via systemd. Stop running dockerd directly; do not do this.
If Docker still fails to start via systemd after the reboot, check the journal for errors. Do not run dockerd manually!
There’s something very strange about this. Running
pacman -S linux-rpi4
results in
warning: linux-rpi4-5.15.84-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...
Packages (1) linux-rpi4-5.15.84-1
but
uname -a
results in
Linux torch 5.15.74-2-MANJARO-ARM-RPI #1 SMP PREEMPT Thu Oct 20 16:43:17 UTC 2022 aarch64 GNU/Linux
And yes, the system has been rebooted. The reason for running dockerd --debug was to get more detailed error information, and it’s complaining about iptables.
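For what it’s worth, this pacman-vs-uname mismatch is the classic symptom on the Pi of the kernel package having been upgraded while the FAT boot partition was not mounted at /boot: pacman writes new modules to /lib/modules, but the firmware keeps loading the old kernel image. A hedged consistency check (the package name linux-rpi4 comes from this thread; the version comparison is just a sketch):

```shell
#!/bin/sh
# Compare the kernel actually running with the kernel package installed.
# A mismatch means the boot partition was likely not updated.
running="$(uname -r)"
installed="$(pacman -Q linux-rpi4 2>/dev/null | awk '{print $2}')"
echo "running:   $running"
echo "installed: ${installed:-pacman/linux-rpi4 not available on this machine}"
case "$installed" in
    "") ;;  # not an Arch/Manjaro system, nothing to compare
    *"${running%%-*}"*)
        echo "versions match" ;;
    *)
        echo "MISMATCH - check that the boot partition is mounted at /boot"
        findmnt /boot || echo "/boot is not a separate mount point here" ;;
esac
```

If /boot turns out to be empty or unmounted, mounting the boot partition there and reinstalling linux-rpi4 (then rebooting) should bring uname -r in line with the package version, at which point the ip_tables and br_netfilter modules should load again.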
Output from sudo systemctl enable --now docker.socket:
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /usr/lib/systemd/system/docker.socket.
After that I tried: sudo systemctl enable --now docker
and got the following response (again):
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.