Problems running Docker on RPI4

I’m trying to help my son get Docker running on his RPi 4 (8 GB). We installed Docker and docker-compose, but running

sudo systemctl enable --now docker

gives

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

Running dockerd directly gives:

INFO[2023-01-19T18:49:40.785471042+01:00] Starting up                                  
dockerd needs to be started with root privileges. To run dockerd in rootless mode as an unprivileged user, see https://docs.docker.com/go/rootless/
[m@torch system]$ sudo dockerd
INFO[2023-01-19T18:49:47.045064464+01:00] Starting up                                  
INFO[2023-01-19T18:49:47.047873570+01:00] libcontainerd: started new containerd process  pid=2728
INFO[2023-01-19T18:49:47.048033366+01:00] parsed scheme: "unix"                         module=grpc
INFO[2023-01-19T18:49:47.048072310+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2023-01-19T18:49:47.048142569+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2023-01-19T18:49:47.048192366+01:00] ClientConn switching balancer to "pick_first"  module=grpc
WARN[0000] containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header 
INFO[2023-01-19T18:49:47.096615790+01:00] starting containerd                           revision=9ba4b250366a5ddde94bb7c9d1def331423aa323.m version=v1.6.14
INFO[2023-01-19T18:49:47.136525824+01:00] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2023-01-19T18:49:47.136726860+01:00] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.141744573+01:00] skip loading plugin "io.containerd.snapshotter.v1.aufs"...  error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.141985221+01:00] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.142488701+01:00] loading plugin "io.containerd.snapshotter.v1.devmapper"...  type=io.containerd.snapshotter.v1
WARN[2023-01-19T18:49:47.142558682+01:00] failed to load plugin io.containerd.snapshotter.v1.devmapper  error="devmapper not configured"
INFO[2023-01-19T18:49:47.142601460+01:00] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.142682756+01:00] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.143090737+01:00] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.143495607+01:00] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-01-19T18:49:47.143569958+01:00] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2023-01-19T18:49:47.143647828+01:00] could not use snapshotter devmapper in metadata plugin  error="devmapper not configured"
INFO[2023-01-19T18:49:47.143694977+01:00] metadata content store policy set             policy=shared
INFO[2023-01-19T18:49:47.143977846+01:00] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2023-01-19T18:49:47.144049476+01:00] loading plugin "io.containerd.event.v1.exchange"...  type=io.containerd.event.v1
INFO[2023-01-19T18:49:47.144097568+01:00] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2023-01-19T18:49:47.144191105+01:00] loading plugin "io.containerd.service.v1.introspection-service"...  type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145009159+01:00] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145115048+01:00] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145170974+01:00] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145242455+01:00] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145293085+01:00] loading plugin "io.containerd.service.v1.leases-service"...  type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145348621+01:00] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145395955+01:00] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.145443232+01:00] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2023-01-19T18:49:47.145677325+01:00] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
INFO[2023-01-19T18:49:47.145858158+01:00] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2023-01-19T18:49:47.148006190+01:00] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
INFO[2023-01-19T18:49:47.148170727+01:00] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148231134+01:00] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2023-01-19T18:49:47.148400597+01:00] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148454004+01:00] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148511393+01:00] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148562967+01:00] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148615226+01:00] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148667837+01:00] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148732411+01:00] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148787485+01:00] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.148847207+01:00] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2023-01-19T18:49:47.148993855+01:00] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.149045892+01:00] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.149094929+01:00] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2023-01-19T18:49:47.149140559+01:00] loading plugin "io.containerd.tracing.processor.v1.otlp"...  type=io.containerd.tracing.processor.v1
INFO[2023-01-19T18:49:47.149198836+01:00] skip loading plugin "io.containerd.tracing.processor.v1.otlp"...  error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
INFO[2023-01-19T18:49:47.149255318+01:00] loading plugin "io.containerd.internal.v1.tracing"...  type=io.containerd.internal.v1
ERRO[2023-01-19T18:49:47.149316873+01:00] failed to initialize a tracing processor "otlp"  error="no OpenTelemetry endpoint: skip plugin"
INFO[2023-01-19T18:49:47.149980538+01:00] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2023-01-19T18:49:47.150171390+01:00] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2023-01-19T18:49:47.150331538+01:00] serving...                                    address=/var/run/docker/containerd/containerd.sock
INFO[2023-01-19T18:49:47.150447556+01:00] containerd successfully booted in 0.055577s  
INFO[2023-01-19T18:49:47.163215031+01:00] parsed scheme: "unix"                         module=grpc
INFO[2023-01-19T18:49:47.163301846+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2023-01-19T18:49:47.163381272+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2023-01-19T18:49:47.163428735+01:00] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2023-01-19T18:49:47.168828650+01:00] parsed scheme: "unix"                         module=grpc
INFO[2023-01-19T18:49:47.168921724+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2023-01-19T18:49:47.168997427+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2023-01-19T18:49:47.169043020+01:00] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2023-01-19T18:49:47.171586885+01:00] [graphdriver] using prior storage driver: btrfs 
WARN[2023-01-19T18:49:47.191483087+01:00] Unable to find memory controller             
INFO[2023-01-19T18:49:47.192027475+01:00] Loading containers: start.                   
WARN[2023-01-19T18:49:47.221700918+01:00] Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
modprobe: WARNING: Module br_netfilter not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
, error: exit status 1 
WARN[2023-01-19T18:49:47.228926367+01:00] Running iptables --wait -t nat -L -n failed with message: `modprobe: FATAL: Module ip_tables not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
iptables v1.8.8 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.`, error: exit status 3 
INFO[2023-01-19T18:49:47.441576176+01:00] stopping event stream following graceful shutdown  error="<nil>" module=libcontainerd namespace=moby
INFO[2023-01-19T18:49:47.443337098+01:00] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
INFO[2023-01-19T18:49:47.443500913+01:00] stopping healthcheck following graceful shutdown  module=libcontainerd
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: modprobe: FATAL: Module ip_tables not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
iptables v1.8.8 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

So it seems to boil down to missing iptables kernel modules, which is kind of strange; aren’t those supposed to ship with the kernel? The kernel seems to be a version behind, but running

pamac update

gives

Nothing to do
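
For reference, a quick way to compare the running kernel against what is installed on disk (a sketch; assumes Manjaro ARM’s linux-rpi4 kernel package):

uname -r                 # version of the kernel currently running
ls /lib/modules/         # module trees that exist on disk
pacman -Q linux-rpi4     # version of the installed kernel package

If the version printed by uname -r has no matching directory under /lib/modules/, modprobe cannot load anything, including ip_tables.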

Any input greatly appreciated.

Did you try running either of those two commands, preferably the latter?

I’m running Docker just fine on my RPi 4 with Manjaro ARM.

sudo systemctl status docker.service
× docker.service - Docker Application Container Engine
     Loaded: loaded (/etc/systemd/system/docker.service; enabled; preset: disabled)
     Active: failed (Result: exit-code) since Thu 2023-01-19 22:19:33 CET; 10h ago
TriggeredBy: ○ docker.socket
       Docs: https://docs.docker.com
    Process: 444 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
   Main PID: 444 (code=exited, status=1/FAILURE)
        CPU: 160ms

Jan 19 22:19:33 torch systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Jan 19 22:19:33 torch systemd[1]: Stopped Docker Application Container Engine.
Jan 19 22:19:33 torch systemd[1]: docker.service: Start request repeated too quickly.
Jan 19 22:19:33 torch systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 19 22:19:33 torch systemd[1]: Failed to start Docker Application Container Engine.
sudo journalctl -xeu docker.service
  
  A start job for unit docker.service has begun execution.
  
  The job identifier is 953.
Jan 19 22:19:33 machinename dockerd[444]: time="2023-01-19T22:19:33.394775708+01:00" level=info msg="Starting up"
Jan 19 22:19:33 machinename dockerd[444]: failed to load listeners: no sockets found via socket activation: make sure the service was started by systemd
Jan 19 22:19:33 machinename systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
  Subject: Unit process exited
  Defined-By: systemd
  Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
  
  An ExecStart= process belonging to unit docker.service has exited.
  
  The process' exit code is 'exited' and its exit status is 1.
Jan 19 22:19:33 machinename systemd[1]: docker.service: Failed with result 'exit-code'.
  Subject: Unit failed
  Defined-By: systemd
  Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
  
  The unit docker.service has entered the 'failed' state with result 'exit-code'.
Jan 19 22:19:33 machinename systemd[1]: Failed to start Docker Application Container Engine.
  Subject: A start job for unit docker.service has failed
  Defined-By: systemd
  Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
  
  A start job for unit docker.service has finished with a failure.
  
  The job identifier is 953 and the job result is failed.
Jan 19 22:19:33 machinename systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
  Subject: Automatic restarting of a unit has been scheduled
  Defined-By: systemd
  Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
  
  Automatic restarting of the unit docker.service has been scheduled, as the result for
  the configured Restart= setting for the unit.
Jan 19 22:19:33 machinename systemd[1]: Stopped Docker Application Container Engine.
  Subject: A stop job for unit docker.service has finished
  Defined-By: systemd
  Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
  
  A stop job for unit docker.service has finished.
  
  The job identifier is 1038 and the job result is done.
Jan 19 22:19:33 machinename systemd[1]: docker.service: Start request repeated too quickly.
Jan 19 22:19:33 machinename systemd[1]: docker.service: Failed with result 'exit-code'.
  Subject: Unit failed
  Defined-By: systemd
  Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
  
  The unit docker.service has entered the 'failed' state with result 'exit-code'.
Jan 19 22:19:33 machinename systemd[1]: Failed to start Docker Application Container Engine.
  Subject: A start job for unit docker.service has failed
  Defined-By: systemd
  Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
  
  A start job for unit docker.service has finished with a failure.
  
  The job identifier is 1038 and the job result is failed.

Result of dockerd --debug:

sudo dockerd --debug
INFO[2023-01-19T21:54:39.873706171+01:00] Starting up                                  
DEBU[2023-01-19T21:54:39.875104799+01:00] Listener created for HTTP on unix (/var/run/docker.sock) 
DEBU[2023-01-19T21:54:39.875185299+01:00] Containerd not running, starting daemon managed containerd 
INFO[2023-01-19T21:54:39.879004572+01:00] libcontainerd: started new containerd process  pid=485
INFO[2023-01-19T21:54:39.879205943+01:00] parsed scheme: "unix"                         module=grpc
INFO[2023-01-19T21:54:39.879245165+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2023-01-19T21:54:39.879325091+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2023-01-19T21:54:39.879360220+01:00] ClientConn switching balancer to "pick_first"  module=grpc
WARN[0000] containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header 
INFO[2023-01-19T21:54:40.081255259+01:00] starting containerd                           revision=9ba4b250366a5ddde94bb7c9d1def331423aa323.m version=v1.6.14
INFO[2023-01-19T21:54:40.126049432+01:00] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2023-01-19T21:54:40.126379099+01:00] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-19T21:54:40.131476297+01:00] skip loading plugin "io.containerd.snapshotter.v1.aufs"...  error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-01-19T21:54:40.131784111+01:00] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-19T21:54:40.132344796+01:00] loading plugin "io.containerd.snapshotter.v1.devmapper"...  type=io.containerd.snapshotter.v1
WARN[2023-01-19T21:54:40.132418444+01:00] failed to load plugin io.containerd.snapshotter.v1.devmapper  error="devmapper not configured"
INFO[2023-01-19T21:54:40.132486648+01:00] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2023-01-19T21:54:40.132615573+01:00] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-19T21:54:40.133039906+01:00] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-19T21:54:40.133435276+01:00] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-01-19T21:54:40.133509813+01:00] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2023-01-19T21:54:40.133642443+01:00] could not use snapshotter devmapper in metadata plugin  error="devmapper not configured"
INFO[2023-01-19T21:54:40.133705091+01:00] metadata content store policy set             policy=shared
INFO[2023-01-19T21:54:40.140079658+01:00] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2023-01-19T21:54:40.140215065+01:00] loading plugin "io.containerd.event.v1.exchange"...  type=io.containerd.event.v1
INFO[2023-01-19T21:54:40.140321713+01:00] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2023-01-19T21:54:40.140441565+01:00] loading plugin "io.containerd.service.v1.introspection-service"...  type=io.containerd.service.v1
INFO[2023-01-19T21:54:40.142288674+01:00] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2023-01-19T21:54:40.142499600+01:00] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2023-01-19T21:54:40.142616377+01:00] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2023-01-19T21:54:40.142720303+01:00] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2023-01-19T21:54:40.142949247+01:00] loading plugin "io.containerd.service.v1.leases-service"...  type=io.containerd.service.v1
INFO[2023-01-19T21:54:40.143053284+01:00] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2023-01-19T21:54:40.143151284+01:00] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2023-01-19T21:54:40.143246728+01:00] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2023-01-19T21:54:40.144284061+01:00] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
INFO[2023-01-19T21:54:40.144828671+01:00] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2023-01-19T21:54:40.145708781+01:00] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
DEBU[2023-01-19T21:54:40.145806652+01:00] No RDT config file specified, RDT not configured 
INFO[2023-01-19T21:54:40.145861911+01:00] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.145923096+01:00] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2023-01-19T21:54:40.146104133+01:00] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.146163466+01:00] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.146241355+01:00] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.146318373+01:00] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.146377392+01:00] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.146433058+01:00] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.146489706+01:00] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.146543540+01:00] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.146628651+01:00] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2023-01-19T21:54:40.146990632+01:00] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.147068132+01:00] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.147128965+01:00] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2023-01-19T21:54:40.147181372+01:00] loading plugin "io.containerd.tracing.processor.v1.otlp"...  type=io.containerd.tracing.processor.v1
INFO[2023-01-19T21:54:40.147256687+01:00] skip loading plugin "io.containerd.tracing.processor.v1.otlp"...  error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
INFO[2023-01-19T21:54:40.147312354+01:00] loading plugin "io.containerd.internal.v1.tracing"...  type=io.containerd.internal.v1
ERRO[2023-01-19T21:54:40.147388094+01:00] failed to initialize a tracing processor "otlp"  error="no OpenTelemetry endpoint: skip plugin"
INFO[2023-01-19T21:54:40.148108408+01:00] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2023-01-19T21:54:40.148330612+01:00] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2023-01-19T21:54:40.148546649+01:00] serving...                                    address=/var/run/docker/containerd/containerd.sock
DEBU[2023-01-19T21:54:40.148609908+01:00] sd notification                               error="<nil>" notified=false state="READY=1"
INFO[2023-01-19T21:54:40.148679537+01:00] containerd successfully booted in 0.069307s  
DEBU[2023-01-19T21:54:40.157945990+01:00] Created containerd monitoring client          address=/var/run/docker/containerd/containerd.sock
DEBU[2023-01-19T21:54:40.162718800+01:00] Started daemon managed containerd            
DEBU[2023-01-19T21:54:40.166007889+01:00] Golang's threads limit set to 53820          
INFO[2023-01-19T21:54:40.167523813+01:00] parsed scheme: "unix"                         module=grpc
INFO[2023-01-19T21:54:40.167613887+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2023-01-19T21:54:40.167723239+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2023-01-19T21:54:40.167788905+01:00] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2023-01-19T21:54:40.167603535+01:00] metrics API listening on /var/run/docker/metrics.sock 
INFO[2023-01-19T21:54:40.170878976+01:00] parsed scheme: "unix"                         module=grpc
INFO[2023-01-19T21:54:40.170974976+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2023-01-19T21:54:40.171035791+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2023-01-19T21:54:40.171071180+01:00] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2023-01-19T21:54:40.173178437+01:00] Using default logging driver json-file       
DEBU[2023-01-19T21:54:40.173237807+01:00] processing event stream                       module=libcontainerd namespace=plugins.moby
DEBU[2023-01-19T21:54:40.173573195+01:00] [graphdriver] priority list: [btrfs zfs overlay2 fuse-overlayfs aufs overlay devicemapper vfs] 
INFO[2023-01-19T21:54:40.173831732+01:00] [graphdriver] using prior storage driver: btrfs 
DEBU[2023-01-19T21:54:40.173884621+01:00] Initialized graph driver btrfs               
DEBU[2023-01-19T21:54:40.184110980+01:00] No quota support for local volumes in /var/lib/docker/volumes: Filesystem does not support, or has not enabled quotas 
WARN[2023-01-19T21:54:40.199540315+01:00] Unable to find memory controller             
DEBU[2023-01-19T21:54:40.199994481+01:00] Max Concurrent Downloads: 3                  
DEBU[2023-01-19T21:54:40.200042648+01:00] Max Concurrent Uploads: 5                    
DEBU[2023-01-19T21:54:40.200075241+01:00] Max Download Attempts: 5                     
INFO[2023-01-19T21:54:40.200141852+01:00] Loading containers: start.                   
DEBU[2023-01-19T21:54:40.200646981+01:00] processing event stream                       module=libcontainerd namespace=moby
DEBU[2023-01-19T21:54:40.206613178+01:00] loaded container                              container=06db9c5b570cc12b20c6eb2c247ae49272ca970aae95f925cc869ae9250bc81a paused=false running=false
DEBU[2023-01-19T21:54:40.228333247+01:00] restoring container                           container=06db9c5b570cc12b20c6eb2c247ae49272ca970aae95f925cc869ae9250bc81a paused=false restarting=false running=false
DEBU[2023-01-19T21:54:40.230598078+01:00] alive: false                                  container=06db9c5b570cc12b20c6eb2c247ae49272ca970aae95f925cc869ae9250bc81a paused=false restarting=false running=false
DEBU[2023-01-19T21:54:40.230822318+01:00] done restoring container                      container=06db9c5b570cc12b20c6eb2c247ae49272ca970aae95f925cc869ae9250bc81a paused=false restarting=false running=false
DEBU[2023-01-19T21:54:40.230944670+01:00] Option Experimental: false                   
DEBU[2023-01-19T21:54:40.230988077+01:00] Option DefaultDriver: bridge                 
DEBU[2023-01-19T21:54:40.231025522+01:00] Option DefaultNetwork: bridge                
DEBU[2023-01-19T21:54:40.231064985+01:00] Network Control Plane MTU: 1500              
WARN[2023-01-19T21:54:40.236395183+01:00] Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
modprobe: WARNING: Module br_netfilter not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
, error: exit status 1 
WARN[2023-01-19T21:54:40.244085137+01:00] Running iptables --wait -t nat -L -n failed with message: `modprobe: FATAL: Module ip_tables not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
iptables v1.8.8 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.`, error: exit status 3 
DEBU[2023-01-19T21:54:40.251945480+01:00] garbage collected                             d=9.357916ms
DEBU[2023-01-19T21:54:40.259728528+01:00] /usr/bin/iptables, [-t filter -C FORWARD -j DOCKER-ISOLATION] 
DEBU[2023-01-19T21:54:40.268837129+01:00] /usr/bin/iptables, [-t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER] 
DEBU[2023-01-19T21:54:40.289963402+01:00] /usr/bin/iptables, [-t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER] 
DEBU[2023-01-19T21:54:40.311884786+01:00] /usr/bin/iptables, [-t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER] 
DEBU[2023-01-19T21:54:40.334849705+01:00] /usr/bin/iptables, [-t nat -D PREROUTING]    
DEBU[2023-01-19T21:54:40.345800971+01:00] /usr/bin/iptables, [-t nat -D OUTPUT]        
DEBU[2023-01-19T21:54:40.354927720+01:00] /usr/bin/iptables, [-t nat -F DOCKER]        
DEBU[2023-01-19T21:54:40.364177617+01:00] /usr/bin/iptables, [-t nat -X DOCKER]        
DEBU[2023-01-19T21:54:40.372976349+01:00] /usr/bin/iptables, [-t filter -F DOCKER]     
DEBU[2023-01-19T21:54:40.381618617+01:00] /usr/bin/iptables, [-t filter -X DOCKER]     
DEBU[2023-01-19T21:54:40.391635773+01:00] /usr/bin/iptables, [-t filter -F DOCKER-ISOLATION-STAGE-1] 
DEBU[2023-01-19T21:54:40.405368665+01:00] /usr/bin/iptables, [-t filter -X DOCKER-ISOLATION-STAGE-1] 
DEBU[2023-01-19T21:54:40.415098655+01:00] /usr/bin/iptables, [-t filter -F DOCKER-ISOLATION-STAGE-2] 
DEBU[2023-01-19T21:54:40.423989756+01:00] /usr/bin/iptables, [-t filter -X DOCKER-ISOLATION-STAGE-2] 
DEBU[2023-01-19T21:54:40.432904283+01:00] /usr/bin/iptables, [-t filter -F DOCKER-ISOLATION] 
DEBU[2023-01-19T21:54:40.442278255+01:00] /usr/bin/iptables, [-t filter -X DOCKER-ISOLATION] 
DEBU[2023-01-19T21:54:40.451207078+01:00] /usr/bin/iptables, [-t nat -n -L DOCKER]     
DEBU[2023-01-19T21:54:40.460268957+01:00] /usr/bin/iptables, [-t nat -N DOCKER]        
DEBU[2023-01-19T21:54:40.469557058+01:00] daemon configured with a 15 seconds minimum shutdown timeout 
DEBU[2023-01-19T21:54:40.469647503+01:00] start clean shutdown of all containers with a 15 seconds timeout... 
DEBU[2023-01-19T21:54:40.470151372+01:00] found 0 orphan layers                        
DEBU[2023-01-19T21:54:40.471187668+01:00] Cleaning up old mountid : start.             
INFO[2023-01-19T21:54:40.471409001+01:00] stopping event stream following graceful shutdown  error="<nil>" module=libcontainerd namespace=moby
DEBU[2023-01-19T21:54:40.472918981+01:00] Cleaning up old mountid : done.              
INFO[2023-01-19T21:54:40.473489350+01:00] stopping healthcheck following graceful shutdown  module=libcontainerd
INFO[2023-01-19T21:54:40.473604313+01:00] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
DEBU[2023-01-19T21:54:40.474525294+01:00] received signal                               signal=terminated
DEBU[2023-01-19T21:54:40.474847830+01:00] sd notification                               error="<nil>" notified=false state="STOPPING=1"
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: modprobe: FATAL: Module ip_tables not found in directory /lib/modules/5.15.74-2-MANJARO-ARM-RPI
iptables v1.8.8 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
 (exit status 3)

Did you reboot after the system update? This looks like an old kernel. Reboot! Then start Docker via systemd. Stop running dockerd directly. Do not do this.

If Docker still does not work when started via systemd after the reboot, check the journal for errors. Do not run dockerd manually!
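
That is, after rebooting, roughly:

sudo systemctl restart docker
journalctl -u docker.service -b --no-pager     # errors from the current boot only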

There’s something very strange about this. Running

pacman -S linux-rpi4

results in

warning: linux-rpi4-5.15.84-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...

Packages (1) linux-rpi4-5.15.84-1

but

uname -a

results in

Linux torch 5.15.74-2-MANJARO-ARM-RPI #1 SMP PREEMPT Thu Oct 20 16:43:17 UTC 2022 aarch64 GNU/Linux

And yes, the system has been rebooted. The reason for running dockerd --debug was to get more detailed error information, and it’s complaining about iptables.

Well, you need to boot the installed kernel, or you can’t load kernel modules, which Docker requires.

ARM is always a little bit special in how it boots; I can’t help you with that.

Believe me, I’ve tried. :smile:

Did you start the docker socket, as mentioned in the journald log?

sudo systemctl enable --now docker.socket

The service does not work without the socket.
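
For context: docker.service here starts dockerd with -H fd:// (see the status output above), so systemd has to hand it a listening socket from docker.socket; without that, dockerd fails with exactly the “no sockets found via socket activation” message from the journal. The stock socket unit looks roughly like this (exact contents can vary by version and distro):

[Unit]
Description=Docker Socket for the API

[Socket]
ListenStream=/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target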

It has nothing to do with the socket. The OP is not able to boot the correct kernel, which means no kernel modules can be loaded.

The original issue does, though.

Check if /etc/fstab has the correct boot device set. If it doesn’t, add it and re-install the kernel package.
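
A quick check (a sketch): see whether anything is actually mounted at /boot, and find the boot partition:

findmnt /boot     # no output means nothing is mounted there
lsblk -f          # on Manjaro ARM images the boot partition is the vfat one, usually labelled BOOT_MNJRO

On the Pi, the firmware loads the kernel from that vfat partition. If it isn’t mounted at /boot, a kernel upgrade just writes the new kernel into a plain directory on the root filesystem, and the board keeps booting the old one.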

Hi! The son is here. :wave:

Output from sudo systemctl enable --now docker.socket:

Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /usr/lib/systemd/system/docker.socket.

After that I tried:
sudo systemctl enable --now docker
and got the following response (again):

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
My /etc/fstab contains:

# <file system> <dir> <type> <options> <dump> <pass>
PARTUUID=31ff16f8-02 / btrfs  subvol=@,compress=zstd,defaults,noatime  0  0
PARTUUID=31ff16f8-02 /home btrfs  subvol=@home,compress=zstd,defaults,noatime

sudo blkid output is:

/dev/mmcblk0p1: LABEL="backup" UUID="c040a50a-1d6a-4e45-9099-d9d928f90a34" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="e9432cba-01"
/dev/sda2: LABEL="ROOT_MNJRO" UUID="977a521a-ca8e-4dbc-a758-f86f90fe7f6a" UUID_SUB="2e55d431-c92d-4edd-8946-6c4dac3e10f1" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="31ff16f8-02"
/dev/sda1: SEC_TYPE="msdos" LABEL_FATBOOT="BOOT_MNJRO" LABEL="BOOT_MNJRO" UUID="4EB4-16D6" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="31ff16f8-01"

That explains it. The boot partition is not in your fstab.

Please add it, reinstall the kernel and reboot.
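
Given the blkid output above, the missing entry would be something like this (a sketch; mount options to taste):

PARTUUID=31ff16f8-01  /boot  vfat  defaults  0  0

and then:

sudo mount /boot
sudo pacman -S linux-rpi4
sudo reboot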


Thank YOU! For anyone who comes after me, here’s what I did:

  1. blkid to find out the PARTUUID of the boot partition
  2. Added PARTUUID=THEPARTUUIDFROMBLKID /boot vfat defaults,noexec,nodev,showexec 0 0 to /etc/fstab
  3. sudo pacman -S linux-rpi4 to reinstall the kernel.
  4. sudo shutdown -r now to reboot.
  5. sudo systemctl enable --now docker

Started without error messages.
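
If anyone wants to double-check that the fix took, something along these lines should do it (hello-world is pulled from Docker Hub, so it needs network access):

uname -r                           # should now match the installed kernel package
sudo docker info                   # daemon is reachable
sudo docker run --rm hello-world   # end-to-end smoke test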

