I have two machines running the exact same version of Docker (both on Ubuntu 24.04, with almost identical Linux kernel versions). Here is a side-by-side diff of the two docker info outputs:
Client: Docker Engine - Community Client: Docker Engine - Community
Version: 29.2.0 Version: 29.2.0
Context: default Context: default
Debug Mode: false Debug Mode: false
Plugins: Plugins:
buildx: Docker Buildx (Docker Inc.) buildx: Docker Buildx (Docker Inc.)
Version: v0.30.1 Version: v0.30.1
Path: /usr/libexec/docker/cli-plugins/docker-buildx Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.) compose: Docker Compose (Docker Inc.)
Version: v5.0.2 Version: v5.0.2
Path: /usr/libexec/docker/cli-plugins/docker-compose Path: /usr/libexec/docker/cli-plugins/docker-compose
Server: Server:
Containers: 28 | Containers: 9
Running: 0 Running: 0
Paused: 0 Paused: 0
Stopped: 28 | Stopped: 9
Images: 479 | Images: 36
Server Version: 29.2.0 Server Version: 29.2.0
Storage Driver: overlay2 Storage Driver: overlay2
Backing Filesystem: extfs Backing Filesystem: extfs
Supports d_type: true Supports d_type: true
Using metacopy: false Using metacopy: false
Native Overlay Diff: true Native Overlay Diff: true
userxattr: false userxattr: false
Logging Driver: json-file Logging Driver: json-file
Cgroup Driver: systemd Cgroup Driver: systemd
Cgroup Version: 2 Cgroup Version: 2
Plugins: Plugins:
Volume: local Volume: local
Network: bridge host ipvlan macvlan null overlay Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
CDI spec directories: CDI spec directories:
/etc/cdi /etc/cdi
/var/run/cdi /var/run/cdi
Swarm: inactive Swarm: inactive
Runtimes: io.containerd.runc.v2 runc Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc Default Runtime: runc
Init Binary: docker-init Init Binary: docker-init
containerd version: dea7da592f5d1d2b7755e3a161be07f43fad8f75 containerd version: dea7da592f5d1d2b7755e3a161be07f43fad8f75
runc version: v1.3.4-0-gd6d73eb8 runc version: v1.3.4-0-gd6d73eb8
init version: de40ad0 init version: de40ad0
Security Options: Security Options:
apparmor apparmor
seccomp seccomp
Profile: builtin Profile: builtin
cgroupns cgroupns
Kernel Version: 6.8.0-90-generic | Kernel Version: 6.8.0-88-generic
Operating System: Ubuntu 24.04.3 LTS Operating System: Ubuntu 24.04.3 LTS
OSType: linux OSType: linux
Architecture: x86_64 Architecture: x86_64
CPUs: 12 | CPUs: 4
Total Memory: 31.27GiB | Total Memory: 7.756GiB
Name: renndlxrdl4448 | Name: rennslxcomp66
ID: e6c2d1bb-7c13-47a6-8859-2b55b8bd209d | ID: 24e84954-8275-4fa2-9131-ec789d763724
Docker Root Dir: /work/docker | Docker Root Dir: /workspace/docker
Debug Mode: false Debug Mode: false
Experimental: false Experimental: false
Insecure Registries: Insecure Registries:
::1/128 ::1/128
127.0.0.0/8 127.0.0.0/8
Live Restore Enabled: false Live Restore Enabled: false
Firewall Backend: iptables Firewall Backend: iptables
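For reference, a side-by-side view like the one above can be produced with diff in two-column mode; the file names below are just placeholders for the saved docker info outputs:

$ docker info > machine1-info.txt    # run on machine 1
$ docker info > machine2-info.txt    # run on machine 2
$ diff -y --width=200 machine1-info.txt machine2-info.txt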
Apart from the container/image counts, hardware, hostname, kernel patch level, and Docker Root Dir, the two daemons are configured identically. Yet I’m running into build issues because of /tmp permissions:
Machine 1:
$ docker run -it --rm ubuntu:focal ls -ald /tmp
drwxrwxrwt 2 root root 4096 Apr 4 2025 /tmp
Machine 2:
$ docker run -it --rm ubuntu:focal ls -ald /tmp
drwxr-xr-t 2 root root 4096 Jan 19 09:25 /tmp
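For a less ambiguous comparison, the numeric mode and ownership of /tmp can also be checked on both machines (command only, outputs omitted here):

$ docker run --rm ubuntu:focal stat -c '%a %U:%G %n' /tmp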
This makes no sense to me: on the first machine there is no issue, while on the second one apt-get fails because /tmp is not accessible.
It is even more puzzling because the apt-get command runs as root, so there should be no permission issue for root in the first place.
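To double-check that assumption, this is the kind of probe I would run on both machines (prints the uid and does a /tmp write test in one go):

$ docker run --rm ubuntu:focal sh -c 'id; touch /tmp/probe && echo "/tmp writable" || echo "/tmp NOT writable"'

For completeness, the failing build boils down to a step like the following; this is a minimal sketch, not my actual Dockerfile, and the package is only an example:

FROM ubuntu:focal
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates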