php:7.4-fpm-alpine inode overflow

Hello,

I’ve been developing in a php:7.4-fpm-alpine pod without problems until last week, when, without any modifications or new installations on my side, the filesystem started behaving strangely.

[screenshot showing the mounted volume limited to 999 inodes, with inode usage over 100%]

As you can see, the mount is limited to just 999 inodes and usage is well over 100%. I’ve tried everything, but I can’t manage to fix it.

This is a volume bind-mounted from a local Windows 10 directory into an Ubuntu Linux pod.
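In case it helps, this is roughly how the inode usage can be checked inside the container; the container name and the /var/www/html mount path below are just placeholders for my actual setup:

# Inode usage for every filesystem visible inside the container
docker exec -it my-php-container df -i

# Only the bind-mounted project directory (example path)
docker exec -it my-php-container df -i /var/www/html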

Here is my full docker info output:

Client:
 Context: default
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.6.3)
  compose: Docker Compose (Docker Inc., v2.1.1)
  scan: Docker Scan (Docker Inc., 0.9.0)

Server:
 Containers: 9
  Running: 9
  Paused: 0
  Stopped: 0
 Images: 6
 Server Version: 20.10.10
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: u27y9dto1zl2nbtzx5503mguc
  Is Manager: true
  ClusterID: oq8dge0ct4cwxxzg4yc8ts0x8
  Managers: 1
  Nodes: 1
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.65.3
  Manager Addresses:
   192.168.65.3:2377
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 5b46e404f6b9f661a205e28d59c982d3634148f8
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.16.3-microsoft-standard-WSL2
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 9.728GiB
 Name: docker-desktop
 ID: UR3Q:ED2Q:S4BW:S2AD:LT4R:UYLY:JYV4:MO26:446I:4C7D:ZD2C:74BO
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support

I’d appreciate any help on this matter.

Thanks in advance.

I wouldn’t mount a data folder from the Windows host into Docker Desktop. Windows uses a different filesystem, so there is more chance of something going wrong. I tried to mount a folder from my Windows host and saw the same inode limit, but the filesystem was not C:\folder but drvfs. That is a filesystem which allows us to set Linux permissions on a Windows drive. It is good for scripts, but I am not sure it was designed for a large number of files.
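If you want to see it yourself, you can check from inside the WSL2 distro what the Windows drive is actually mounted as; /mnt/c below is just the default mount point for the C: drive:

# Show the filesystem type behind the Windows drive mount
df -Th /mnt/c

# Or look at the mount entry and its options directly
mount | grep /mnt/c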

When I am on Windows, I always create my projects inside WSL2 on a Linux (ext4) filesystem and open them from Windows through the 9p mount, while the files stay on the Linux filesystem. That is available on Windows 10 and improved on Windows 11 and the Windows 10 Insider builds. You can actually browse the WSL2 filesystems from Windows; the Docker Desktop filesystem is also available.
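A minimal sketch of that workflow, assuming an Ubuntu distro and a project under ~/projects/myapp (the distro name and all paths are just examples):

# Inside WSL2: keep the project on the ext4 filesystem
mkdir -p ~/projects/myapp
cd ~/projects/myapp

# Bind-mount it into the container from the Linux side instead of from C:\
docker run -d --name myapp -v ~/projects/myapp:/var/www/html php:7.4-fpm-alpine

# From Windows Explorer the same files are reachable over the 9p share:
#   \\wsl$\Ubuntu\home\<user>\projects\myapp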

The other option is creating the project on the Linux filesystem and working with Visual Studio Code in remote mode (the Remote - WSL extension).
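For the Visual Studio Code route, something like this opens the project remotely, assuming VS Code and its Remote - WSL extension are installed on the Windows side:

# Inside the WSL2 distro, from the project directory on ext4
cd ~/projects/myapp
code .
# The editor UI runs on Windows, but files are read and written by a VS Code server inside WSL2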

Hello,

Thank you very much. It really helps to know that this is a general system issue and not something specific to my local setup, so I’ll stop trying to fix it.

Unfortunately, none of the three options fits my requirements, so I’m obliged to move to Linux, where we don’t have this kind of issue.

I’ll give the approach of browsing the Docker filesystem from Windows one last try anyway.

Thank you!