Docker container won't start as / partition 100% full

New to Docker; I have a container that has used up all the disk space on /.

docker exec -ti ecstatic_keller sh -c "df -h"
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:4-125834138-d151ae3d32e20fe3b95d29de0c27ea5f1640c23e95b2cf52c4e50c4385f518d9 10G 10G 56K 100% /

I don't know how to clear this up.

docker ps -s
ecstatic_keller 9.71GB (virtual 10.4GB)

docker system df

TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 23 20 6.213GB 1.387GB (22%)
Containers 52 9 11.7GB 1.984GB (16%)
Local Volumes 6 5 50.52MB 0B (0%)
Build Cache 0 0 0B 0B

I'd like to free up the disk space inside that container, but I can't see what is taking up all the space, so I'm not sure whether I should just delete the container or run something to clean it up.

You can run

docker container diff ecstatic_keller

to see files written on the container's filesystem.
You could then use the du command inside the container to find the large files.
Most likely some caches or log files take up that space.
If that is true, you should be able to simply remove the container and recreate it.
I hope you saved all your persistent data on a volume and not on the container’s filesystem.
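To narrow down where the space went, a small helper like this can run du inside the container (the helper name is mine; ecstatic_keller is the container from this thread, swap in your own):

```shell
# Print the size of each top-level directory on the container's / filesystem.
container_du() {
  # -x stays on the container's root filesystem, -d1 limits depth to one level
  docker exec "$1" du -x -d1 -h /
}

# Usage: container_du ecstatic_keller | sort -h | tail
```

Sorting the output puts the largest directories last, which usually points straight at the offending cache or log directory.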

Thanks. I have files such as:
C /opt
C /opt/jail
A /opt/jail/driver-jail

C /etc
A /etc/rancher
A /etc/rancher/k3s
A /etc/rancher/k3s/k3s.yaml
C /root
A /root/.bash_history
C /root/.kube
A /root/.kube/cache
A /root/.kube/cache/discovery
A /root/.kube/cache/discovery/localhost_443

pages and pages of
A /root/.kube/http-cache/4bb40b472debfbf67f6eb4b78e8ba0ed
C /tmp
A /tmp/k3s.7b2e51e2ebab.root.log.ERROR.20201111-214146.80
A /tmp/k3s.7b2e51e2ebab.root.log.ERROR.20211117-185629.78
A /tmp/k3s.7b2e51e2ebab.root.log.ERROR.20201113-165946.81
A /tmp/k3s.7b2e51e2ebab.root.log.ERROR.20211114-171121.76
A /tmp/k3s.7b2e51e2ebab.root.log.INFO.20201106-143755.78
A /tmp/k3s.7b2e51e2ebab.root.log.INFO.20211117-043611.81
Most likely this is gigabytes' worth of files.

C /usr
A /usr/libexec
A /usr/libexec/kubernetes
A /usr/libexec/kubernetes/kubelet-plugins

Is there any way I can remove all the root.log files in /tmp manually? This might be my issue. The container only runs for about 20 seconds, and when I ran ls -rhlt /tmp/ it showed no files, so I was confused about where these files are located.
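Since the container only runs briefly, one way to look at its filesystem without exec'ing into it at the right moment is to export it as a tar stream and list the entries by size; docker export works on stopped containers too. A sketch (the helper name is mine; the size column position assumes GNU tar):

```shell
# List the 20 largest files in a container's filesystem, running or stopped.
largest_files() {
  # GNU tar's verbose listing puts the entry size in column 3
  docker export "$1" | tar -tvf - | sort -k3 -n | tail -n 20
}

# Usage: largest_files ecstatic_keller
```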

Actually, I have about 10 seconds while the container runs, and I saw many files:
docker exec -ti ecstatic_keller sh -c "ls -R /tmp"
/tmp:
k3s.7b2e51e2ebab.root.log.ERROR.20191101-154759.75
k3s.7b2e51e2ebab.root.log.ERROR.20191101-154814.72
k3s.7b2e51e2ebab.root.log.ERROR.20191101-154842.73
k3s.7b2e51e2ebab.root.log.ERROR.20191102-163814.67
k3s.7b2e51e2ebab.root.log.ERROR.20191204-035450.76
k3s.7b2e51e2ebab.root.log.ERROR.20201031-144255.76

Perhaps I can run the below in time:

docker exec -ti ecstatic_keller sh -c "rm -rf /tmp/*root.log*"

(The forum stripped the glob wildcards, one in front of root and one after .log, when I first posted this.)
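Rather than racing the container by hand, the exec can be triggered as soon as the container reports itself running. A sketch (the helper name is mine; note the single quotes keep the glob from expanding on the host):

```shell
# Poll until the short-lived container is up, then delete the k3s log files
# in its /tmp before it exits again.
clean_tmp_logs() {
  until [ "$(docker inspect -f '{{.State.Running}}' "$1" 2>/dev/null)" = "true" ]; do
    sleep 0.2
  done
  # single quotes: the glob expands inside the container's shell, not the host's
  docker exec "$1" sh -c 'rm -f /tmp/*root.log*'
}

# Usage: clean_tmp_logs ecstatic_keller
```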

Thanks, I was able to delete the logs and everything is back up now.