Hi all,
I am trying to get some experience with Docker and Docker Swarm on a single machine.
So far the experiments have gone well, but I am seeing a strange behaviour in swarm mode: periodically, with no fixed regularity, all services restart for no apparent reason. For example, the Gitea container was originally created at 02:09, stopped and restarted at 22:06, stopped again just two minutes later, and has been running since 22:08.
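For context, the times can be seen with something like this (the format string is just an example):

docker ps -a --format 'table {{.Names}}\t{{.CreatedAt}}\t{{.Status}}'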
Every single docker-compose file includes the following snippet:
version: "3"
services:
  main:
    image: gitea/gitea:latest
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
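If it matters, I believe the restart policy that actually ended up in the service can be read back from the service spec like this (gitea_main is just an example name for the deployed service):

docker service inspect gitea_main --format '{{json .Spec.TaskTemplate.RestartPolicy}}'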
So, I first thought that the program had crashed inside the container (conveniently ignoring that all programs crashed at the same time :-)) and was restarted according to the above policy. However, the container logs only show ordinary requests and then a SIGTERM arriving from outside:
10.255.0.2 - - [22/Feb/2019:09:48:54 +0000] "GET / HTTP/1.1" 500 1672
10.255.0.2 - - [22/Feb/2019:10:14:13 +0000] "GET / HTTP/1.0" 500 1655
10.255.0.2 - - [22/Feb/2019:15:38:35 +0000] "\x03" 400 226
10.255.0.2 - - [22/Feb/2019:15:38:35 +0000] "\x03" 400 226
10.255.0.2 - - [22/Feb/2019:17:03:31 +0000] "GET / HTTP/1.1" 200 5099
10.255.0.2 - - [22/Feb/2019:17:11:38 +0000] "GET / HTTP/1.1" 500 1666
Caugth signal SIGTERM, passing it to child processes…
Caugth signal SIGTERM, passing it to child processes…
[Fri Feb 22 21:07:04.893407 2019] [mpm_prefork:notice] [pid 122] AH00169: caught SIGTERM, shutting down
So it looks like the Docker daemon sent the SIGTERM. The question is: why?
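If it helps with the diagnosis, I assume the task history and the daemon events around the restarts are the places to look, along these lines (gitea_main again is just an example service name):

docker service ps --no-trunc gitea_main
docker events --since 24h --filter type=container

As far as I understand, docker service ps shows the desired/current state and any error per task, and docker events lists the kill/die/start events around the restart times.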
The system details (output of docker info) are:
Containers: 21
 Running: 7
 Paused: 0
 Stopped: 14
Images: 7
Server Version: 18.09.2
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
 NodeID: fb0bk1eyp70chxwtnmbczdzew
 Is Manager: true
 ClusterID: yvke64ipr0le72r6lfja9m7lo
 Managers: 1
 Nodes: 1
 Default Address Pool: 10.0.0.0/8
 SubnetSize: 24
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 10
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 178.33.26.29
 Manager Addresses:
  178.33.26.29:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 09c8266bf2fcf9519a651b04ae54c967b9ab86ec
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.0-8-amd64
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.8GiB
Name: menkisyscloudsrv29
ID: KKDM:55YP:BSAM:FNOZ:AA55:O2IJ:LTIK:MCEI:X47U:DGJ3:AK5U:EZT3
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: No swap limit support
Do you have any idea what's happening? Where can I get more detailed logs about what dockerd is doing?
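My assumption is that on Debian 9 the daemon's own logs end up in the systemd journal, i.e. something like:

journalctl -u docker.service --since "2019-02-22"

Is that the right place to look?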
Thanks!