'docker service logs' seems to hang after rapidly printing a large amount of output

Hello,

I’m not a native English speaker, so please bear with me.
And if this topic is a duplicate, I’m sorry about that.

I ran a service and noticed that its log output behaves strangely: it hangs or stalls after rapidly printing a large number of lines, as if some kind of ‘log queue’ has filled up.

Here is my docker version:

Client:
 Version:       18.03.0-ce
 API version:   1.37
 Go version:    go1.9.4
 Git commit:    0520e24
 Built:         Wed Mar 21 23:09:15 2018
 OS/Arch:       linux/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.03.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   0520e24
  Built:        Wed Mar 21 23:13:03 2018
  OS/Arch:      linux/amd64
  Experimental: false

I first saw this problem in the logs of a PostgreSQL service. I wanted to reproduce it with a simpler example, so I wrote a short Perl script:

$ cat output_large_log.pl
# just prints from 1 to 10000000.
for ( 1 .. 10_000_000 ) {
   print "> $_\n";
}
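
(As a side note, the same output can probably be produced without the bind mount by using a Perl one-liner like the one below. I have not tested this exact form, but it should be equivalent to the script:)

$ docker run --rm -it perl:5.26 perl -e 'print "> $_\n" for 1 .. 10_000_000'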

If I run this script in the foreground using the Perl image, it works fine:

$ docker run --name test -it -v ${PWD}:/app perl:5.26 perl /app/output_large_log.pl
> 1
> 2
...
> 10000000

If I run it in the background and follow it with docker logs --follow, it also works fine:

$ docker run -d --name test -it -v ${PWD}:/app perl:5.26 perl /app/output_large_log.pl && docker logs -f test
<container ID>
> 1
> 2
...
> 10000000

But when I run it as a Docker service and try to view its logs, only part of the output is printed quickly; the rest comes out very slowly, at roughly 10 seconds per line.

$ docker service create --name test --mount type=bind,src=${PWD},dst=/app --constraint 'node.hostname == YRECOMT1' --restart-condition none perl:5.26 perl /app/output_large_log.pl
rshzv9prczc13md0azql4ghsz
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Waiting 2 seconds to verify that tasks are stable...

$ docker service logs -f test
test.1.mevziwr6ljic@YRECOMT1    | > 1
test.1.mevziwr6ljic@YRECOMT1    | > 2
...
test.1.mevziwr6ljic@YRECOMT1    | > 1159690  # stopped here
test.1.mevziwr6ljic@YRECOMT1    | > 1159691  # printed after about 10 seconds
test.1.mevziwr6ljic@YRECOMT1    | > 1159692  # after 10 seconds
test.1.mevziwr6ljic@YRECOMT1    | > 1159693  # after 10 seconds
...

Of course, the task itself reached the Shutdown state long ago (it just runs the loop and exits). The same thing happens even when I use docker service logs without --follow:

$ docker service logs test
...
test.1.mevziwr6ljic@YRECOMT1    | > 1159690  # stopped here
test.1.mevziwr6ljic@YRECOMT1    | > 1159691  # printed after about 10 seconds

However, running docker logs against the “task” container (instead of the service) works fine:

$ docker logs test.1.rnzkscj4pzy365dgxt3n5sngx
> 1
...
> 10000000
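
To double-check that the container log itself is complete, a simple line count should do (the script prints exactly 10,000,000 lines, so that is the expected count):

$ docker logs test.1.rnzkscj4pzy365dgxt3n5sngx | wc -l   # should print 10000000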

And docker service logs --tail 10 shows that the task actually completed successfully:

$ docker service logs --tail 10 test
test.1.mevziwr6ljic@YRECOMT1    | > 9999991
test.1.mevziwr6ljic@YRECOMT1    | > 9999992
test.1.rnzkscj4pzy3@YRECOMT1    | > 9999993
...
test.1.mevziwr6ljic@YRECOMT1    | > 10000000

So I guess there might be a problem related to some kind of buffer capacity or size limit.

As I said above, I first noticed this problem while watching the logs of a PostgreSQL service. A colleague of mine was inserting 300,000 rows into a database, and the log output hung after roughly the 56,000th row; after that, it printed one insert log line about every 90 seconds.

If this were only a logging problem, it would be tolerable. But he said the PostgreSQL service itself seemed to hang after such a large-scale insert, and I’m still investigating that.

Any advice would be appreciated.
Thank you.

Here is my docker info output, in case it helps:

Containers: 13
 Running: 10
 Paused: 0
 Stopped: 3
Images: 316
Server Version: 18.03.0-ce
Storage Driver: devicemapper
 Pool Name: docker-8:4-12885517380-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 11.69GB
 Data Space Total: 107.4GB
 Data Space Available: 95.68GB
 Metadata Space Used: 20.97MB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.127GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.140-RHEL7 (2017-05-03)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: w65vfus5doyotlau9q8c1793u
 Is Manager: true
 ClusterID: lgc2xrz6qhog5v73cu0k0w47d
 Managers: 1
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.53.31
 Manager Addresses:
  192.168.53.31:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-693.17.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 20
Total Memory: 62.64GiB
Name: *******
ID: DWIN:NA7H:OJLG:4H3J:Z5MV:M3RY:HCYZ:IKGT:I2DM:JYA3:T7TI:EYZX
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: gypark
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 192.168.*******:20000
 ********:20000
 127.0.0.0/8
Live Restore Enabled: false

It’s too bad this went three years with no response. I have the same issue, and it’s a problem I can’t find a resolution for in the docs. My workflow tries to save the logs from each container before removing the container, but when the logs are too long, the docker service logs <name> command just hangs forever, preventing anything else from finishing up. I can use --tail to confirm that the container actually completed its job, but I can’t retrieve the full log for record keeping.
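
Based on the original post, docker logs against the individual task containers still returns the full output, so something like the following might work as a stopgap for saving logs before cleanup (“myservice” is a placeholder name, and this only finds the task containers running on the node where it is executed):

$ for name in $(docker ps -a --filter "name=myservice." --format '{{.Names}}'); do docker logs "$name" > "$name.log" 2>&1; done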


I have the same problem. The output of docker service logs ... prints some lines and then gets stuck. Even if I wait for a few minutes, there is no more output.