Docker Community Forums

Share and learn in the Docker community.

CPU spiking for hours


(Dylangrafmyre) #1

CPU started spiking for hours from the process `com.docker.driver.amd64-linux` (11194).

Killing the process and Docker for Mac resolved the issue.

Actual behavior

Information

OS: 10.10.5
Build: 5404


Docker is unresponsive and eats 200% cpu after macbook wakeup
(Josh Reichardt) #2

Seeing this as well.

The high CPU really eats up battery life.


(Jan Weitz) #3

Still a problem:

Version 1.11.0-beta7 (build: 5830)
8b45bc3afc0ca2363890032ac63b003d80ccc242

14.5.0 Darwin Kernel Version 14.5.0: Mon Jan 11 18:48:35 PST 2016; root:xnu-2782.50.2~1/RELEASE_X86_64 x86_64

docker ps hangs.

I would send you a tar.gz of my logfiles, but I am not allowed to.


Docker for mac beta still pins cpu when coming out of suspended mode
(Jan Weitz) #4

Version 1.11.0-beta7 (build: 5830)
8b45bc3afc0ca2363890032ac63b003d80ccc242

Docker Daemon prevents OSX from Sleeping

14.5.0 Darwin Kernel Version 14.5.0: Mon Jan 11 18:48:35 PST 2016; root:xnu-2782.50.2~1/RELEASE_X86_64 x86_64


(Dave Tucker) #5

Hi!

Thanks everyone for reporting this. We’re aware of some issues with CPU spikes but they are really hard to track down.

If this happens again could you:

  1. Tell us what you were doing immediately before this problem happened

  2. Capture the output of `sudo dtruss -p <pid>`, where `<pid>` is the PID shown in Activity Monitor, and send it to us (see the example commands below)!
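
For example (a rough sketch; the output file name is just a placeholder):

$ pgrep -f com.docker.driver.amd64-linux      # prints the PID of the driver process
$ sudo dtruss -p <pid> 2> docker-dtruss.txt   # dtruss writes its trace to stderr, so redirect that to a file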

Thanks!

– Dave


(Christian Jul Jensen) #6

I have that same problem. I don’t know exactly what triggers it, but I think it might be related to file I/O. I made traces; where should I send them?


(Magnus Bergmark) #7

Hi! I have this problem too after installing Docker for Mac Beta yesterday. I’ll try to give as much detail as possible (as it’s happening right now as we speak).

What I have
OS X Yosemite 10.10.5
MacBook Pro 2.5 GHz Intel i7, 16 GB RAM, SSD

Docker
Version 1.11.0-beta8 (build: 6072)
3c1bfeb0e86a9403f82302edfea4c4987cc2cb32

What I did

I migrated from Docker Toolbox yesterday. I manually removed the VM with `docker-machine rm default` afterwards.

I started up the computer after leaving it to sleep from the end of the workday yesterday. Immediately when the computer woke up, com.docker.driver.amd64-linux was using 400% system CPU. It’s been going at it for half an hour now.

I don’t have any Time Machine drive plugged in right now, but I usually have one plugged in.
I used VPN yesterday, Cisco AnyConnect, and I did not manually disconnect before putting my machine to sleep.

I don’t have any Docker apps running. I mainly use Docker for running tests right now, and I haven’t even used it since installing. I just tried docker version and docker ps to see that it worked properly after installing.

After collecting the data below, I tried to disconnect from the network. That did not help.
I then Quit (“Quit Docker”) using the menu icon, and that shut everything down perfectly.

I started it again, and it took some CPU for a few seconds and then went dormant again.

Data

› ps -p 35864
  PID TTY           TIME CMD
35864 ??       135:24.07 /Applications/Docker.app/Contents/MacOS/com.docker.driver.amd64-linux -xhyve /Users/man
› sudo dtruss -s 35864
dtrace: failed to execute 35864: file is set-id or unreadable [Note: the '-c' option requires a full pathname to the file]

(dtruss does not work on parent, grand-parent or great grand-parent (Docker) process either. Same error.)
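
My guess is that the `-s` invocation above made dtruss treat 35864 as a command name to execute rather than a PID to attach to, hence the note about the '-c' option. The attach form would look something like this, although it may well fail for the same permission reasons:

$ sudo dtruss -p 35864 2> dtruss-35864.txt   # -p attaches to a running PID; dtruss writes to stderr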

Since that doesn’t seem to work, I’m pasting whatever I can find.

Memory size: 4.4 MB
Virtual memory size: 548.99 GB
Shared memory size: 5.9 MB
Private memory size: 332 K

Open files

/Users/mange/Library/Containers/com.docker.docker/Data
/Applications/Docker.app/Contents/MacOS/com.docker.driver.amd64-linux
/System/Library/Frameworks/Hypervisor.framework/Versions/A/Hypervisor
/System/Library/Frameworks/vmnet.framework/Versions/A/vmnet
/Applications/Docker.app/Contents/Resources/lib/mirage-block.so
/System/Library/PrivateFrameworks/Netrb.framework/Versions/A/Netrb
/Users/mange/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/console-ring
/usr/lib/dyld
/private/var/run/dyld_shared_cache_x86_64h
->0xee4b6ef537add22d
/dev/null
->0xee4b6ef5419f38ad
->0xee4b6ef537adfa0d
->0xee4b6ef5419f38ad
->0xee4b6ef5417ac0dd
->0xee4b6ef55d6a81a5
->0xee4b6ef537adeb9d
->0xee4b6ef5419f164d
->0xee4b6ef537adf32d
->0xee4b6ef53fdbc03d
/Users/mange/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
/dev/random
/var/tmp/com.docker.vsock/connect
/var/tmp/com.docker.vsock/00000003.00000948
/var/tmp/com.docker.vsock/00000003.000005f5
->0xee4b6ef5419f419d
->0xee4b6ef5419f0f6d
->0xee4b6ef5419f37fd
->0xee4b6ef53fdba35d
/dev/ptmx
count=0, state=0x2
count=1, state=0x2
->0xee4b6ef5472b5a0d
->0xee4b6ef5419f17ad
->0xee4b6ef5413c2afd
->0xee4b6ef54ca660dd
/var/tmp/com.docker.vsock/connect
/var/tmp/com.docker.vsock/connect
->0xee4b6ef53b8cfb05
/var/tmp/com.docker.vsock/00000003.00000948
/var/tmp/com.docker.vsock/00000003.00000948

Sample (using “Sample” in Activity Monitor)

Is there anything else I can do to help?


(Manglu) #8

Hi Dave,

Similar to what Magnus described, I had used Docker for Mac Beta yesterday and had not shutdown the machine.

I started working today and noticed that the CPU was high (I had not been working on the Docker stuff for the whole of the day).

I had to kill the PID so that I could use the machine for my other purposes.

Thanks


(Manglu) #9

Hi

I updated Docker from beta 8 to Version 1.11.0-beta9 (build: 6388) this morning.

I had not done any Docker-related activity, and my CPU is spiking pretty high.

Here is the output from dtruss (I don’t know enough about dtruss to interpret the values being printed):

$ sudo dtruss -p 2021
Password:
SYSCALL(args) = return
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0

select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = 0 0
kevent(0x14, 0x0, 0x0) = -1 Err#4
select(0x0, 0x0, 0x0, 0x0, 0x18DC15ED8) = -1 Err#4
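
From what I can tell, the repeated select(...) = 0 0 lines just mean the timeout expired on each call, so on its own this looks like a polling loop rather than real work. If it helps, next time it happens I can also count which syscalls dominate with a one-liner along these lines (my best guess at a useful DTrace invocation; 2021 is the PID from above):

$ sudo dtrace -n 'syscall:::entry /pid == 2021/ { @[probefunc] = count(); }'   # Ctrl-C prints a count per syscall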

I killed the process as it was chewing up far too much CPU.

The CPU activity of this process is shown in the image below.


(Dylangrafmyre) #10

The system is just sitting idle.

select(0x0, 0x0, 0x0, 0x0, 0x18DE9BED8)		 = 0 0
psynch_cvwait(0x10CFFF2C0, 0x312C1C01312C1D00, 0x312C1C00)		 = -1 Err#316
psynch_cvsignal(0x7FBE09005318, 0x169C5A00169C5B00, 0x169C5A00)		 = 257 0
psynch_cvwait(0x7FBE09005318, 0x169C5A01169C5B00, 0x169C5A00)		 = 0 0
psynch_cvsignal(0x10CFFF2C0, 0x312C1D00312C1E00, 0x312C1C00)		 = 257 0
psynch_cvwait(0x10CFFF2C0, 0x312C1D01312C1E00, 0x312C1C00)		 = 0 0
psynch_cvwait(0x10CFFF2C0, 0x312C1E01312C1F00, 0x312C1E00)		 = -1 Err#316
psynch_cvsignal(0x7FBE09005318, 0x169C5B00169C5C00, 0x169C5B00)		 = 257 0
psynch_cvwait(0x7FBE09005318, 0x169C5B01169C5C00, 0x169C5B00)		 = 0 0
psynch_cvsignal(0x10CFFF2C0, 0x312C1F00312C2000, 0x312C1E00)		 = 257 0
psynch_cvwait(0x10CFFF2C0, 0x312C1F01312C2000, 0x312C1E00)		 = 0 0
psynch_cvwait(0x10CFFF2C0, 0x312C2001312C2100, 0x312C2000)		 = -1 Err#316
psynch_cvsignal(0x7FBE09005318, 0x169C5C00169C5D00, 0x169C5C00)		 = 257 0
psynch_cvwait(0x7FBE09005318, 0x169C5C01169C5D00, 0x169C5C00)		 = 0 0
psynch_cvsignal(0x10CFFF2C0, 0x312C2100312C2200, 0x312C2000)		 = 257 0
psynch_cvwait(0x10CFFF2C0, 0x312C2101312C2200, 0x312C2000)		 = 0 0
psynch_cvwait(0x10CFFF2C0, 0x312C2201312C2300, 0x312C2200)		 = -1 Err#316
psynch_cvsignal(0x7FBE09005318, 0x169C5D00169C5E00, 0x169C5D00)		 = 257 0
psynch_cvwait(0x7FBE09005318, 0x169C5D01169C5E00, 0x169C5D00)		 = 0 0
psynch_cvsignal(0x10CFFF2C0, 0x312C2300312C2400, 0x312C2200)		 = 257 0
psynch_cvwait(0x10CFFF2C0, 0x312C2301312C2400, 0x312C2200)		 = 0 0
psynch_cvwait(0x10CFFF2C0, 0x312C2401312C2500, 0x312C2400)		 = -1 Err#316
psynch_cvsignal(0x7FBE09005318, 0x169C5E00169C5F00, 0x169C5E00)		 = 257 0
psynch_cvwait(0x7FBE09005318, 0x169C5E01169C5F00, 0x169C5E00)		 = 0 0
psynch_cvsignal(0x10CFFF2C0, 0x312C2500312C2600, 0x312C2400)		 = 257 0
psynch_cvwait(0x10CFFF2C0, 0x312C2501312C2600, 0x312C2400)		 = 0 0
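
As far as I understand it, psynch_cvwait and psynch_cvsignal are the kernel side of pthread condition-variable waits and signals, so this looks like threads handing control back and forth in a tight loop. Next time I can also grab a user-space sample to go with the dtruss output, something like this (the PID here is a placeholder):

$ sample <pid> 30 -file docker-sample.txt   # 30-second sample of the process's call stacks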

Killing the process and restarting Docker resolved the CPU issue and got docker ps working again.


(Dave Tucker) #11

@christianjul you can paste them here, or send to beta-feedback@docker.com


(Dave Tucker) #12

Thanks to everyone who’s supplied traces so far. We’re looking in to it…


(Sgroves) #13

I’m getting what appears to be a similar issue when loading pages from a Docker environment in Chrome. The tab’s loading spinner spins counter-clockwise, which indicates Chrome is still trying to connect to the host.
Apparently I’m not allowed to upload files as a new user, so I can’t post the full trace. This line appears to be the culprit:
2211/0x6c6f: 415506 14317059 6 psynch_cvwait(0x7FEAA5719FA8, 0x9610000096500, 0x96100) = 0 0

If there’s a way I can provide the full trace, let me know. Seems like a weird limitation for this sort of forum.


(Seandon Mooy) #14

Hello! I’m also having this issue, as are a handful of others over in this thread: https://github.com/docker/for-mac/issues/2601

We’ve provided a bit of dtruss output and are ready and willing to recreate and debug!

@davetucker - is there any movement on this? Would love to lend a hand if possible!

Thanks!