I am fairly new to Docker, so please let me know if I can post any other information that would be helpful.
The short of the problem is that when I run a simulation program in a container, it takes MUCH longer to execute than when running directly on the host: anywhere from 3 to 11 times slower.
I containerized a simulation that uses Adevs (a C++ library for discrete event simulation) and a patched version of QEMU (with an image running some live software that interacts with the Adevs sim). The simulation is pretty basic: a "serial echo" device modeled in Adevs listens for a message from the emulator and sends a message back upon receipt. The emulator is started from the simulation; when it boots, it runs a script that sends the message over a virtual serial connection.
This simulation finishes in about 20 seconds when run on the host. In an Ubuntu-based container, the same simulation takes 65 seconds. I ran a cAdvisor container to look at the CPU usage. Running on the host, the CPU usage looks like this:
The simulation starts at 11:31:45 and ends 20 seconds later. When running the same simulation in a container:
[place holder: I will try to post this picture in another post]
Here the container was started around 11:53:45 and runs for the next 65 seconds. As you can see, the container is using a significant amount of CPU, running on all 7 cores at full bore. The first cluster of peaks is QEMU booting; it finishes booting at about 11:54:15 and sends the message to my Adevs serial echo model, after which the load settles down. The Adevs model is set up to pause a few seconds before replying; the three little peaks at the end correspond to the three messages sent from the Adevs object back to the emulator.
It’s my understanding that containers aren’t supposed to slow down processes by this much (I also ran another, more complex simulation with QEMU embedded, and that one ran 11 times slower in the container!). The CPU usage graphs have such different structures that I’m curious what is causing the difference. My first thought was that it has something to do with the messages being passed from the Adevs side to the emulator side, and those having to cross the kernel boundary. There is also a lot of threading going on to keep the emulator in step with the simulation, so maybe that could be an issue. Anyway, I thought I would get some expert opinions. Let me know if I should post any other info.
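One thing worth checking, assuming the QEMU build is using KVM for hardware-assisted virtualization: if `/dev/kvm` is not passed into the container, QEMU silently falls back to TCG (pure software emulation), which can easily account for a several-fold slowdown. A minimal sketch of the check (the image name `my-sim-image` is a placeholder for whatever image is actually used):

```shell
# On the host: confirm KVM is available at all.
ls -l /dev/kvm

# Inside a plain container the device is normally NOT visible,
# so QEMU would fall back to software emulation:
docker run --rm my-sim-image ls -l /dev/kvm

# Passing the device through makes KVM reachable again:
docker run --rm --device /dev/kvm my-sim-image ls -l /dev/kvm
```

If the timings converge once `--device /dev/kvm` is added, the slowdown was the emulator and not the container runtime itself.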