Docker Performance Enlightenment

For the past few years, our business has deployed our Python code directly onto a server and run it manually through supervisor.

Over the past few months, though, I've gotten fed up with this workflow and have moved our code into containers for quicker and, hopefully, easier deployments.

One question still lingers in my mind, though, before we move to production with it:
Can it handle the load?

Right now, I've got a server the same size as the one we had before (I know, I know, we don't have Swarm set up yet. I'm working on it…). I deploy our containers onto that server behind a web server (nginx) and then scale the app up to three containers.
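In case the shape of the setup matters, it's roughly the sketch below (a docker-compose file; the service names, image, and ports are placeholders, and the actual nginx config isn't shown):

    version: "2.2"
    services:
      web:
        image: nginx:stable
        ports:
          - "80:80"
        volumes:
          # nginx.conf (not shown) proxies requests on to the app containers
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
      app:
        build: .          # our Python application image
        expose:
          - "8000"        # only exposed on the internal network, so it can scale

Then I bring it up and scale the app service with something like:

    docker-compose up -d --scale app=3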

I know that with our previous setup, the Python process would "take up", if you will, all the resources the server had to offer in order to fulfill a request, transaction, etc.

However, I think of containers as self-contained VMs that run whatever process is handed to them, which would restrict the amount each one can "take up" on the actual server the whole application is running on.
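For context, the only resource-related knobs I've come across are the explicit limit flags on docker run (or their compose equivalents), and I'm not setting any of them yet. Something like this is what I mean (the image and container names are made up):

    # no limits set: as far as I can tell, the process inside can use
    # whatever CPU and memory the host has free
    docker run -d --name app1 our-app

    # explicit limits: cap the container at 512 MB of RAM and one CPU
    docker run -d --name app2 --memory=512m --cpus=1.0 our-app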

Perhaps I'm missing some key concept of containerization, but I have a feeling that once our users start hitting the containerized server, the whole thing is going to fall apart.

Anybody care to enlighten me?

Also, can you recommend a way to "stress test" our application? I can do it on the staging server, which will be an almost identical copy of our production server.
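To be concrete, the rough sketch below (Python, using requests; the URL, endpoint, and numbers are all placeholders) is about the level of "stress test" I had in mind, unless there's a proper tool people would recommend instead:

    # Very rough load generator: fire a bunch of concurrent GET requests
    # at the staging box and report success count and rough throughput.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    STAGING_URL = "http://staging.example.com/health"  # placeholder endpoint
    TOTAL_REQUESTS = 1000
    CONCURRENCY = 50

    def hit(_):
        try:
            return requests.get(STAGING_URL, timeout=10).status_code == 200
        except requests.RequestException:
            return False

    start = time.time()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(hit, range(TOTAL_REQUESTS)))
    elapsed = time.time() - start

    print(f"{sum(results)}/{TOTAL_REQUESTS} ok in {elapsed:.1f}s "
          f"({TOTAL_REQUESTS / elapsed:.1f} req/s)")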