Docker Community Forums


Is Docker right for me?


(Speedpacer) #1

We have an in-house developed POSIX-C style “shared memory framework” that we use to run simulations of multiple computer models that communicate with one another using shared-memory semaphores.

There are three parts to it: the shared memory server; the models, which are derived from Matlab Simulink models, exported as generated C, and integrated with the server using wrappers; and prototype code that simulates the algorithms that monitor the models’ input/output and act and report on events (when one type of model fails, another kicks in to take its place).

It all runs on CentOS 6.9 and we use Subversion with svn+ssh. The simulations produce gigabytes upon gigabytes of output files, and the repositories themselves are around 30 GB. We looked at Vagrant to solve the “works on mine, not on yours” fiasco, but it didn’t seem to add much value beyond what Ansible was already providing, since we still needed this monster ~100 GB virtual drive.

Would Docker be a good candidate for this use case? If so, I’m trying to wrap my head around migrating over… some users are on Macs and others on Windows, and we need SSH keys, so I’d have to create a user account in the Docker container and pull their SSH keys from a Django REST API. And then I’m guessing all of the source code and data that’s written out during a simulation should go in a separate container/volume? How would I do that in a Dockerfile? I did some googling but couldn’t find an easy way, like reading the system environment variable %USERNAME% on Windows, $USER on Mac, etc.
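For example, I’m imagining a wrapper script along these lines — all of this is just a guess at how it might work, and `sim-image` and the volume name are placeholders:

```shell
#!/bin/sh
# Hypothetical launcher the users would run instead of calling docker directly.
# Git Bash / WSL on Windows set USERNAME; macOS and Linux set USER.
HOST_USER="${USER:-${USERNAME:-unknown}}"
echo "resolved host user: $HOST_USER"

# ...then, presumably, something like:
#   docker run -e HOST_USER="$HOST_USER" -v sim-data:/data sim-image
```

That way the username is resolved on the host and passed in at run time, instead of being baked into the image.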

Any advice will be greatly appreciated!


(Sam) #2

docker shines when you need to deploy many copies of the same system and want to minimize runtime platform issues

so, i would ask it a different way…

  1. what work is required to stand up ‘another’ runtime on a different platform?
    (setup, testing, and who does it: you, the user, …)
  2. how often do you need to do this?
  3. does your solution run on multiple different computers at once today?
  4. does docker support the platforms your users would use?
  5. can you make docker container(s) for the function(s)?

(Speedpacer) #3

I’m the DevOps engineer (and part-time developer) in a group of roughly 30 developers. I’ve skimmed the surface of Docker docs in the past but what has prompted me to take a more in-depth look into it was the huge performance hit our virtual machines took after the Meltdown and Spectre patches. I’m curious to see how Docker will perform in comparison. Our simulations can take anywhere from 1-10 hours depending on how many models we run and these recent updates have slowed it down by ~6x.

So I’ve installed it, added the development tools and other packages we use, created myself an account and some SSH keys, and checked out all of the repos. The entire container image came to about 43 GB.

Then I went to make a change to my .bashrc file with vim and couldn’t use the arrow keys, which led me down a ConEmu rabbit hole.

Anyway, I’m the one responsible for the web/subversion server, simulation servers and virtual machines, and the CM, patching, provisioning, auth, etc. I provide the VMs to the users and they check out their code and start developing and testing. Currently, I have the install of VirtualBox and the VM scripted and they install it from our website. Then I manage them remotely with Ansible.

The only platform is CentOS 6.9. We’re kinda stuck with that for compatibility reasons at the moment.

By making containers for the functions, do you mean one for the model integration, one for the algorithm developers and one for the shared memory server? Honestly, I’m still not 100% clear on the containerizing concept, at least in terms of how it would fit our use case.


(Sam) #4

i don’t think docker will help with the performance hit from those patches: containers run directly on the host kernel, so the Meltdown/Spectre mitigations still apply… (docker containers are really just host processes with namespace and cgroup isolation)…

due to the size, you might want to mount a volume to get access to the data rather than copy it into the container image.
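rough sketch of what i mean — the base image and package names are just guesses at your setup:

```dockerfile
# keep the image small: tools only, no repos, no output data
FROM centos:6
RUN yum install -y gcc make subversion vim && yum clean all
# mount point for a host directory or named volume holding the ~30 GB
# of repos and the simulation output -- supplied at run time, not baked in:
VOLUME /data
```

then run it like `docker run -v /path/on/host:/data sim-image` (or `-v sim-data:/data` for a named volume), so the image stays small instead of 43 GB.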

By making containers for the functions, do you mean one for the model integration …

yes…

containerizing is sometimes easy… database server here, web server there, redis server there, etc…
but sometimes you really only have ONE thing, and replicating it is a pain… docker can help there…
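for example, if your three pieces became three containers, the shared-memory part could maybe be handled with docker’s IPC-namespace sharing… something like this compose sketch — service and image names are made up, and check that your docker/compose versions actually support the `ipc` option:

```yaml
# shm-server owns the IPC namespace; the other two join it, so the
# POSIX shared memory and semaphores are visible to all three.
services:
  shm-server:
    image: sim/shm-server        # placeholder image names
    ipc: shareable
  models:
    image: sim/models
    ipc: "service:shm-server"
    depends_on: [shm-server]
  algorithms:
    image: sim/algorithms
    ipc: "service:shm-server"
    depends_on: [shm-server]
```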

but UI-bound apps and high-cpu/performance apps don’t really gain… you can’t go faster than the host’s processor.

docker is really about repeatable deployment and reduced customization in a platform-independent way…
(remember the java mantra: write once, run anywhere… or the same promise for virtual machines…) docker is actually getting pretty close… windows and mac are causing trouble in the implementation… but that’s not unexpected…