Docker Community Forums

Share and learn in the Docker community.

VMs to container planning


#1

I’m in the process of retiring my Server Core 2012 virtual machines and moving to containers on Linux with Docker, and I need some help with planning. I have a few questions I’d like expert advice on, and perhaps you can help.

I currently run 3 VMs on each server: one for web hosting, one for the database, and one for BIND (public name resolution). My thought is to run Mongo, BIND, and my app in separate containers on the host, then install the SSL certificates on the host so my app container can use them. My other thought is: why run BIND in a container at all?
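That layout could look something like the minimal compose sketch below. Image names, tags, ports, and paths here are assumptions for illustration, not your actual config:

```yaml
# Hypothetical layout: app, Mongo, and BIND each in their own container.
version: "3.7"
services:
  app:
    image: myapp:latest          # assumed name for your published .NET Core image
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - mongo
  mongo:
    image: mongo:4
    volumes:
      - mongo-data:/data/db      # keeps the database data outside the container
  bind:
    image: internetsystemsconsortium/bind9:9.16   # assumed image; any BIND image works
    ports:
      - "53:53/udp"
      - "53:53/tcp"
volumes:
  mongo-data:
```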

Next question: SSL certificates. I would like to keep them portable inside the app’s container, i.e. include the PFX file and somehow register it with OpenSSL. My certificates were already generated on the old system. I’m not sure what to do with SSL and my .NET Core apps; perhaps there’s already a proven method for this with containers.
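For .NET Core, one common pattern is to skip OpenSSL entirely: mount the PFX into the container and point Kestrel at it through ASP.NET Core’s standard configuration environment variables. A sketch, assuming hypothetical file paths and a placeholder password:

```yaml
# Hypothetical service definition; local cert path and password are placeholders.
services:
  app:
    image: myapp:latest                             # assumed image name
    ports:
      - "443:443"
    volumes:
      - ./certs/site.pfx:/https/site.pfx:ro         # mount the existing PFX read-only
    environment:
      - ASPNETCORE_URLS=https://+:443
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/site.pfx
      - ASPNETCORE_Kestrel__Certificates__Default__Password=changeit   # use a secret, not a literal, in production
```

Because the certificate rides along as a mount plus two config values, the same image stays portable across hosts.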

The last question is a .NET Core issue I didn’t think through well enough. On IIS I can write to wwwroot (things like uploaded pictures) and record them in the database. But I need to make them persistent, for when I move to Kubernetes and spawn more containers. My concern is that if I spawn a new container and a user uploads a picture, only the container that handled the upload will write the image into its own wwwroot, and the other spawned containers won’t have it. I do keep a copy of the image as base64 in Mongo.
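The usual fix for that is to mount the upload folder from a volume that all replicas share, instead of writing into each container’s own filesystem. On a single host a named volume is enough; across multiple Kubernetes or Swarm nodes you would need a network-backed volume (NFS or a cloud volume driver). A sketch, with the container path assumed:

```yaml
# Hypothetical shared-uploads volume; /app/wwwroot/uploads is an assumed path.
services:
  app:
    image: myapp:latest
    volumes:
      - uploads:/app/wwwroot/uploads   # every replica sees the same folder
volumes:
  uploads:                             # single-host named volume; use an NFS/cloud driver across nodes
```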

So where am I at currently?
I can now compose Mongo with static data storage that persists, using a compose file, setting its users, permissions, connection string, and network. I can publish and compose my app to connect to Mongo securely. Now I need to work on BIND and choose a Linux flavor, probably Ubuntu, although I’m a CentOS guy. I wasn’t able to get Docker Compose to install on CentOS; curl just downloaded an HTML error page.

Any help would be appreciated.


#2

@jkirkerx, I can’t speak to implementation specifics, but before you walk down the docker-compose road, have a look at Swarm. It will be much easier to maintain across multiple nodes, and it will help you manage your certificates (using Docker secrets or volumes). Swarm also implements a routing-mesh overlay network, so you don’t really need to run your own DNS server: just point all of your domains at all of your IPs and let Docker handle the rest (you might want to add something like Caddy or nginx as a reverse proxy that routes on the request). All of this lets you add each of your physical machines to the swarm without worrying about where apps get scheduled. Then you can scale up more easily and run rolling updates to your apps. Finally, expose your external ports on your reverse-proxy host and use service-name resolution internally.
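The steps above boil down to a handful of commands. This is a hedged walkthrough, not a copy-paste recipe: the IP address, secret name, and stack name are all placeholders.

```shell
# On the first node: turn it into a swarm manager (IP is a placeholder).
docker swarm init --advertise-addr 192.0.2.10

# On each additional machine: join using the token printed by `swarm init`.
docker swarm join --token <worker-token> 192.0.2.10:2377

# Store the certificate password as a Docker secret instead of an env var.
printf 'changeit' | docker secret create pfx-password -

# Deploy the whole stack from a compose file, then scale the app service.
docker stack deploy -c docker-compose.yml mystack
docker service scale mystack_app=3
```

Once deployed, Swarm’s routing mesh accepts a published port on any node and forwards it to a running replica, which is why the DNS side gets so much simpler.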

If none of that made any sense, sorry. I may not be understanding your use-case.


#3

It makes sense to me and that’s my end goal.

Didn’t know that level of tuning or operation existed.

Let me take some baby steps first. I’m gonna try to hand-build and deploy my app and database first today.


#4

Oh this was my first question. My bad.

I built an Ubuntu Bionic Beaver server with Docker last week. Got Mongo and BIND (named) working in containers using Compose, and got the static folders working to persist the data. I’m happy with the server and with getting off Server 2012 and virtual machines.

Just struggling with getting my .NET Core app working; that’s what I meant by baby steps.

But I will adopt your suggestion and work towards it when I get the next server up and running.

Thanks!!!