Question about Docker permissions, AWS, …

  1. After a Docker instance is created and accessed via the terminal, how do we control that only certain people have access to root, sudo, etc.? E.g., you created the user 'HT' for me. Do I just do a regular SSH login without needing to know there is a Docker layer?

  2. We normally set up Apache with virtual hosts, so that all clients deployed on the same Apache share port 443. My understanding is that if I have 3 Docker Apache instances installed on the same physical server, I need to assign a dedicated port to each of them; they cannot share port 443, right? That means you need an additional external IP to house port 443 for each, and then internally route to the ports? Or do you add another layer of reverse proxying with virtual hosts?

  3. How exactly does "instance replication" work in Docker? Basically the equivalent of how AMIs work in AWS. So far with the VM setup we have, we can take snapshots of a VM, but the snapshots can only be restored to the same physical server, so we still end up doing a lot of server (re)creation manually.

If a user has access to Docker, then they have full root access by default. There are authz plugins, but these are not commonly used or readily accessible today (though I say go for it if you're interested). The best strategy is for only admins who are trusted with full root access to have direct access to Docker, and to have anyone else who wants to run containers in your environment submit Dockerfiles / docker run commands / docker-compose.yml files to those admins for auditing.
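For example, here's a quick sketch of why direct Docker access equals root (the username alice is just a placeholder):

    # Granting a user direct Docker access is effectively granting root:
    sudo usermod -aG docker alice
    # Any docker-group member can then read root-only files on the host, e.g.:
    docker run --rm -v /:/host alpine cat /host/etc/shadow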

You could probably just run one Apache instance in a container, with vhosts configured the same way you do now, that forwards its ports 443 and 80 from the container network namespace to the host. It would sit on the same Docker network as the containers you want to route to and would use Docker's built-in DNS to forward to app0, app1, etc. by name. Those containers each have their own ports 80 and 443 (network namespace isolation), so forwarding requests for a given vhost to the downstream container by name should just work as long as they're on the same Docker network.
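As a rough sketch of that layout (image names and the conf path are placeholders; app0/app1 match the names above), the docker-compose.yml might look like:

    version: "3"
    services:
      gateway:
        image: httpd:2.4        # the one Apache that owns host ports 80/443
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./gateway-conf:/usr/local/apache2/conf   # vhost configs live here
      # app0/app1 publish no host ports; the gateway reaches them by service
      # name (http://app0/, http://app1/) via Docker's built-in DNS.
      app0:
        image: my-app0-image    # placeholder image
      app1:
        image: my-app1-image    # placeholder image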

Generally speaking, you do not take "snapshots" of containers the way you might be accustomed to with a VM (although it is possible to snapshot the filesystem with docker commit). Instead, you design your container architecture so that if one instance goes away (crashes, etc.) it can be restarted without issue. If you need to install additional software, you revise your Dockerfile, then re-build and re-deploy the containers. To deal with the stateful parts of a given app, you rig together your own solution using Docker volumes, S3, or whatever fits the app's needs. Re-building and re-deploying a container is far cheaper, time-wise, than re-building and re-deploying a VM, so containers can be treated much more ephemerally.
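For the AMI comparison, the flow looks roughly like this (registry URL, image tag, and volume name are all placeholders):

    docker build -t registry.example.com/myapp:1.0 .   # bake the software into an image
    docker push registry.example.com/myapp:1.0         # publish it once
    # On any other host -- unlike a VM snapshot, the image is portable:
    docker pull registry.example.com/myapp:1.0
    docker run -d --name myapp -v myapp-data:/var/lib/myapp \
        registry.example.com/myapp:1.0                 # app state lives in the named volume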

Thank you so much.
My question here is, for multiple Docker Apache instances:

How do SSL configurations get honored? For example, if we needed to disable certain SSL protocols or ciphers, can this be done at the Docker level, or would the host-based configuration take precedence?

My understanding is that we need SNI because port 443 is shared among multiple sites.
Apache determines which site an incoming request is trying to reach by evaluating the SNI,
so that it can serve the correct certificate.

If there is a way to assign distinct internal and external IPs to specific sites while still honoring the certificates, that would be even better than the current setup.

Thanks.

SSL configurations will be set the same way they always have been. Make sure the certs are accessible inside the Apache container (you could, say, create an independent Docker volume for them) and configure Apache to route via SNI. The Apache configuration will be the same as ever, except that you will docker build (COPY-ing in the conf) and docker run instead of editing the configuration file on the host and doing something like sudo service apache2 restart. Don't get too caught up in the fact that this is all happening in a container; it's just a normal UNIX process. Configure Apache mostly the same as ever, have it listen on ports 80 / 443 on the host via the --publish flag, and then have it route each vhost to its downstream container by DNS alias (e.g., a container named sitefoo would have a DNS entry for sitefoo that resolves correctly on the overlay or bridge network).
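Here's a sketch of one such vhost in the gateway's httpd.conf (assumes mod_ssl, mod_proxy, and mod_proxy_http are loaded; sitefoo and the cert paths are placeholders):

    <VirtualHost *:443>
        ServerName sitefoo.example.com
        SSLEngine on
        SSLCertificateFile    /certs/sitefoo.crt    # mounted from the certs volume
        SSLCertificateKeyFile /certs/sitefoo.key
        ProxyPreserveHost On
        ProxyPass        / http://sitefoo/          # sitefoo resolves via Docker DNS
        ProxyPassReverse / http://sitefoo/
    </VirtualHost>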

Depends on your use case. You can't give external IPv4 addresses to containers directly without some serious shenanigans, and with a vhost setup like this I'm not sure why you would want to (to access a site from the outside world, just go through the Apache gateway). The containers' "internal IPs" are managed by Docker and can be reached from other containers on the same Docker network.

To further clarify what I'm asking: can we have separate protocol configurations per Docker Apache container? For example:

Docker container 1: TLS 1.2 enabled with high ciphers.
Docker container 2: TLS 1.0, 1.1, and 1.2 enabled with common ciphers.

Sure, I don't see why not. You can configure Apache however you want; it doesn't matter whether you're running it in Docker or not.
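For instance (just a sketch; these cipher strings are illustrative, not recommendations), each container would simply COPY in its own mod_ssl settings:

    # container 1's ssl.conf: TLS 1.2 only, high-grade ciphers
    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
    SSLCipherSuite HIGH:!aNULL:!MD5

    # container 2's ssl.conf: TLS 1.0/1.1/1.2, broader cipher list
    SSLProtocol all -SSLv3
    SSLCipherSuite DEFAULT:!aNULL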