Optimal deployment workflow using docker-compose or alternatives?

Hello everybody!
I'm trying to find the best way to deploy an application in production multiple times with different variables on the same host, and I'm curious if there is a better workflow/approach. I just started working with Docker and it's mind-blowing for me. :smiley: BIG WOW

  • The application consists of multiple containers (http-server, database, …) that are orchestrated using docker-compose (ports, volumes, …)
  • Using the .env file, variables are passed into the docker-compose.yaml (ports, …)

docker-compose -p name1 up -d
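For context, here is a minimal sketch of what my setup looks like (the service names, images, and variable names like `HTTP_PORT` are just placeholders, not my real config):

```yaml
# docker-compose.yaml (simplified sketch)
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "${HTTP_PORT}:80"   # HTTP_PORT is read from the .env file
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: "${DB_PASSWORD}"   # also from .env
    volumes:
      - dbdata:/var/lib/postgresql/data     # named volume, prefixed per project
volumes:
  dbdata:
```

The matching .env sits next to the compose file and just contains lines like `HTTP_PORT=8080` and `DB_PASSWORD=secret`.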

Now, running this application is done by providing the -p flag to docker-compose. This way, new containers, networks, and volumes are created for each project, and, as long as the project name is unique, the mapping fits (does it always?).

After editing the .env file to map the http part of the application to a different port on the host, I can start a second instance just by using a different project name:

docker-compose -p name2 up -d
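Concretely, the only thing I change between the two deployments is the host port in .env (again, `HTTP_PORT` is a placeholder matching the sketch above):

```shell
# .env as used for project "name1"
HTTP_PORT=8080

# .env edited before starting project "name2" — only the host port changes
HTTP_PORT=8081
```

So both projects run the same compose file; they only differ in project name and in whatever the .env file said at the moment of `up`.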

Everything is working fine, even if I run

docker-compose -p name2 down
docker-compose -p name2 up -d

the volumes will be mapped correctly again (as long as I use the same project name).
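As far as I understand, this works because docker-compose prefixes every resource it creates with the project name, so the two projects never collide (the `dbdata` volume name below is the placeholder from my sketch above; the listed names are what I'd expect, not captured output):

```shell
# Compose names resources "<project>_<name>", so each project gets its own set:
docker volume ls --filter name=name1   # expected: name1_dbdata
docker volume ls --filter name=name2   # expected: name2_dbdata

# "down" without the -v flag keeps the named volumes around,
# so a subsequent "up" with the same project name re-attaches them
docker-compose -p name2 down
docker-compose -p name2 up -d
```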

I wonder if this is the intended way to deploy an application multiple times on the same host using Docker. Just changing the project name does the job and is very easy, but editing the .env file between deployments seems a bit “hacky”.

Are there better ways? I would be super happy for any hint in the right direction. :slight_smile: Maybe using Kubernetes? (even though it's just one host machine?) If the way described above is “correct”… I'm ashamed I didn't start using Docker earlier. Well, I am anyway.