Docker Swarm on AWS: best way to connect to a dynamically defined RDS instance

Hi. I’m trying to figure out the best way to implement the following deployment model as part of a CI/CD pipeline. I’m deploying a simple webapp consisting of several containers, with an RDS DB used for persistence. There are several instances of the stack running in the swarm at any given moment (they are basically the results of CI/CD pipeline executions and correspond to different versions of the app). When we ran a containerized DB, this was fairly straightforward (the only hiccup being the configuration of the AWS LB, which we overcame). In the final setup, though, every time we deploy a new instance of the stack, we need to instantiate and wire it up to a new RDS instance spun up on the fly from a snapshot as part of the stack deployment flow.

The question is: AWS RDS is ‘external’ to the Swarm, and the FQDN of the RDS instance is known only at deployment time. I’m trying to find the best way to express this in my stack definition. In the application, the FQDN is referenced in webapp config files (i.e., a datasource.properties file). More than one container can refer to the same DB.
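To illustrate, the entry in question is just a JDBC-style URL with the endpoint FQDN baked in (the property names and DB engine here are illustrative):

```properties
# datasource.properties -- the RDS endpoint FQDN is embedded in the JDBC URL
datasource.url=jdbc:postgresql://mydb.abc123xyz0.us-east-1.rds.amazonaws.com:5432/appdb
datasource.username=appuser
```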

What’s the best way to implement this? One option is to define the DB URL in an environment variable and pass it to the containers at instantiation time. But I’d rather have the containers reference the RDS DB through a Docker ‘service’ name (same as we currently have for service-to-service integration), which would require updating Swarm DNS entries, and I’m not sure whether that’s even possible. Or maybe one of the reverse-proxying solutions (like Traefik) could sit in between.
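To make the first option concrete, a minimal sketch of what I have in mind (the service name, image, and the RDS_ENDPOINT variable are placeholders):

```yaml
# docker-compose.yml (v3) -- the webapp reads the RDS endpoint from its environment
version: "3.7"
services:
  webapp:
    image: mycompany/webapp:latest   # placeholder image
    environment:
      # substituted from the shell environment at `docker stack deploy` time
      DB_HOST: ${RDS_ENDPOINT}
      DB_PORT: "5432"
```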

Any suggestion is greatly appreciated.

Vlad

Maybe you can use docker config and pass the config in with docker service create. The config file would contain the RDS hostname:
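Roughly like this (the config name, file path, and service name are placeholders):

```sh
# write the RDS endpoint into a properties file and register it as a Swarm config
echo "datasource.url=jdbc:postgresql://${RDS_ENDPOINT}:5432/appdb" > datasource.properties
docker config create rds_datasource datasource.properties

# mount the config into the container at the path the webapp reads from
docker service create \
  --name webapp \
  --config source=rds_datasource,target=/app/config/datasource.properties \
  mycompany/webapp:latest
```

One caveat: Swarm configs are immutable, so each new stack deployment would need a freshly named config (or a `docker service update --config-rm … --config-add …` to swap it).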

I ended up going with environment variables. The ‘extra_hosts’ config entry would have worked perfectly, but unfortunately an RDS instance doesn’t have a permanent IP address, and Amazon says you should always use the endpoint FQDN.
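For anyone landing here later, the deploy-time wiring looks roughly like this (the instance identifier, stack name, and the BUILD_ID variable are placeholders from our CI setup, simplified):

```sh
# look up the freshly restored instance's endpoint FQDN (never its IP -- it can change)
export RDS_ENDPOINT=$(aws rds describe-db-instances \
  --db-instance-identifier "app-${BUILD_ID}" \
  --query 'DBInstances[0].Endpoint.Address' \
  --output text)

# docker stack deploy substitutes ${RDS_ENDPOINT} from the shell environment
# into the compose file, so the containers pick up the new endpoint
docker stack deploy -c docker-compose.yml "app-${BUILD_ID}"
```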