I want to build/host multiple static sites with docker-compose and serve them through an nginx reverse proxy. I also want to be able to build and deploy each site separately, avoiding building them along with the nginx image (multi-stage builds are good, but if I want to update one of those sites I’d end up re-building the entire nginx image for every updated site, and that’s not what I want).
Basically, I’m trying to create “throw-away” containers for each static site that build the site (npm run build) and “store” the artifacts (this part is actually not working) in a shared volume that then carries them to the nginx container.
# build stage: compile the static assets
FROM node:8.11.1-alpine as builder
USER node
RUN mkdir -p /tmp/app
WORKDIR /tmp/app
COPY ./package*.json ./
RUN npm install
COPY --chown=node . .
RUN npm run build
# final stage: serve the built assets with nginx
FROM nginx:1.13.12-alpine
WORKDIR /var/www/app
COPY --from=builder /tmp/app/dist/ /var/www/app/
I can see the Dockerfile is creating different artifacts (/js/app.${HASH}.js) on each build, but once the container runs with that named volume mounted, I always see the same old files in the volume… Is there any command I can use as an entrypoint, so that before this “data” container exits, it “provisions” the volume with the new artifacts?
There must be a better approach to this, for sure. I just can’t figure it out yet.
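In case it helps, this is roughly the kind of docker-compose.yml I’m aiming for (service, image and volume names are just placeholders):
docker-compose.yml:
version: "3"
services:
  nginx:
    image: nginx:1.13.12-alpine
    ports:
      - "80:80"
    volumes:
      - webroot:/var/www:ro    # nginx only reads the built sites
      # (my nginx config/vhost mounts are left out here)
  app1:
    build: ./app1              # the Dockerfile above
    volumes:
      - webroot:/var/www       # supposed to receive the dist/ artifacts
  app2:
    build: ./app2
    volumes:
      - webroot:/var/www
volumes:
  webroot:
The app1/app2 services are the “throw-away” containers; they only exist to get their build output into the webroot volume.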
There are a variety of tutorials, suggestions, and guidance on using nginx to serve multiple sites from a single node, including some on this forum. Here are a few links to some older sites I found using my favorite search site that you may want to review for ideas; a broader search using “docker nginx multiple” should reveal quite a few more. Some of these articles are a couple of years old, hence rely on older versions of the docker engine, nginx, and so forth, so YMMV.
Thanks guys, I probably should have explained myself a little better.
I want to build the artifacts of multiple SPAs into docker images.
Then, with docker-compose, pull them and extract the generated static assets (artifacts) to a shared volume (/var/www) that nginx must use (/var/www/app1, /var/www/app2, etc.) in order to serve each static site.
The problem I currently have is that the first time I mount the “app1” image, it correctly provides the volume with the artifacts generated at build time. But when I pull a new tag/version of that image, the volume is not empty anymore, so the old files in it take priority over the new ones coming in the new image.
If I’m right and that’s actually the problem, I should be able to empty or somehow overwrite those files every time I run docker-compose up -d --build app1.
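For reference, on the nginx side I’m picturing something like this (server names and paths are placeholders for however each site ends up being routed):
server {
    listen 80;
    server_name app1.example.com;
    root /var/www/app1;                    # populated from the app1 image
    index index.html;
    location / {
        try_files $uri $uri/ /index.html;  # SPA fallback
    }
}
server {
    listen 80;
    server_name app2.example.com;
    root /var/www/app2;
    index index.html;
    location / {
        try_files $uri $uri/ /index.html;
    }
}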
Oh, I see what you mean now. That is tricky. I’ve had this problem before. In this instance I’m pretty sure it would be as simple as running a docker-compose down -v before re-running the docker-compose up. That would drop the volume before pulling the latest tag for the apps (as long as you don’t have any other volumes that you do want persisted). When it comes back up, it will re-create the volume with the contents from the latest tag.
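Roughly (assuming the apps and the shared volume are declared in the same compose file, and nothing else in it needs to survive):
# remove the containers AND the named volumes declared in this compose file
docker-compose down -v
# pull (or rebuild) the updated app image
docker-compose pull app1    # or: docker-compose build app1
# the re-created, empty volume gets populated from the new image content
docker-compose up -d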
Another option is that you could have the app containers copy (or move) the files on top of the shared mount when they start up. That way, even if the old files are already there, each app container would just overwrite them when it starts up.
That’s interesting. docker-compose down -v won’t work because of what you said: I have other volumes that I do want persisted. Still, I’m not sure it would work, because the volume is still mounting files from the host, and those take priority over the container filesystem.
Another option is that you could have the app containers copy (or move) the files on top of the shared mount when they start up
That would work! But the question is: how can I automate it? I’ve read about docker cp, but I can’t do that inside a service’s command or entrypoint (can I?).
I’ve been googling a lot about how I can overwrite the files of that shared mount on container start-up, without success.
When you build the app Docker images you would put all of the static assets in a directory like /tmp/app1. The entrypoint for the app container would then be cp -R /tmp/app1 /var/www/app1. So something like:
App1.Dockerfile:
FROM node:8.11.1-alpine as builder
USER node
RUN mkdir -p /tmp/app
WORKDIR /tmp/app
COPY ./package*.json ./
RUN npm install
COPY --chown=node . .
RUN npm run build
FROM nginx:1.13.12-alpine
WORKDIR /var/www/app
COPY --from=builder /tmp/app/dist/ /tmp/app1
ENTRYPOINT "cp -R /tmp/app1 /var/www/app1"
When the App1 container runs, it will have the shared mount at /var/www and will overwrite the contents of /var/www/app1 because of the entrypoint.
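To roll out a new version of one site you would then rebuild and restart only that service, e.g. (assuming the compose services are called app1 and nginx):
docker-compose build app1
docker-compose up -d app1                       # runs the cp entrypoint, refreshing /var/www/app1
docker-compose exec nginx ls /var/www/app1/js   # the new hashed bundle should be visible to nginx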
I just had another idea: if you don’t mind giving the CAP_SYS_ADMIN capability (I think that’s the one) to the app containers, you could bind mount the app files onto the shared mount in the app container’s entrypoint. That would avoid the overhead of running a full copy of all your static assets from the container to the shared mount. It would mean that the app container has to stay running, though, because the mount command is running inside the app container. It would look something like this:
App1.Dockerfile:
...
FROM nginx:1.13.12-alpine
WORKDIR /var/www/app
COPY --from=builder /tmp/app/dist/ /tmp/app1
ENTRYPOINT "mount -B /tmp/app1 /var/www/app1 && while true; do sleep 1; done"
I think you would also need to mount the /var/www volume as rshared (to propagate the bind mount).
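On the compose side, the relevant bits for the app service would look roughly like this (an untested sketch; webroot stands for whatever your shared volume is called, and the rshared propagation still has to be sorted out separately):
  app1:
    build: ./app1
    cap_add:
      - SYS_ADMIN            # lets mount run inside the container
    volumes:
      - webroot:/var/www     # the shared webroot volume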
This method is probably the best solution to this problem. One thing to note about the cp entrypoint, though: cp -R /tmp/app1 /var/www/app1 should be cp -RT /tmp/app1 /var/www/app1; otherwise, once /var/www/app1 already exists, the new files end up nested in /var/www/app1/app1 instead of replacing the old ones.