Hi,
I am trying to dockerize FusionAuth in order to deploy it to Google Cloud Run. With the following Dockerfile it (almost) works:
# Use FusionAuth official image as the base
FROM fusionauth/fusionauth-app:latest
# Set environment variables for Cloud Run
# These will be overridden by Cloud Run environment variables in production
ENV DATABASE_URL=${DATABASE_URL} \
    DATABASE_ROOT_USERNAME=${DATABASE_USERNAME} \
    DATABASE_ROOT_PASSWORD=${DATABASE_PASSWORD} \
    DATABASE_USERNAME=${DATABASE_USERNAME} \
    DATABASE_PASSWORD=${DATABASE_PASSWORD} \
    FUSIONAUTH_APP_MEMORY=${FUSIONAUTH_APP_MEMORY} \
    FUSIONAUTH_APP_RUNTIME_MODE=${FUSIONAUTH_APP_RUNTIME_MODE} \
    FUSIONAUTH_APP_URL=http://localhost:9011 \
    SEARCH_TYPE=database
# Expose port 9011 (FusionAuth port)
EXPOSE 9011
FusionAuth runs on port 9011, and I can set the container port in the Cloud Run configuration to 9011, but:
Can somebody help me add Nginx? I tried the following:
# Use FusionAuth official image as the base
FROM fusionauth/fusionauth-app:latest AS fusionauth
# Stage 2: Add Nginx as a reverse proxy
FROM nginx:alpine
# Copy the FusionAuth files from the first stage
COPY --from=fusionauth /usr/local/fusionauth /usr/local/fusionauth
# Copy a custom nginx configuration file to map port 8080 to 9011
COPY nginx.conf /etc/nginx/nginx.conf
# Expose port 8080 for Cloud Run
EXPOSE 8080
# Set environment variables for Cloud Run
ENV DATABASE_URL=${DATABASE_URL} \
    DATABASE_ROOT_USERNAME=${DATABASE_USERNAME} \
    DATABASE_ROOT_PASSWORD=${DATABASE_PASSWORD} \
    DATABASE_USERNAME=${DATABASE_USERNAME} \
    DATABASE_PASSWORD=${DATABASE_PASSWORD} \
    FUSIONAUTH_APP_MEMORY=${FUSIONAUTH_APP_MEMORY} \
    FUSIONAUTH_APP_RUNTIME_MODE=${FUSIONAUTH_APP_RUNTIME_MODE} \
    FUSIONAUTH_APP_URL=http://localhost:9011 \
    SEARCH_TYPE=database
# Start FusionAuth and Nginx together
#CMD ["nginx", "-g", "daemon off;"]
CMD /usr/local/fusionauth/fusionauth-app/bin/start.sh && nginx -g 'daemon off;'
#CMD ["/usr/local/fusionauth/fusionauth-app/bin/start.sh", "nginx", "-g", "'daemon off;'"]
but I get: env: can't execute 'bash': No such file or directory.
I'm not really sure how to proceed. Please help.
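For the nginx.conf referenced above, a minimal sketch that listens on 8080 and proxies to FusionAuth on 9011 might look like this (the upstream address is an assumption based on the FUSIONAUTH_APP_URL setting; adjust for your setup):

```nginx
# Sketch of /etc/nginx/nginx.conf: forward Cloud Run's port 8080
# to FusionAuth listening on 9011 in the same container.
events {}

http {
    server {
        listen 8080;

        location / {
            # FusionAuth is assumed to listen on localhost:9011
            proxy_pass http://127.0.0.1:9011;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
```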
A simpler solution would be to use a separate container for the proxy, rather than having two services running in the same one.
As for the Dockerfile, I'll try just for the heck of it.
Know that the final image you're using, as the Dockerfile currently stands, is based on nginx, not on FusionAuth: you're only copying two directories from it, and the rest of the image comes from the last FROM step.
Hi, thanks for the reply.
I have to deploy it on a Google Cloud Run instance, so I can't have two containers.
Based on your comment, I added FROM fusionauth before the CMD; now it runs, but localhost:8080 still does not work.
FROM someimage AS stepone
...
FROM someimage AS steptwo
...
FROM someimage # FINAL IMAGE
When using a multi-platform build, only the final image is saved, and all the others are discarded.
This allows you to keep only some files from those images and save space.
If you just add FROM before your CMD line, then it's practically the same as your entire Dockerfile looking like this:
FROM fusionauth
CMD ...
Now, as for what you need: since you need two services running in the same image, you may want to use something with systemd, so that you can manage multiple processes.
Currently, your CMD is /usr/local/fusionauth/fusionauth-app/bin/start.sh && nginx -g 'daemon off;'
This means the nginx command will only run after the start.sh script exits, and the script exits only when the FusionAuth service ends.
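The difference can be sketched with a stand-in for the blocking start script (sleep here is a placeholder for FusionAuth's start.sh):

```shell
#!/bin/sh
# With `start.sh && nginx`, nginx waits until start.sh exits.
# Backgrounding with `&` lets the second command start immediately:
sleep 5 &                     # placeholder for the long-running service
service_pid=$!
msg="proxy starts immediately while the service keeps running"
echo "$msg"
kill "$service_pid"           # clean up the placeholder
```

An entrypoint script along these lines, backgrounding FusionAuth and keeping nginx in the foreground with exec nginx -g 'daemon off;', is one way to run both, though a process manager handles restarts and signals more robustly.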
Have you checked this out? Google may allow multiple containers; if this is possible for you, it'd save you a lot of trouble.
As for achieving that in a single container, I have yet to do something like that myself, so I don't know.
There seems to be some sort of bug with this thread: it does not appear on the latest page, nor does it send notifications, so I'm tagging @rimelek and @meyay so that the issue does not get lost.
I don't know what FusionAuth is, but I usually run each service isolated from the others, as @deanayalon suggested. It may seem difficult, but running a process manager in the container and configuring it can be difficult too if you are not familiar with containers, which seems to be the case, so I would recommend some links.
Recommended links to learn the basics and concepts:
Here is my tutorial about Linux signals, including using systemd in a container, which is not recommended and is the hardest among all solutions. systemd wasn't originally designed for containers, so if you need a process manager, use Supervisor or S6-init for example.
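As a sketch of the Supervisor route (the start.sh path is taken from earlier in the thread; the option names follow the usual supervisord conventions, so verify them against your Supervisor version):

```ini
; Sketch of /etc/supervisord.conf: run FusionAuth and nginx
; under one process manager so neither blocks the other.
[supervisord]
nodaemon=true            ; keep supervisord in the foreground as PID 1

[program:fusionauth]
command=/usr/local/fusionauth/fusionauth-app/bin/start.sh
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true
```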
Multi-stage, not multi-platform, but I assume it was not intentionally written as multi-platform
Where you deploy your containers is less important. If you can't run two services in separate containers using two different images, Kubernetes is definitely not for you yet, so first you should learn the concepts.
Update: Forget my last sentence, I didn't know Google Cloud Run ran Kubernetes.
Please do not embed nginx into the container. Google Cloud Run indeed creates Kubernetes Deployments, and the correct approach is to use a sidecar container, as described in the docs:
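A multi-container Cloud Run service along those lines might be declared like this (a sketch: the service name and container names are illustrative, so check the current Cloud Run YAML reference for the exact schema):

```yaml
# Sketch: Cloud Run service with nginx as the ingress container
# and FusionAuth as a sidecar reachable over localhost.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: fusionauth
spec:
  template:
    spec:
      containers:
        - name: nginx            # receives traffic on the service port
          image: nginx:alpine
          ports:
            - containerPort: 8080
        - name: fusionauth       # sidecar, listens on 9011 internally
          image: fusionauth/fusionauth-app:latest
```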