Add nginx to Docker container

Hi,
I am trying to dockerize FusionAuth in order to deploy it to Google Cloud Run. With the following Dockerfile it (almost) works:

# Use FusionAuth official image as the base
FROM fusionauth/fusionauth-app:latest

# Set environment variables for Cloud Run
# These will be overridden by Cloud Run environment variables in production
ENV DATABASE_URL=${DATABASE_URL} \
    DATABASE_ROOT_USERNAME=${DATABASE_USERNAME} \
    DATABASE_ROOT_PASSWORD=${DATABASE_PASSWORD} \
    DATABASE_USERNAME=${DATABASE_USERNAME} \
    DATABASE_PASSWORD=${DATABASE_PASSWORD} \
    FUSIONAUTH_APP_MEMORY=${FUSIONAUTH_APP_MEMORY} \
    FUSIONAUTH_APP_RUNTIME_MODE=${FUSIONAUTH_APP_RUNTIME_MODE} \
    FUSIONAUTH_APP_URL=http://localhost:9011 \
    SEARCH_TYPE=database

# Expose port 9011 (FusionAuth port)
EXPOSE 9011

FusionAuth runs on port 9011, and I can set the container port to 9011 in the Cloud Run configuration, but I would like to put Nginx in front of it as a reverse proxy listening on port 8080.

Can somebody help me figure out how to add Nginx? I tried the following:

# Use FusionAuth official image as the base
FROM fusionauth/fusionauth-app:latest AS fusionauth

# Stage 2: Add Nginx as a reverse proxy
FROM nginx:alpine

# Copy the FusionAuth files from the first stage
COPY --from=fusionauth /usr/local/fusionauth /usr/local/fusionauth

# Copy a custom nginx configuration file to map port 8080 to 9011
COPY nginx.conf /etc/nginx/nginx.conf

# Expose port 8080 for Cloud Run
EXPOSE 8080

# Set environment variables for Cloud Run
ENV DATABASE_URL=${DATABASE_URL} \
    DATABASE_ROOT_USERNAME=${DATABASE_USERNAME} \
    DATABASE_ROOT_PASSWORD=${DATABASE_PASSWORD} \
    DATABASE_USERNAME=${DATABASE_USERNAME} \
    DATABASE_PASSWORD=${DATABASE_PASSWORD} \
    FUSIONAUTH_APP_MEMORY=${FUSIONAUTH_APP_MEMORY} \
    FUSIONAUTH_APP_RUNTIME_MODE=${FUSIONAUTH_APP_RUNTIME_MODE} \
    FUSIONAUTH_APP_URL=http://localhost:9011 \
    SEARCH_TYPE=database

# Start FusionAuth and Nginx together
#CMD ["nginx", "-g", "daemon off;"]
CMD /usr/local/fusionauth/fusionauth-app/bin/start.sh && nginx -g 'daemon off;'
#CMD ["/usr/local/fusionauth/fusionauth-app/bin/start.sh", "nginx", "-g", "'daemon off;'"]

but I get: env: can’t execute ‘bash’: No such file or directory.
I'm not really sure how to proceed. Please help.
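
For completeness, my nginx.conf is roughly the following (a minimal sketch; it only proxies the Cloud Run port 8080 to FusionAuth on 9011):

# nginx.conf sketch: forward traffic on 8080 to FusionAuth on 9011
events {}

http {
    server {
        listen 8080;

        location / {
            proxy_pass http://localhost:9011;        # FusionAuth default port
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}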

A simpler solution would be to use a separate container for the proxy, rather than having two services running in the same one.

As for the Dockerfile, I'll give it a try just for the heck of it.
Know that, as the Dockerfile currently stands, the final image you're using is based on nginx, not on fusionauth. You're only copying some directories from the FusionAuth stage; the rest of the image is based on the last FROM step.

As a result of my second paragraph, you're basing your image off of nginx:alpine, which does not have bash.
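
You can verify that quickly (just an illustrative check, not part of a fix):

# bash is not included in nginx:alpine, only BusyBox sh
docker run --rm nginx:alpine which bash   # prints nothing, exits non-zero
docker run --rm nginx:alpine which sh     # prints /bin/sh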

Hi, thanks for the reply.
I have to deploy it on a Google Cloud Run instance, so I can’t have two containers.
Based on your comment, I added FROM fusionauth before the CMD; now it runs, but localhost:8080 still does not work.

Then you have not understood my message

FROM someimage AS stepone
...

FROM someimage AS steptwo
...

FROM someimage     # FINAL IMAGE

When using a multi-platform build, only the final image is saved, and all the others are discarded

This allows you to only keep some files from those images, and save up space
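
For example (a rough sketch, the names are placeholders), only what you explicitly COPY from an earlier stage ends up in the final image:

FROM someimage AS stepone
RUN some-build-command                            # whatever produces /some/output in this stage
FROM alpine                                       # FINAL IMAGE
COPY --from=stepone /some/output /some/output     # only this path is carried over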

If you just add FROM before your CMD line, then it’s practically the same as your entire dockerfile looking like this:

FROM fusionauth
CMD ...

Now, as for what you need: since you need two services running in the same image, you may want to use something like systemd, so that you can manage multiple processes

Currently, your CMD is /usr/local/fusionauth/fusionauth-app/bin/start.sh && nginx -g 'daemon off;'

This means the nginx command will only run after the start.sh script ends, and the script ends only when the fusionauth service ends.
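
Just to illustrate the difference, a common workaround is a small wrapper script that puts one process in the background and keeps the other in the foreground (a rough, untested sketch; the paths are taken from your Dockerfile, and note that start.sh still needs bash, so on alpine you would also have to apk add bash):

#!/bin/sh
# start-all.sh (sketch): start FusionAuth in the background...
/usr/local/fusionauth/fusionauth-app/bin/start.sh &
# ...and keep nginx in the foreground so the container keeps running
exec nginx -g 'daemon off;'

and in the Dockerfile:

COPY start-all.sh /start-all.sh
RUN chmod +x /start-all.sh
CMD ["/start-all.sh"]

A real process manager is still cleaner, because this sketch does not restart FusionAuth if it crashes.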

I understand,
but I have no idea how to do it :confused:

Have you checked this out? Google may allow multiple containers; if this is possible for you, it'd save you a lot of trouble.

As for achieving that in a single container, I have yet to do something like that myself, so I don’t know

There seems to be some sort of bug with this thread: it does not appear on the latest page, nor does it send notifications, so I'm tagging @rimelek and @meyay so that the issue does not get lost.


I think this involves Kubernetes; I hadn't thought about that, it could be a nicer solution.

I don’t know what FusionAuth is, but you would usually run each service isolated from the others, as @deanayalon suggested. It may seem difficult, but running a process manager in the container and configuring it can be difficult too if you are not familiar with containers, which seems to be the case, so I would recommend some links.


Recommended links to learn the basics and concepts:


Here is my tutorial about Linux signals, including using systemd in a container, which is not recommended and is the hardest of all the solutions. systemd wasn't originally designed for containers, so if you need a process manager, use Supervisor or S6-init, for example.
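
Just to show what the process-manager approach looks like, a Supervisor setup is roughly this (an untested sketch; program names, paths and the config location are placeholders based on the Dockerfiles above):

; supervisord.conf sketch
[supervisord]
nodaemon=true                 ; keep supervisord in the foreground as the container's main process

[program:fusionauth]
command=/usr/local/fusionauth/fusionauth-app/bin/start.sh
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true

with something like CMD ["supervisord", "-c", "/etc/supervisord.conf"] in the Dockerfile after installing the supervisor package.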

Multi-stage, not multi-platform, but I assume it was not intentionally written as multi-platform :slight_smile:

Where you deploy your containers is less important. If you can’t run two services in separate containers using two different images, Kubernetes is definitely not for you yet, so first you should learn the concepts.

Update: Forget my last sentence, I didn’t know Google Cloud Run ran Kubernetes.

Topics can be muted, which you probably did accidentally. The topic works normally for me.


Oops, yes, I’m still stuck on my own multi-platform error lol

Please do not embed nginx into the container. Google Cloud Run indeed creates Kubernetes Deployments, and the correct approach is to use a sidecar container, as described in the docs:
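
For illustration only, a Cloud Run service spec with a sidecar looks roughly like this (an untested sketch; names and images are placeholders, check the docs for the exact format):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: fusionauth
spec:
  template:
    spec:
      containers:
      # ingress container: receives the Cloud Run traffic on 8080 and proxies it
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 8080
      # sidecar: FusionAuth, reachable from nginx via localhost:9011
      - name: fusionauth
        image: fusionauth/fusionauth-app:latest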


So I should have checked it before writing my quoted statement :slight_smile: I updated my post

Thanks a lot to everyone, I will go for the “sidecar container”, as suggested by @meyay.

If you want to help :slight_smile: