HTTPS configuration error

Hey!

I have an Angular application that runs in Docker with an nginx reverse proxy, a container with my back end code, and a container running my Angular app via ng serve.

I need to run the application over HTTPS, with no success so far. I think I may be doing something wrong but I don’t know where I’m failing.

The application is an Angular 5 application with a .NET Core back end.

The code is organized this way:

| .env
| docker-compose.yml
| Front/
|   package.json
|   Dockerfile
|   src/
| Back/
|   Web/
|     SaasManager.Web/
|       Dockerfile

You can find the content of my root docker-compose.yml here

In the Front/Dockerfile, I have:

FROM node:8.9
WORKDIR /var/www/data
CMD npm run docker

npm run docker is defined in my package.json and runs the following command:
yarn && ./node_modules/@angular/cli/bin/ng serve -dev --public-host host_url:443 --host --disable-host-check -ec

And in Back/Web/SaasManager.Web/Dockerfile, I have the following:

FROM microsoft/dotnet
CMD cd /app/Web/SaasManager.Web && dotnet publish -c Release -o ./out && dotnet ./out/SaasManager.Web.dll

When I run docker-compose up -d, I try to access https://my_url:443 in Chrome but I get a CONNECTION_REFUSED error. What did I do wrong?

I hope I didn’t miss something and wish you all a nice day

in your front dockerfile you didn’t expose 443.

and what port on the docker host should this map to? 443?

I’m sorry, this is my first Docker project, so forgive me if I don’t get everything right immediately. When you say that, do you mean the port between the containers?

think of a container as a sphere, around a bunch of code surrounded by a firewall. the code can talk OUTward,
but without some info, nothing can talk INward

so, your two parts, how do they talk to each other, and how does someone on the outside talk to your app?

it sounds like your front end app is a web page, with an angular app running in it. on some port (443 i guess)

and the front end accesses data in the backend thru some apis, database or whatever.

the bane of application installs has been that customers/users continually fiddle with the platform, the params, etc… the support is a nightmare…

imagine if I could install my app ONE WAY, and everyone can USE that without changing it on any system…

enter docker.

so, a docker container runs on some system which has apps already running… only ONE app can use port 443, but you might have 2 or more… the docker design allows you to map the container-defined port (443) onto the host at some free port, 8443 for example. the container doesn’t KNOW you did this, and has NO changes inside… in docker run that is -p 8443:443 (host_port:container_port)
YOU access the port at 8443, docker (networking) does the rest
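In compose terms, a minimal sketch of that same mapping (the service name is illustrative, the build path taken from the layout above) might look like:

```yaml
# sketch only -- the service name "front" is made up for illustration
services:
  front:
    build: ./Front
    ports:
      - "8443:443"   # host_port:container_port, same as docker run -p 8443:443
```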
your dockerfile says expose 4200, what is that?

now on to your data container. it has to ‘expose’ some ports too, for the front end app to access.
your dockerfile says expose 5001. the docker HOST doesn’t need to know about this port, because it’s only used by the front end container… but the app container needs a tiny bit more info… like, what is the IP address of the data container…

there are a bunch of different strategies for ip address discovery.

  1. start the data container, then docker inspect it to read the configured address, then pass --add-host name:ip to the app container for the data container name/ip address
  2. start the data container and --link to it from the app container
  3. use networks in the docker-compose.yml to be able to use names between the parts…

you would have to do the same config design things with 2 real systems…
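For option 3, a sketch of what the compose file could look like (the network and service names here are made up):

```yaml
# sketch: a user-defined network lets services resolve each other by name
services:
  front:
    build: ./Front
    networks:
      - appnet
  back:
    build: ./Back/Web/SaasManager.Web
    networks:
      - appnet   # "front" can now reach the back end as http://back:5001

networks:
  appnet:
```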

This is the port I use locally to access my app over HTTP. If I understand correctly, the virtual port is what will be mapped to the Docker port (443 in my case)?

This was a very very interesting answer, thank you for that.

well, httpS usually requires a certificate to make it work…

you can try

docker run -p 443:4200

to map the host 443 onto the container 4200

I use a Let’s Encrypt certificate, is that bad?

I tried to do as you said, to map the port 443 to 8443 but I always get the connection closed error. I EXPOSE 8443 and use the 443:8443 mapping. I’m sure I’m missing something but can’t put my finger on it.

your app is listening on 443 in the container…

so, your port mapping should be the other way.

expose 443

then -p 8443:443

the first number is the host port, the second number is the container port
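Put together, and assuming the app really does listen on 443 inside the container, the front end Dockerfile from above would gain one line (sketch):

```dockerfile
FROM node:8.9
WORKDIR /var/www/data
EXPOSE 443    # document the container port the app listens on
CMD npm run docker
```

then, without compose, you would start it with docker run -p 8443:443 <image> and browse to https://host:8443.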

Sorry, this is just me being bad at this, but how can I use docker run -p 8443:443 with docker-compose? I specified the ports in my config but this has not changed the error.

I exposed port 443 and used 8443:443 as the ports in my docker-compose.yml

docker compose help says

  - 8443:443

also see compose ‘expose’ for the service container port 5001

Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified
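In other words, a sketch of the two services (names illustrative, paths from the layout above) could be:

```yaml
services:
  front:
    build: ./Front
    ports:
      - "8443:443"   # published: reachable from the host on 8443
  back:
    build: ./Back/Web/SaasManager.Web
    expose:
      - "5001"       # internal only: reachable by other services, not the host
```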

That’s what I have in my configuration but I always get ERR_CONNECTION_CLOSED.

I can update the docker-compose.yml if you want

so… it sounds like the node app in the container is having trouble…

i personally would do this without compose first to make sure everything works. (and it makes sure you know how all this works)…
docker ps will show you running container ids
anyhow… you should be able to docker exec (front_end_containerid) ps -ef
to see what is running, and docker exec -it containerid bash to give you a command line to explore what is going on

maybe the node app didn’t start?

Executing docker exec xxx ps -ef for the front gives me (amongst other lines) this:
root 15 5 0 13:47 ? 00:00:00 sh -c yarn && ./node_modules/@angular/cli/bin/ng serve -dev --public-host my_host:443 --host --disable-host-check -ec

I think it means that the angular app started and is running. According to the logs, there is no error while running the yarn or npm docker commands.

and docker inspect containerid should show the expose and port mapping

I don’t know how to debug the node app…

i would put curl on the container as part of the build and use docker exec to try to pull content from inside the container on port 443, make sure that works
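A sketch of that (node:8.9 images are Debian-based, so apt-get should be available; the exec command below assumes the app listens on 443):

```dockerfile
# add curl during the build so you can probe the app from inside the container
FROM node:8.9
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /var/www/data
CMD npm run docker
```

then, once the container is up: docker exec <front_containerid> curl -vk https://localhost:443/ (-k skips certificate verification). If that fails too, the app itself isn’t listening on 443.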

Hmmm, inspecting the front end container gives:

"Ports": {
     "8443/tcp": "null"

did you put expose 443 in the front end dockerfile and rebuild that image?

Yes, I did remove and rebuild the image

and if you do docker inspect on the image do you see the expose?

Well I do, but this is not the same as my Dockerfile. I specified EXPOSE 443 yet I see:

"ExposedPorts": { 
    "8443/tcp": {}

EDIT: never mind, the port 443 is exposed, my bad

but not mapped during run time with the compose front end info

- 8443:443