Docker Community Forums

Share and learn in the Docker community.

SSL for docker apps


I am new to Docker and I am looking for a simple guide on how to set up SSL on any Docker container I install from Docker Hub.

I installed Docker on my local server (Ubuntu Server 20.04.1) along with Portainer so I can manage the containers more easily, but I want all of my apps to use HTTPS.

I tried stunnel, but it seems too complicated for my skill level.

I am going to use all of the applications locally only.

Thank you

SSL is a protocol, and each application has its own way of configuring SSL/TLS. Normally you have or generate a private key, use it to create an SSL certificate, and then reference both in the app's configuration file.
So you should state which application you use.

DashMachine, for example. Each app might have its own way, but I am sure that if I understand the basics for a few of them, it will be easy to get them all on HTTPS.

Nowhere in the documentation of most of these apps is it explained how to enable HTTPS; that's why I think the approach should be almost the same everywhere.

Thank you

If it is a web app, I would use Nginx + Letsencrypt (Certbot) to secure the communication.

I am using them locally. I have read about something like 50 ways to encrypt the connection, but I am new and I need a step-by-step guide on how to use at least one of them. I just need something simple for local use with a self-signed certificate.

Thank you

If you are using them locally, you probably don’t need any more encryption, especially when you intend to use a self-signed certificate. It is probably secure enough as is.

If you expose it publicly, Letsencrypt is arguably easier to set up than the self-signed certificate route, and their guide is pretty beginner-friendly.
Start here:


I am not using a self-signed certificate because I don't know how to set one up; that is exactly what I am looking for: a guide on how to secure them with a self-signed certificate. Even inside my network I would still like to have HTTPS.
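For the record, the self-signed route asked about here can be sketched in a few commands, assuming OpenSSL is installed; the names myapp.local, selfsigned.*, and stunnel.pem are made-up examples:

```shell
# Generate a private key and a self-signed certificate valid for one year.
# -nodes leaves the key unencrypted so a service can read it without a passphrase.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout selfsigned.key -out selfsigned.crt \
  -days 365 -subj "/CN=myapp.local"

# Tools like stunnel expect the key and certificate combined into one PEM file.
cat selfsigned.key selfsigned.crt > stunnel.pem
chmod 600 stunnel.pem   # restrict permissions on the file holding the private key
```

Browsers will warn about the certificate being self-signed, but the connection is still encrypted.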

If you want to define several containers and also get them up and running, docker-compose is an efficient tool.

First, you need to kick things off with a config file (docker-compose.yml) that includes services for both Nginx and certbot.

version: '3'

services:
  nginx:
    image: nginx:1.15-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
  certbot:
    image: certbot/certbot

Next, you can use this basic configuration to redirect incoming requests to HTTPS. Just swap in your domain name where the example URLs appear, then save the file as data/nginx/app.conf.

server {
    listen 80;
    server_name example.org;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name example.org;

    location / {
        # your application's proxy settings go here
    }
}

Joining the dots

In order to validate domains, Let's Encrypt issues challenge requests that certbot answers with files served via the Nginx container. This takes a parallel approach to the one used by Google Search Console.

Volumes for both the validation challenges and the certificates need to be added within docker-compose.yml, to the nginx section and to the certbot section.
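As a sketch of what those volume mappings usually look like (the host paths under ./data/certbot are assumptions; the container paths /etc/letsencrypt and /var/www/certbot match the ones used elsewhere in this thread):

```yaml
services:
  nginx:
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt   # Nginx reads the certificates here
      - ./data/certbot/www:/var/www/certbot    # Nginx serves the challenge files
  certbot:
    image: certbot/certbot
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt   # certbot writes certificates here
      - ./data/certbot/www:/var/www/certbot    # certbot writes challenge files here
```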


Subsequently you will need to place this in data/nginx/app.conf:

location /.well-known/acme-challenge/ {
    root /var/www/certbot;
}
Now comes the time to bring the HTTPS certificates into play. Add the certificate, along with its key, to the port 443 server block, remembering to swap in your domain where appropriate:

ssl_certificate /etc/letsencrypt/live/;

ssl_certificate_key /etc/letsencrypt/live/;

Finally, add Let's Encrypt's recommended HTTPS settings to your config file to keep things consistent:

include /etc/letsencrypt/options-ssl-nginx.conf;

ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

Well, thank you, but what I am trying to explain is that I do not have a domain, so Let's Encrypt is useless to me.

Any more brilliant ideas?

I've been wondering whether anyone even reads what I am trying to do, or whether these are some kind of reply bots here.

Why do you want encryption when you only use your containers locally?

Because they are not actually at home; they are in my office and I access them from home via a site-to-site VPN. They are encrypted via the VPN, but I still want one more layer of encryption.

And because I want to know how to do it :slight_smile: Just for fun :slight_smile: All of the stuff developed within those containers is useless and a complete waste of time and resources, but learning something new is fun for me.

PS: Even if you have a firewall between your WAN and LAN and all of your inbound ports are closed, that doesn't mean you are protected. Even if attackers can't get inside your network, that doesn't mean software inside can't send data out to them. Much of the software developed in the past 10 years sends data out in the form of so-called telemetry. So encrypting everything inside your local network is not a bad idea.

Ah OK fair reason.

But as far as I know you need some kind of a domain name to be able to register certificates.

In my home environment I use a Synology NAS with Docker on it. I do have a domain name and I am able to manage my own DNS settings at my ISP (A record, CNAME records, etc.). I configured a separate CNAME record for each Docker container, and to redirect each CNAME to the corresponding container I use the reverse proxy feature of my Synology. I am also able to add a certificate to each CNAME URL. This combination makes it possible to connect over HTTPS to all containers.

The above features are also available via other options. Some examples: Traefik, SWAG, or NGINX; all of these are available as Docker containers.
At the moment I am experimenting with Traefik.

If you want to add a certificate to containers natively, you need to rebuild your containers and add a certificate to each one. If you create your own containers that is possible, but if you are dependent on other container builders it will be almost impossible.

Well, the way you encrypt data with Let's Encrypt has nothing to do with rebuilding the containers. Self-signed and Let's Encrypt certificates work the same way; both can be used, for example, to encrypt a web server. The only difference is those annoying browser warnings, nothing more. That's why it is more sensible to use Let's Encrypt when you expose your web server outside your local network.

Docker is built on top of the OS, which means traffic has to go through the OS in order to get out; that is where encryption usually takes place, not inside the container. I can see stunnel as a solution, but I have to keep learning how to configure it correctly.

It is just like WAN and LAN, where the WAN is your OS interface and the LAN is your Docker internal network. If I somehow place stunnel between them, I can forward ports through stunnel.

Let the fun begin :slight_smile:

Well, it didn't take me much time to figure it out. Now I encrypt everything with stunnel.

I said I was going to have much more than 15 minutes of fun.

I used my downstream pfSense firewall to run stunnel as a server, and the Ubuntu server where Docker lives as the stunnel client. Now all I have to do is add a configuration entry on the client and the server for each port I want to encrypt, and I will have HTTPS on any Docker container I want.
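As a sketch of such a pair of stunnel configurations (the service name, addresses, and ports here are made-up examples, not the poster's actual values): the server side terminates TLS and forwards plaintext to the app, while a client side does the reverse:

```ini
; Server side: accept TLS connections, forward plaintext to the app
[portainer-tls]
accept = 9443                  ; HTTPS port that browsers connect to
connect = 192.168.1.10:9000    ; plain-HTTP app behind stunnel
cert = /etc/stunnel/stunnel.pem

; Client side (only needed when a plain client must reach a TLS service):
;[portainer-tls]
;client = yes
;accept = 127.0.0.1:9000       ; local plaintext entry point
;connect = firewall.local:9443 ; TLS endpoint on the stunnel server
```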


I learned something new again.
I read the stunnel webpage to learn what features stunnel has. :grinning: :grinning:

From what I understand, it just forwards ports, and that way you can encrypt almost anything, especially web servers and web apps.

I use it with the Blue Iris security-camera software to encrypt the web UI, but that was on Windows; I had no idea how to do it on Linux, and now I know. On Windows you just install it, add four lines of configuration, and that's it.

Like this:

accept = 5000
connect = 81
cert = stunnel.pem

But on Linux I think you also need a server in order to forward ports to the client.

I am actually not sure about that; I am going to test it today or tomorrow.

That was actually the server configuration; the client configuration should look like this:

; ***************************************** Example TLS client mode services

; Encrypted HTTP proxy authenticated with a client certificate
; located in the Windows certificate store
;client = yes
;accept =
;connect =
;engineId = capi

; Encrypted HTTP proxy authenticated with a client certificate
; located in a cryptographic token
;client = yes
;accept =
;connect =
;engineId = pkcs11
;cert = pkcs11:token=MyToken;object=MyCert
;key = pkcs11:token=MyToken;object=MyKey

This is an “XY problem”.
Normally, in that case, I would use SSH, which can tunnel TCP connections securely any way you want.
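For example (the host name and ports are made-up), a single SSH command forwards a local port over an encrypted tunnel to a service on the Docker host, with no extra software on either side:

```shell
# -N: no remote shell, just forwarding
# -L: listen on local port 8443 and tunnel to port 9000 on the remote host
ssh -N -L 8443:localhost:9000 user@dockerhost
# Browsing http://localhost:8443 now reaches the app through the encrypted tunnel.
```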


I just followed this guide,

but I can still access Portainer, for example, over plain HTTP on port 9000.

ss -tunlp

tcp LISTEN 0 4096* users:((“docker-proxy”,pid=5179,fd=4))

How do I reconfigure docker-proxy to listen on localhost only?

Thank you
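One common way to keep docker-proxy off the external interface (not from this thread, but standard Docker behavior) is to bind the published port to the loopback address; Portainer's port 9000 is used as the example here:

```yaml
# docker-compose.yml sketch: publish the port on localhost only
services:
  portainer:
    image: portainer/portainer-ce
    ports:
      - "127.0.0.1:9000:9000"   # docker-proxy listens on 127.0.0.1 only
```

The same works on the command line with `docker run -p 127.0.0.1:9000:9000 …`.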


I found the solution for that; if anyone is interested, as I said, I can share it.

But not all of the apps work with this method.
I will use Homer as an example:

Do not expose any port when you install it.

But you need to know which port the app listens on, and you also need the container's IP address when you install it. And here is the stunnel configuration, in my case:

accept = 60023 ### The port you want to connect from outside ###
connect = ### the IP of the container and default port ###
cert = /etc/stunnel/stunnel.pem

!!! After you create and combine the certificate as shown in the tutorial, you must run "chmod 600" on it. It works either way, but when I execute "systemctl status stunnel4.service" it warns me that, for security, /etc/stunnel/stunnel.pem must be chmod 600.
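To fill in the connect line, the container's internal IP can be looked up with docker inspect; "homer" here is an assumed container name:

```shell
# Print the internal IP address of the container named "homer"
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' homer
```

Note that this IP can change when the container is recreated, unless it is pinned in the compose file or a user-defined network.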

stunnel seems easier to set up; I just couldn't find the right basic configuration to start from.

Now I will try something else, because these HTTPS settings are for inbound connections, and most of the apps use HTTP by default and probably send data somewhere, not only receive it. I will try to encrypt the outbound traffic as well,
so if someone is thirsty for my precious data, they will get gibberish without my decryption key :slight_smile:
I think I can encrypt the outbound once on Ubuntu, and this time I will encrypt it a second time on my firewall with sets of rules and stunnel.

Well, let's have some more fun.