Running multiple docker containers with UFW and "--iptables=false"

Recently I found this article: The dangers of UFW + Docker
While I thought I was on the safe side all the time, it turned out I was not…
So I have enabled DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --iptables=false" (in /etc/default/docker) and DEFAULT_FORWARD_POLICY="ACCEPT" (in /etc/default/ufw), and tried to set up a WordPress test environment with multiple containers:

  • nginx-proxy
  • nginx
  • wordpress
  • mariadb
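For reference, on Ubuntu 14.04 those two settings live in separate files; a sketch (assuming the stock file locations, with the values quoted above):

```
# /etc/default/docker
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --iptables=false"

# /etc/default/ufw
DEFAULT_FORWARD_POLICY="ACCEPT"
```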

Now when I don’t run the nginx-proxy with the “net: host” option set, the X-Forwarded-For IP addresses are not logged in the access log; only the docker0 address 172.17.0.1 shows up.

Next, the WordPress container is unable to access the internet, e.g. to reach api.wordpress.org (66.155.40.202), resulting in an error in the WP web GUI.

Here is the docker-compose.yml I’m using:

proxy:
  image: jwilder/nginx-proxy
  container_name: proxy
  net: host
  volumes:
    - ./nginx-proxy/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro
    - ./nginx-proxy/certs/:/etc/nginx/certs
    - /var/run/docker.sock:/tmp/docker.sock
  ports:
    - "80:80"
    - "443:443"
  restart: always

web:
  image: nginx:latest
  container_name: web
  links:
    - wordpress:wordpress
    #- db:mysql
  volumes:
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    - ./nginx/sites-enabled/example.com.conf:/etc/nginx/sites-enabled/example.com.conf
    - ./nginx/includes/:/etc/nginx/includes/
    - ./nginx/ssl/:/etc/nginx/ssl/
    - ./nginx/cache/:/usr/share/nginx/cache
    - ./nginx/logs:/var/log/nginx
  volumes_from:
    - wordpress
  environment:
    VIRTUAL_HOST: example.com
    VIRTUAL_PORT: 443
    VIRTUAL_PROTO: https
  extra_hosts:
    - "api.wordpress.org:66.155.40.202"
  restart: always

wordpress:
  image: wordpress:fpm
  container_name: wordpress
  links:
    - db:mysql
  volumes:
    - ./wordpress/html/:/var/www/html
    - ./wordpress/www.conf:/usr/local/etc/php-fpm.d/www.conf
    - ./wordpress/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
  environment:
    - WORDPRESS_DB_PASSWORD=p4ssw0rd
  extra_hosts:
    - "api.wordpress.org:66.155.40.202"
  restart: always

db:
  image: mariadb
  container_name: db
  volumes:
    - ./mysql/:/etc/mysql/conf.d
    - ./mysql_data/:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: p4ssw0rd
  restart: always

What am I doing wrong or overlooking?
Setting “net: host” for every container doesn’t feel like the right solution, but what is…?
Any hints / clues / solutions welcome. TIA!

Docker sets up port forwards using iptables. What are you trying to accomplish by doing --iptables=false? That’ll disable all the NAT stuff required to make your containers be able to route out.

Thanks Jeff. That was quick!

I use “UFW” on Ubuntu 14.04 to set firewall rules, but as the article states, Docker tampers with iptables before UFW comes into play.
I thought I had only allowed access to ports 80 and 443 by adding specific rules with UFW, until I saw completely different IP addresses appear in the logs when running without --iptables=false and the forward policy set.

So it looks like UFW is not the right tool to restrict access to a public container (e.g. at DigitalOcean) by IP address and port(s), but what is the best practice then?

Which article are you following? Off the top of my head, I’m not aware of any problems using ufw and docker on the same system.

The one I mentioned in the first line of the first post: http://blog.viktorpetersson.com/post/101707677489/the-dangers-of-ufw-docker

Ah, sorry I missed that.

There is this note in the official docs about using UFW and docker together on an ubuntu machine:

https://docs.docker.com/engine/installation/linux/ubuntulinux/#enable-ufw-forwarding

I removed the DOCKER_OPTS line from /etc/default/docker and simply rebooted. All containers came up automagically.
Next I ran a port scan with nmap from a non-allowed host: no ports open, neither 80 nor 443. From an allowed host, 22/80/443 are open.
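The scan was roughly this (a sketch; the target placeholder and port list are assumed from the description above):

```
# From a host NOT in the ufw allow-list: expect all three ports filtered.
nmap -Pn -p 22,80,443 <ipaddressofmachine>

# From an allowed host: expect 22, 80 and 443 to show as open.
nmap -Pn -p 22,80,443 <ipaddressofmachine>
```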

Looks like that solved the issue. Thanks, Jeff!

FYI - it only works with the nginx-proxy in front. When I drop that container and make nginx itself internet-facing, ports 80 and 443 are world-accessible again. :frowning:

Not as solved as I wished…
Using iptables-save and comparing the results, it seems Docker is adding these iptables rules for the nginx container:

-A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 80 -j MASQUERADE

-A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.4:443
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.4:80

-A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT

No rules are set or added for the jwilder/nginx-proxy container (because of “net: host”?).
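A quick way to pull only the Docker-managed rules out of an iptables-save dump for such a comparison (a sketch; a sample dump is inlined here so the commands run without root — on the real host you would pipe `sudo iptables-save` instead):

```shell
# Sample iptables-save output, inlined for illustration.
cat <<'EOF' > /tmp/iptables-dump.txt
-A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.4:80
-A ufw-user-input -s 91.0.0.1/32 -p tcp -m tcp --dport 443 -j ACCEPT
EOF

# Keep only the rules Docker manages: the DOCKER chain entries plus the
# MASQUERADE rules in POSTROUTING; ufw's own chains are filtered out.
grep -E '^-A (DOCKER|POSTROUTING)' /tmp/iptables-dump.txt
```

Diffing two such filtered dumps (with and without a container running) makes it easy to see exactly which rules Docker added.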

Using --net=host effectively tells docker “hey don’t do any containerization/isolation of the network stuff”, and the processes in that container just see your host’s network interfaces.

OK - thanks for the explanation.
But I can’t use --net=host / “net: host” for the nginx container when using links. It ends up with this error:

ERROR: Conflicting options: host type networking can't be used with links. This would result in undefined behavior

So how do I / would you deal with that situation?

Right-- when you use --net=host, you are effectively disabling all of docker’s networking features for that container. You shouldn’t need to do that for the nginx proxy.
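In other words, the proxy service from the compose file above could drop net: host and rely on its published ports alone (a sketch; the volume mounts are trimmed to the essential docker.sock mount):

```yaml
proxy:
  image: jwilder/nginx-proxy
  container_name: proxy
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  ports:
    - "80:80"
    - "443:443"
  restart: always
```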

Sorry if this is getting a drag…
But when I disable “net: host” for the nginx-proxy, the ufw rules I have set are overruled by Docker altering iptables. Rules like this lose their effect:

ufw allow in on eth0 from 91.x.y.z to any port 443 proto tcp

Ports 80/443 are no longer blocked and we’re back to square one.
Docker adds ACCEPT rules well BEFORE the “ufw-user-input” lines, allowing traffic to ports 80 & 443 before it can be blocked later in the chain.

I’ve created a gist of iptables-save to illustrate the behavior above:

This is the result of running without --net=host set, but with 6 ufw rules added to allow 22/80/443 from 2 specific IP addresses.
The Docker rules manipulating access to ports 80 & 443 are at lines 11, 12, 14, 15, 80 & 81; the user-added rules are at lines 126-131.

As far as I can summarize the situation, it basically comes down to this (quoting):

the default behavior in Docker is desirable, as you want to expose 80 and 443 to the world.

Source: Redirecting…

So you will not be able to block Docker exposed public ports using UFW, right?
Maybe head over to GitHub/:whale: and create a new issue for this…

Okay,

I installed docker and ufw on an ubuntu 14.04 machine.

Per the note on ufw in the docs here: https://docs.docker.com/engine/installation/linux/ubuntulinux/#enable-ufw-forwarding, I edited /etc/default/ufw to have DEFAULT_FORWARD_POLICY="ACCEPT"

I then ran a container with a published port:

docker run -d -p 80:80 --name nginx nginx:alpine

I was able to run curl against it both from that machine and from my local workstation, with the same result:

$ curl <ipaddressofmachine>
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

I did all of this without --net=host. ufw and docker coexist quite well as long as you remember to turn on that default forwarding policy. You should be able to publish a port for any container (including the jwilder/nginx-proxy image).

Thanks for trying out that scenario, Jeff, but I don’t think I have made myself completely clear.
I think I know how to set things up like this, but what I need is to grant access to that nginx ONLY from specific IP addresses, i.e. to deny access to ports 80 & 443 from the rest of the world by default.
In the end I need a setup that only allows access to ports 80 & 443 from IP addresses 91.x.y.z and 83.z.y.x.
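Concretely, such a ruleset might look like this (a sketch, reusing the placeholder addresses above):

```
ufw default deny incoming
ufw allow in on eth0 from 91.x.y.z to any port 80 proto tcp
ufw allow in on eth0 from 91.x.y.z to any port 443 proto tcp
ufw allow in on eth0 from 83.z.y.x to any port 80 proto tcp
ufw allow in on eth0 from 83.z.y.x to any port 443 proto tcp
```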

One would think that UFW is the right tool for that job, but not if Docker tampers with these rules “invisibly”.
Running ufw status verbose does not show the Docker-added rules higher up in the iptables chains (iptables-save does, however) that open up ports 80 & 443 anyway.

The steps described in this article, “Running Docker behind the ufw firewall”, seem to do the trick.
So you need a third precaution on top of the first two:

  1. DEFAULT_FORWARD_POLICY="ACCEPT"
  2. DOCKER_OPTS="--iptables=false"
  3. Configure NAT in iptables
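Step 3 boils down to adding a nat table to /etc/ufw/before.rules so the containers can still route out with --iptables=false set (a sketch, assuming the default docker0 subnet 172.17.0.0/16; the exact snippet is in the linked article):

```
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
COMMIT
```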

Now blocking / denying with UFW seems to work as expected.

Downside of this solution is that nginx access-logging only logs the docker0 address 172.17.0.1 and no external addresses (even when X-Forwarded-For is set)… :cry:


Oh I see what you mean.

This is true for any iptables firewall approach, really. The reason is that Docker sets up NAT rules, so the filtering happens in the FORWARD chain.

This stackoverflow discusses that in a little bit more depth than I can: http://stackoverflow.com/questions/30769829/docker-ignores-iptable-rules-when-using-p-portport

As for how to implement that type of solution with ufw, I couldn’t say for sure, as my ufw experience is fairly limited.

I imagine you could tell ufw to set up some rules that happen on the FORWARD chain based on source ips and whatnot.
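For illustration, a raw iptables rule of that kind might look like this (a sketch, not ufw syntax; 91.x.y.z is a placeholder, and the rule drops forwarded traffic toward the bridge on port 80 from any other source):

```
iptables -I FORWARD -i eth0 -o docker0 -p tcp --dport 80 ! -s 91.x.y.z -j DROP
```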

How do we go about adding this to https://docs.docker.com/engine/installation/linux/linux-postinstall/#ip-forwarding-problems