I’m currently running a single instance of Docker on a single VM. I’ve assigned Docker an IPv6 CIDR because I want to support both IPv4 and IPv6 connections. Here is my daemon.json:
{
"tls": true,
"tlsverify": true,
"tlscacert": "/etc/docker/ca.pem",
"tlscert": "/etc/docker/server.crt",
"tlskey": "/etc/docker/server-key.pem",
"ipv6": true,
"fixed-cidr-v6": "2001:19f0:6001:1c12::/80",
"hosts": ["127.0.0.1:2376", "10.10.6.10:2376", "fd://"]
}
So I have my HAProxy container with ports 80/443 exposed, and it has option forwardfor set so my backend servers get the client’s real IP address. This works great … for IPv4. The DNS for my websites points at the public address of the box itself, and Docker forwards IPv4 ports directly between the box’s public IP and the HAProxy container. For the public IPv6 address, however, the connection gets translated to IPv4, so in my nginx/apache logs I just get the internal IPv4 address of the Docker bridge, 172.17.0.1.
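My understanding of what’s happening (please correct me if I’m wrong): the userland docker-proxy accepts the inbound IPv6 connection on the host and opens a fresh IPv4 connection into the container, which is why the container only ever sees the bridge gateway address. A rough way to confirm this on the host, assuming the default bridge and a container run something like mine:

```shell
# Publish ports on the default bridge (roughly how my HAProxy container runs;
# image name and ports are illustrative)
docker run -d --name haproxy -p 80:80 -p 443:443 haproxy:latest

# The IPv6 listeners on the host belong to the userland docker-proxy process,
# not to a kernel DNAT rule:
sudo ss -6lntp | grep -E ':(80|443)'

# IPv4, by contrast, is handled by iptables DNAT in the DOCKER chain,
# which preserves the client source address end to end:
sudo iptables -t nat -L DOCKER -n
```

If the ss output shows docker-proxy bound to :::80 and :::443, that would explain the 172.17.0.1 entries in the logs.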
Now I realize I could just point the DNS AAAA record at the global IPv6 address of my HAProxy container, open up 80/443 on my IPv6 firewall, and then I’d get the client’s real IP address forwarded on. The trouble is that for this to be viable, I need a static IPv6 address on that container while it stays on the default network, and I have no idea how to do this. If I use --ip6 2001:19f0:6001:1c12::a100:a100:a100:1, I get the error: User specified IP address is supported on user defined networks only.
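For completeness, here is the user-defined-network route the error message points at; I’d rather avoid it, but this is roughly what it would look like. The network name, the IPv4 subnet, and the /96 carved out of my /80 are all my own placeholders:

```shell
# Create a user-defined bridge with an IPv6 subnet (a sub-range of the
# fixed-cidr-v6 from daemon.json; "web" and both subnets are illustrative)
docker network create --ipv6 \
  --subnet 172.18.0.0/16 \
  --subnet 2001:19f0:6001:1c12:0:1::/96 \
  web

# On a user-defined network, --ip6 is accepted and pins a static address
docker run -d --name haproxy --network web \
  --ip6 2001:19f0:6001:1c12:0:1::1 \
  -p 80:80 -p 443:443 haproxy:latest
```

As far as I can tell, --ip6 is only honored on networks created this way, which is exactly what the error says.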
I’ve already built a full provisioning tool in Ruby called Bee2, which simply creates container links and does everything on the default network. Is there really no way to pin a given IPv6 address to a given container without using a user-defined network? And why are IPv6 ports not bound directly into the container the way IPv4 ports are?
All I really want are the client’s real IPv6 addresses in the Apache/Nginx logs in the simplest way possible. What are my options at this point?