I’ve set up a CoreOS host with a publicly routable /64 IPv6 subnet.
From within the host I have a global IPv6 address which is working fine, but I can’t get access to IPv6 hosts from within any Docker container.
These are the current settings for my Docker daemon:
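In essence, IPv6 is enabled and a fixed IPv6 CIDR is set, along these lines (the prefix here is a placeholder rather than my exact value):

$ dockerd --ipv6 --fixed-cidr-v6=2001:db8:1::/64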
You might need to pass a smaller subset of your /64 into the --fixed-cidr-v6 option, set sysctl net.ipv6.conf.default.forwarding=1 and sysctl net.ipv6.conf.all.forwarding=1, and run an NDP proxy announcing your smaller subnet.
Alternatively, you could give Docker its own /64 subnet (not the same subnet your eth0 is in) and set up a route for that subnet to your Docker host.
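Roughly, that second option looks like this (addresses and interface names are illustrative, not a definitive setup):

# at the provider / upstream router: route a dedicated /64 to the Docker host's address
$ ip -6 route add 2001:db8:2::/64 via 2001:db8:1::10

# on the Docker host: hand that /64 to Docker
$ dockerd --ipv6 --fixed-cidr-v6=2001:db8:2::/64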
With the --fixed-cidr-v6 parameter set Docker will add a new route to the routing table. Further IPv6 routing will be enabled (you may prevent this by starting Docker daemon with --ip-forward=false):
$ ip -6 route add 2001:db8:1::/64 dev docker0
$ sysctl net.ipv6.conf.default.forwarding=1
$ sysctl net.ipv6.conf.all.forwarding=1
Sounds like Docker will set up these options for me? And setting them manually doesn’t seem to make a difference.
The last part of the first paragraph seems to be optional info:
Often servers or virtual machines get a /64 IPv6 subnet assigned (e.g. 2001:db8:23:42::/64). In this case you can split it up further and provide Docker a /80 subnet while using a separate /80 subnet for other applications on the host
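For concreteness, that /80 split would look something like this (prefix taken from the documentation’s example; a sketch, not my actual config):

# 2001:db8:23:42::/64 split into /80 subnets, for example:
#   2001:db8:23:42::/80   kept for the host and other applications
#   2001:db8:23:42:1::/80 handed to Docker
$ dockerd --ipv6 --fixed-cidr-v6=2001:db8:23:42:1::/80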
And the next part about the NDP proxy sounds like it is only needed if I don’t have a complete /64 net:
If your Docker host is only part of an IPv6 subnet but has not got an IPv6 subnet assigned you can use NDP proxying to connect your containers via IPv6 to the internet.
So all in all I think I did everything stated in the docs.
The sysctl settings are active, even after a reboot. My routes are:
route -n
Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref    Use Iface
0.0.0.0         188.68.52.1     0.0.0.0          UG    0      0        0 eth0
0.0.0.0         188.68.52.1     0.0.0.0          UG    1024   0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0      U     0      0        0 docker0
188.68.52.0     0.0.0.0         255.255.252.0    U     0      0        0 eth0
188.68.52.1     0.0.0.0         255.255.255.255  UH    1024   0        0 eth0
ip -6 route show
2a03:4000:6:e0d0::/64 dev eth0 proto kernel metric 256
2a03:4000:6:e0d0::/64 dev docker0 metric 1024
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev docker0 proto kernel metric 256
default via fe80::1 dev eth0 proto static metric 1024
@programmerq do you know if there is anything obviously wrong with that routing table? I would guess it could be a problem that the same /64 (2a03:4000:6:e0d0::/64) is routed to both eth0 and docker0?
With 2607:5300:60:a7bc:2:242:ac11:3 being pingable from the host machine but the container not being able to reach the outside world, I finally gave the NDP proxy you mentioned above a try, and it did the trick for me:
$ sysctl net.ipv6.conf.eth0.proxy_ndp=1
and
$ ip -6 neigh add proxy 2607:5300:60:a7bc:2:242:ac11:3 dev eth0
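In case it helps, the container’s global IPv6 address can be looked up with docker inspect before adding the proxy entry (the container name here is just an example):

$ addr=$(docker inspect -f '{{.NetworkSettings.GlobalIPv6Address}}' mycontainer)
$ ip -6 neigh add proxy "$addr" dev eth0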
Fair enough. Maybe using an NDP proxy daemon (ndppd), like the one mentioned in the documentation, could do it as a workaround:
Linux has limited support for proxying Neighbor Solicitation messages, simply answering any messages where the target IP can be found in the host’s neighbor proxy table. To make this work you need to enable “proxy_ndp”, and then add each single host to the neighbor proxy table by typing something like:
ip -6 neigh add proxy <ip> dev <if>
Unfortunately, it doesn’t support listing proxies, and as I said, only individual IPs are supported. No subnets.
‘ndppd’ solves this by listening for Neighbor Solicitation messages on an interface, then querying the internal interfaces for that target IP before finally sending a Neighbor Advertisement message. You can create rules to query one interface for one subnet, and another interface for another. ‘ndppd’ can even respond directly to Neighbor Solicitation messages without querying anything, should you need that.
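A minimal ndppd.conf along the lines of that description might look roughly like this (the prefix and interface names are placeholders for whatever subnet the containers live in):

proxy eth0 {
    rule 2001:db8:1::/64 {
        iface docker0
    }
}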
I have used ndppd for testing, but it is less than ideal in my opinion. I’ve noticed a delay of a few seconds before newly created containers seem to have good IPv6 connectivity. This causes things to fail since not all packets seem to get through. Maybe I simply didn’t have ndppd configured quite right.
The best solution, in my opinion, is to give your Docker daemon an entire /64 subnet and have your Docker host act as a router for it. If you can get a /48, that seems to be a good, clean solution.
I finally made it work without using an NDP proxy: the solution for me was to use a second /64 and have it routed to my host machine’s IP, so basically the solution @programmerq outlined in his initial response.
So I guess if you only have a single /64 (the one your eth0 lives in), you’ll need to live with NDP proxies.
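For anyone reproducing this, a quick way to confirm IPv6 connectivity from inside a container (the image and target address are just examples):

$ docker run --rm busybox ping6 -c 3 2001:4860:4860::8888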