IPv6 not working

I’ve set up a CoreOS host with a publicly routable /64 IPv6 subnet.
From within the host I have a global IPv6 address which works fine, but I can’t reach IPv6 hosts from within any Docker container.

These are my current Docker daemon settings:

DOCKER_OPTS=--dns --dns --ipv6 --fixed-cidr-v6='2a03:4000:6:e0d0::/64'

Adding --ip-forward=false or the Google IPv6 DNS servers doesn’t help either.

It only works if I add --net=host, so I think I’m missing a fundamental piece of network configuration.
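
For reference, a quick way to compare the two cases (using Google’s public DNS as a hypothetical reachable IPv6 target, and assuming ping6 is available in the image):

```shell
# Works: the container shares the host's network stack
docker run --rm --net=host ubuntu:latest ping6 -c 1 2001:4860:4860::8888

# Fails for me: default bridge network, despite the global IPv6 address
docker run --rm ubuntu:latest ping6 -c 1 2001:4860:4860::8888
```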

ifconfig from within an ubuntu:latest Docker container shows this:

eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          inet6 addr: 2a03:4000:6:e0d0:0:242:ac11:2/64 Scope:Global
          RX packets:19 errors:0 dropped:4 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1565 (1.5 KB)  TX bytes:676 (676.0 B)

What special configuration do I need to get this working?

You might need to pass a smaller subset of your /64 to the --fixed-cidr-v6 option, set sysctl net.ipv6.conf.default.forwarding=1 and sysctl net.ipv6.conf.all.forwarding=1, and run an NDP proxy announcing your smaller subnet.

Alternatively, you could give Docker its own /64 subnet (not the same subnet your eth0 is in) and set up routes for that subnet to your Docker host.
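
As a sketch, assuming a hypothetical second /64 (2a03:4000:6:e0d1::/64) that your provider routes to the host:

```shell
# 1. Have the provider (or upstream router) route the extra /64 to this
#    host, e.g. 2a03:4000:6:e0d1::/64 via the host's eth0 address.

# 2. Hand the whole routed /64 to the Docker daemon:
DOCKER_OPTS="--ipv6 --fixed-cidr-v6=2a03:4000:6:e0d1::/64"

# 3. Make sure the host forwards IPv6 between eth0 and docker0:
sysctl net.ipv6.conf.all.forwarding=1
```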

Way more details on getting IPv6 set up can be found here: https://docs.docker.com/engine/userguide/networking/default_network/ipv6/

I’ve read the docs, but this part:

With the --fixed-cidr-v6 parameter set Docker will add a new route to the routing table. Further IPv6 routing will be enabled (you may prevent this by starting Docker daemon with --ip-forward=false):

$ ip -6 route add 2001:db8:1::/64 dev docker0
$ sysctl net.ipv6.conf.default.forwarding=1
$ sysctl net.ipv6.conf.all.forwarding=1

sounds like Docker will set up these options for me? And setting them manually doesn’t seem to make a difference.
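
One way to double-check what the daemon actually did is to inspect the route and the sysctls directly (a sketch, using the same paths the docs mention):

```shell
# Should show the --fixed-cidr-v6 subnet routed via docker0
ip -6 route show dev docker0

# Should both print 1 once the daemon has enabled forwarding
sysctl net.ipv6.conf.default.forwarding
sysctl net.ipv6.conf.all.forwarding
```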

The last part of the first paragraph seems to be optional information:

Often servers or virtual machines get a /64 IPv6 subnet assigned (e.g. 2001:db8:23:42::/64). In this case you can split it up further and provide Docker a /80 subnet while using a separate /80 subnet for other applications on the host

And the next part about the NDP proxy sounds like it is only needed if I don’t have a complete /64 net:

If your Docker host is only part of an IPv6 subnet but has not got an IPv6 subnet assigned you can use NDP proxying to connect your containers via IPv6 to the internet.

So all in all I think I did everything stated in the docs :sweat:

What do your routing table and the current values of those sysctls look like?

The sysctl settings are active, even after a reboot. My routes are:

route -n
Kernel IP routing table
Destination   Gateway   Genmask   Flags   Metric   Ref   Use   Iface
                                  UG      0        0     0     eth0
                                  UG      1024     0     0     eth0
                                  U       0        0     0     docker0
                                  U       0        0     0     eth0
                                  UH      1024     0     0     eth0

ip -6 route show
2a03:4000:6:e0d0::/64 dev eth0 proto kernel metric 256
2a03:4000:6:e0d0::/64 dev docker0 metric 1024
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev docker0 proto kernel metric 256
default via fe80::1 dev eth0 proto static metric 1024

@programmerq do you know if there is anything obviously wrong with that routing table? I would guess the duplicate /64 routes on eth0 and docker0 could be a problem?
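
For what it’s worth, one way to avoid the duplicate /64 is to give Docker only a /80 slice of it, so longest-prefix matching sends only the container addresses via docker0 (the /80 below is a hypothetical choice):

```shell
# Docker gets a /80 carved out of the host's /64
DOCKER_OPTS="--ipv6 --fixed-cidr-v6=2a03:4000:6:e0d0:2::/80"

# Resulting routes no longer compete for the same prefix length:
#   2a03:4000:6:e0d0::/64   dev eth0
#   2a03:4000:6:e0d0:2::/80 dev docker0
```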

I ran into a similar problem; maybe this is of help:

My host has a publicly routable /64 IPv6 subnet assigned. I’ve added

DOCKER_OPTS="--dns --dns --fixed-cidr-v6=2607:5300:60:a7bc:2::/80 --ipv6"

to my Docker daemon settings to assign a /80 subnet to Docker. The Docker container ended up with:

eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:03
          inet addr:  Bcast:  Mask:
          inet6 addr: 2607:5300:60:a7bc:2:242:ac11:3/80 Scope:Global
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:738 (738.0 B)  TX bytes:508 (508.0 B)

with 2607:5300:60:a7bc:2:242:ac11:3 being pingable from the host machine, but the container unable to reach the outside world. I finally gave the NDP proxy you mentioned above a try, and it did the trick for me:

$ sysctl net.ipv6.conf.eth0.proxy_ndp=1
$ ip -6 neigh add proxy 2607:5300:60:a7bc:2:242:ac11:3 dev eth0
Hope that helps,

Thanks for the input, but a solution where I have to add each IPv6 address by hand is a no-go for me :sob:

Fair enough. Maybe using an NDP proxy daemon (ndppd), as mentioned in the documentation, could work as a workaround:

Linux has limited support for proxying Neighbor Solicitation
messages: it simply answers any messages where the target IP
can be found in the host’s neighbor proxy table. To make this work
you need to enable “proxy_ndp”, and then add each single host to the
neighbor proxy table by typing something like:

 ip -6 neigh add proxy <ip> dev <if>

Unfortunately, it doesn’t support listing proxies, and as I said,
only individual IPs are supported. No subnets.

‘ndppd’ solves this by listening for Neighbor Solicitation messages
on an interface, then query the internal interfaces for that target
IP before finally sending a Neighbor Advertisement message.

You can create rules to query one interface for one subnet, and
another interface for another. ‘ndppd’ can even respond directly to
Neighbor Solicitation messages without querying anything, should you
need that.
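
An ndppd configuration for the /80 setup in this thread might look roughly like this (subnet hypothetical; check the ndppd README for the exact syntax and options):

```shell
# Answer neighbor solicitations on eth0 for the Docker /80 by querying
# the kernel ("auto" rule), then start the daemon in the background
cat > /etc/ndppd.conf <<'EOF'
proxy eth0 {
    rule 2607:5300:60:a7bc:2::/80 {
        auto
    }
}
EOF
ndppd -d
```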

I have used ndppd for testing, but it is less than ideal in my opinion. I’ve noticed a delay of a few seconds before newly created containers get good IPv6 connectivity, which causes things to fail since not all packets seem to get through. Maybe I simply didn’t have ndppd configured quite right.

The best solution, in my opinion, is to give your Docker daemon an entire /64 subnet and have your Docker host act as a router for it. If you can get a /48, that seems like a good, clean solution.

I finally made it work without using an NDP proxy: the solution for me was to use a second /64 and have it routed to my host machine’s IP, so basically the solution @programmerq outlined in his initial response.

So I guess if you only have a single /64 routed, you’ll need to live with NDP proxies.