Docker Community Forums

Share and learn in the Docker community.

Connectivity to/from IPv6 container

I have a Docker host running on a CentOS7 VM with dual-stacked interface having both IPv4 and IPv6 addresses. For example:

192.168.1.200/24
fd00:192:168:1::200/64

I have enabled IPv6 on the Docker host, and created a user defined bridge network with an IPv6 subnet. For example:

docker network create \
  --ipv6 \
  --driver=bridge \
  --subnet=fd00:172:16:1::/64 \
  brv6-172:16:1::

I have created a CentOS7 container on this user defined bridge network and assigned it IPv6 address fd00:172:16:1::101.
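For reference, a container like that can be started and given a fixed IPv6 address at run time with the --ip6 flag; the container name and the command run inside it below are only placeholders:

docker run -d --name cv6-test \
  --network brv6-172:16:1:: \
  --ip6 fd00:172:16:1::101 \
  centos:7 sleep infinity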

From the Docker host, I can successfully ssh to the container via fd00:172:16:1::101. From the container, I can successfully ssh to the IPv6 address of the Docker host (fd00:192:168:1::200).

However, I have a standard (not a Docker host) CentOS7 VM on the same network as Docker host with IPs:

192.168.1.14/24
fd00:192:168:1::14

On this host, I have set up a static route for the user defined bridge network via the Docker host:

route -A inet6 add fd00:172:16:1::/64 gw fd00:192:168:1::200
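For reference, the same static route in iproute2 syntax (equivalent to the route command above):

ip -6 route add fd00:172:16:1::/64 via fd00:192:168:1::200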

Using the ping6 utility, I can confirm communication between this host and the Docker container. That is:

(CentOS VM) fd00:192:168:1::14 -> ping6 -> (Docker CentOS container) fd00:172:16:1::101 is successful

The other direction is also successful:

(Docker CentOS container) fd00:172:16:1::101 -> ping6 -> (CentOS VM) fd00:192:168:1::14 is successful

The problem comes when I try to ssh to/from the container; I get a permission denied error:

From the CentOS VM to Docker CentOS Container:
ssh: connect to host fd00:172:16:1::101 port 22: Permission denied

From the Docker CentOS Container to the CentOS VM:
ssh: connect to host fd00:192:168:1::14 port 22: Permission denied

I suspect the “Permission denied” is offering some clue, but I have not found it yet. I am looking for any assistance in achieving the desired connectivity.

Thanks,
Greg

I have resolved this problem by using https://github.com/robbertkl/docker-ipv6nat
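For anyone hitting the same issue, docker-ipv6nat runs as a container on the host network with NET_ADMIN and read-only access to the Docker socket; a rough sketch of the invocation (check the project README for the currently recommended flags) is:

docker run -d --name ipv6nat \
  --network host \
  --cap-add NET_ADMIN \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  robbertkl/ipv6nat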

Greg

In a nutshell, this is what we will do:

Create EC2 instances, each with an ENI attached.
Re-configure IPv6 addressing on the instance and install Docker.
Run a couple of Containers using only IPv6.
Create EC2 instances, each with an ENI attached
We will use the AWS CLI create-network-interface to create an ENI with a primary IPv6 address and also a contiguous block of IPv6 addresses for each one of our instances. These addresses will come from a known Subnet. We will also apply a Security Group to our ENI.

Subnet, Security Group and ENI
If you don’t have a VPC with IPv6 support already, please take a look at Getting Started with IPv6 for Amazon VPC, so you can store the IDs of the Subnet and Security Group in the variables subnetId and sgId.

subnetId=subnet-09a931730fa9exxxx
sgId=sg-0eaf439572982yyyy
For instance-1 we will reserve addresses ::1:1, ::8, ::9, ::a and ::b. I have removed the subnet prefix for ease of reading. The first address will be for the instance, and the other four will make up the /126 we need for the Linux bridge that the containers will be connected to.

2600:1f18:47b:ca03::1:1
2600:1f18:47b:ca03::8
2600:1f18:47b:ca03::9
2600:1f18:47b:ca03::a
2600:1f18:47b:ca03::b
For our instance-2 we will reserve addresses ::2:2, ::c, ::d, ::e and ::f.

2600:1f18:47b:ca03::2:2
2600:1f18:47b:ca03::c
2600:1f18:47b:ca03::d
2600:1f18:47b:ca03::e
2600:1f18:47b:ca03::f
With all this info we execute the create-network-interface command. We also need to store the ID of the ENI for the following operations, so we query NetworkInterface.NetworkInterfaceId and store the returned value in eni1 for instance-1.

eni1=`aws ec2 create-network-interface \
  --subnet-id $subnetId \
  --description "My IPv6 ENI 1" \
  --groups $sgId \
  --ipv6-addresses \
    Ipv6Address=2600:1f18:47b:ca03::1:1 \
    Ipv6Address=2600:1f18:47b:ca03::8 \
    Ipv6Address=2600:1f18:47b:ca03::9 \
    Ipv6Address=2600:1f18:47b:ca03::a \
    Ipv6Address=2600:1f18:47b:ca03::b \
  --query 'NetworkInterface.NetworkInterfaceId' \
  --output text`
You can check the value returned as follows.

$ echo $eni1
eni-08ba7c2f50a22a160
Repeat for the second ENI.

eni2=`aws ec2 create-network-interface \
  --subnet-id $subnetId \
  --description "My IPv6 ENI 2" \
  --groups $sgId \
  --ipv6-addresses \
    Ipv6Address=2600:1f18:47b:ca03::2:2 \
    Ipv6Address=2600:1f18:47b:ca03::c \
    Ipv6Address=2600:1f18:47b:ca03::d \
    Ipv6Address=2600:1f18:47b:ca03::e \
    Ipv6Address=2600:1f18:47b:ca03::f \
  --query 'NetworkInterface.NetworkInterfaceId' \
  --output text`
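To double-check both ENIs and their assigned IPv6 addresses before attaching them to the instances, you can query them back; the --query expression below is just one way to trim the output:

aws ec2 describe-network-interfaces \
  --network-interface-ids $eni1 $eni2 \
  --query 'NetworkInterfaces[].{Id:NetworkInterfaceId,IPv6:Ipv6Addresses[].Ipv6Address}'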

Verifying IPv6 with Docker involves the following steps:

Step 3.2.1: Enable ipv6 support for Docker

In the simplest terms, the first step is to enable IPv6 for Docker on Linux hosts. Please refer to “this link” [5]:

Edit /etc/docker/daemon.json
Set the ipv6 key to true:

{
  "ipv6": true
}

Save the file.

Step 3.2.1.1: Set up IPv6 addressing for Docker in daemon.json

If you need IPv6 support for Docker containers, you need to enable the option in the Docker daemon's daemon.json file and reload its configuration before creating any IPv6 networks or assigning containers IPv6 addresses.

When you create your network, you can specify the --ipv6 flag to enable IPv6. You can’t selectively disable IPv6 support on the default bridge network.
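As a concrete sketch (the network name and the documentation prefix are placeholders), a user-defined bridge with IPv6 enabled is created like this once the daemon option is in place:

docker network create --ipv6 \
  --subnet 2001:db8:1::/64 \
  my-ipv6-bridge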

Step 3.2.1.2: Enable forwarding from Docker containers to the outside world

By default, traffic from containers connected to the default bridge network is not forwarded to the outside world. To enable forwarding, you need to change two settings. These are not Docker commands and they affect the Docker host’s kernel.

Setting 1: Configure the Linux kernel to allow IP forwarding:
sysctl net.ipv4.conf.all.forwarding=1

Setting 2: Change the policy of the iptables FORWARD chain from DROP to ACCEPT:

sudo iptables -P FORWARD ACCEPT
These settings do not persist across a reboot, so you may need to add them to a start-up script.
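One common way to persist the kernel setting is a sysctl fragment; the file name below is arbitrary, and the iptables policy still needs its own start-up script or a tool such as iptables-services:

# write the setting to a fragment, then reload all fragments
echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee /etc/sysctl.d/99-docker-forwarding.conf
sudo sysctl --system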

Step 3.2.1.3: Use the default bridge network

The default bridge network is considered a legacy detail of Docker and is not recommended for production use. Configuring it is a manual operation, and it has technical shortcomings.

Step 3.2.1.4: Connect a container to the default bridge network

If you do not specify a network using the --network flag, your container is connected to the default bridge network by default. Containers connected to the default bridge network can communicate, but only by IP address, unless they are linked using the legacy --link flag.
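For example (the image and container name are placeholders), a container started without --network lands on the default bridge, which you can confirm by inspecting that network:

docker run -dit --name test1 alpine ash
docker network inspect bridge     # test1 appears under "Containers"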

Step 3.2.1.5: Configure the default bridge network

To configure the default bridge network, you specify options in daemon.json. Here is an example of daemon.json with several options specified. Only specify the settings you need to customize.

{
  "bip": "192.168.1.5/24",
  "fixed-cidr": "192.168.1.5/25",
  "fixed-cidr-v6": "2001:db8::/64",
  "mtu": 1500,
  "default-gateway": "10.20.1.1",
  "default-gateway-v6": "2001:db8:abcd::89",
  "dns": ["10.20.1.2", "10.20.1.3"]
}
Restart Docker for the changes to take effect.
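For example, after editing daemon.json you can restart the daemon and confirm that the default bridge picked up the new values:

sudo systemctl restart docker
docker network inspect bridge     # check the IPAM subnets and gateways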

Step 3.2.1.6: Use IPv6 with the default bridge network

If you configure Docker for IPv6 support (see Step 3.2.1), the default bridge network is also configured for IPv6 automatically. Unlike user-defined bridges, you cannot selectively disable IPv6 on the default bridge.

Step 3.2.1.7: Reload the Docker configuration file

$ systemctl reload docker
Step 3.2.1.8: You can now create networks with the --ipv6 flag and assign containers IPv6 addresses.

Step 3.2.1.9: Verify your host and Docker networks

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea76bd4694a8 registry:2 "/entrypoint.sh /e…" x months ago Up y months 0.0.0.0:4000->5000/tcp registry

$ docker network ls
NETWORK ID NAME DRIVER SCOPE
b9e92f9a8390 bridge bridge local
74160ae686b9 host host local
898fbb0a0c83 my_bridge bridge local
57ac095fdaab none null local
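To check whether a particular network actually has IPv6 enabled, docker network inspect exposes an EnableIPv6 field; the format string below is just one possible query:

$ docker network inspect -f '{{.EnableIPv6}} {{json .IPAM.Config}}' bridge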
Step 3.2.1.10: Edit /etc/docker/daemon.json and set the ipv6 key to true.

{
  "ipv6": true
}
Save the file.

Step 3.2.1.11: Reload the Docker configuration file.

$ sudo systemctl reload docker
Step 3.2.1.12: You can now create networks with the --ipv6 flag and assign containers IPv6 addresses using the --ip6 flag.

$ sudo docker network create --ipv6 --driver bridge alpine-net–fixed-cidr-v6 2001:db8:1/64

“docker network create” requires exactly 1 argument(s).

See “docker network create --help”
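For reference, that error is what docker network create prints when it does not receive exactly one positional argument. --fixed-cidr-v6 is a daemon.json key rather than a flag of this command; the per-network equivalent is --subnet, so a working form of the command above would be roughly:

sudo docker network create --ipv6 --driver bridge --subnet 2001:db8:1::/64 alpine-net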

Hi lewish95,
Please read my original post. I have already set up Docker for IPv6, created networks, etc. As described in my post, the problem was communication between a container on the Docker IPv6 network and a host elsewhere on my network.

Also, please see above where I replied to my own post after finding the solution using docker-ipv6nat.