
Complications in a NAT and proxy environment


(Sweharris) #1

My test lab is behind a NATting gateway, so the “native” IP address may be something like 172.16.1.161, but from my desktop I see it as 10.10.10.183, and only limited ports are allowed through. The servers are CentOS 7.4, fully patched, with Docker EE beta 2 downloaded yesterday.

Installing UCP is doable: use the native address for --host-address and the NAT address as a --san. (This was also required for the older UCP.) This allows the UCP install to complete and allows new servers to join the swarm.
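
In other words, an install command along the lines of the following sketch (the image tag here is just a placeholder for whatever the EE beta ships, and the addresses are the ones above):

$ docker container run --rm -it --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp:<beta-tag> install \
    --host-address 172.16.1.161 \
    --san 10.10.10.183 \
    --interactive

After the workers joined: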

$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
9mdjq875y1ogoqc79gxg8grkt *   master              Ready               Active              Leader
dlimfr7yj6ahj68g0xm5ozajl     minion1             Ready               Active
og5eqx6p2q1sxdcnl9gbwx4w3     minion2             Ready               Active
t4j8gb65hmq96fxj4ifywjn7k     minion3             Ready               Active

So far so good…

Now there are two immediate complications:
1: The client-certificate package. I downloaded this via my desktop, and all the values inside it refer to the outside “10.*” address. This means it will not work for machines inside the test environment: programs like kubectl try to reach out via the defined http(s) proxy servers, which refuse to connect. I had to manually edit env.sh and kube.yaml to refer to the internal addresses (see the sketch after this list). Now kubectl will talk nicely.

2: It seems the docker CLI tool does not take notice of the no_proxy environment variable. So when env.sh sets the DOCKER_HOST variable, the CLI tries to reach it via the https proxy settings and fails. In comparison, kubectl properly follows the no_proxy setting and will happily talk.
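
For point 1, the edit is essentially just swapping the outside address for the inside one in the bundle files, assuming the literal 10.10.10.183 is the only thing that needs changing; roughly:

$ sed -i 's/10\.10\.10\.183/172.16.1.161/g' env.sh kube.yaml

To illustrate point 2: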

$ env | grep -i proxy
HTTPS_PROXY=http://10.100.100.109:8080
HTTP_PROXY=http://10.100.100.109:8080
http_proxy=http://10.100.100.109:8080
https_proxy=http://10.100.100.109:8080
no_proxy=localhost,127.0.0.0/8,172.16.0.0/12

$ env | grep DOCKER_HOST
DOCKER_HOST=tcp://172.16.1.161:443

$ docker info
^C

$ DOCKER_HOST= docker info | head -2
WARNING: No kernel memory limit support
Containers: 66
 Running: 52
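
One stop-gap, assuming the CLI only picks the proxy up from those environment variables, would be to clear them for a single invocation while leaving DOCKER_HOST pointing at UCP, though the real fix is for the CLI to honour no_proxy:

$ HTTPS_PROXY= https_proxy= docker info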

Compare that to kubectl:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://172.16.1.161:6443
  name: ucp_172.16.1.161:6443_sweh
contexts:
- context:
    cluster: ucp_172.16.1.161:6443_sweh
    user: ucp_172.16.1.161:6443_sweh
  name: ucp_172.16.1.161:6443_sweh
current-context: ucp_172.16.1.161:6443_sweh
kind: Config
preferences: {}
users:
- name: ucp_172.16.1.161:6443_sweh
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl describe nodes | head
Name:               minion3
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    com.docker.ucp.collection=shared
                    com.docker.ucp.collection.root=true
                    com.docker.ucp.collection.shared=true
                    com.docker.ucp.collection.swarm=true
                    com.docker.ucp.orchestrator.kubernetes=true
                    com.docker.ucp.orchestrator.swarm=true

So even though the kubectl config has the server set to the 172.16 address, the no_proxy variable is clearly being obeyed. The docker CLI, by contrast, goes straight to the proxy:

$ strace docker info 2>&1 | grep connect
connect(3, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("10.100.100.109")}, 16) = -1 EINPROGRESS (Operation now in progress)
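
Re-running that trace with the proxy variables cleared should, I expect, show the connect() going straight to 172.16.1.161:443, which would confirm it is purely the proxy handling rather than TLS or routing:

$ HTTPS_PROXY= https_proxy= strace docker info 2>&1 | grep connect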

(Vivek Saraswat) #2

Thanks for the feedback on this. I will bring this back to our engineering team; let us know if you have any questions we can help with.