veth* interfaces conflict with the AWS CLI credentials endpoint

Expected behavior

When using a user-defined bridge network, I should still be able to reach the AWS security-credentials IP.

Actual behavior

While containers are running, I can't reach the instance-metadata IP, which is hardcoded into the AWS CLI tool.

Additional Information

Steps to reproduce the behavior

  1. docker network create mynet0 --gateway --subnet
  2. docker run --net mynet0 --name nginx nginx
  3. ifconfig
# a veth interface has been created whose broadcast address covers a range that includes the IP of the AWS CLI's METADATA_SECURITY_CREDENTIALS_URL
veth57124c7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        inet6 fe80::8c92:fdff:fed0:b194  prefixlen 64  scopeid 0x20<link>
        ether 8e:92:fd:d0:b1:94  txqueuelen 0  (Ethernet)
        RX packets 2  bytes 180 (180.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 54  bytes 6451 (6.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  4. AWS_DEFAULT_REGION=eu-west-1 aws --debug ecr get-login --no-include-email
2018-05-23 12:30:30,632 - MainThread - botocore.utils - DEBUG - Caught exception while trying to retrieve credentials: HTTPConnectionPool(host='', port=80): Max retries exceeded with url: /latest/meta-data/iam/security-credentials/ (Caused by ConnectTimeoutError(<botocore.awsrequest.AWSHTTPConnection object at 0x7fb523771e50>, 'Connection to timed out. (connect timeout=1)'))
  5. docker stop nginx
  6. AWS_DEFAULT_REGION=eu-west-1 aws --debug ecr get-login --no-include-email
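The conflict in the steps above can be checked directly: the question is whether the subnet handed to `docker network create --subnet` contains the instance-metadata address. A minimal Python sketch, assuming the standard EC2 instance-metadata address 169.254.169.254 (the value botocore uses) and two hypothetical subnets for illustration:

```python
import ipaddress

# Standard EC2 instance-metadata address, hardcoded in the AWS CLI / botocore.
METADATA_IP = ipaddress.ip_address("169.254.169.254")

def subnet_conflicts(subnet: str) -> bool:
    """Return True if a Docker network subnet would capture traffic
    destined for the instance-metadata endpoint."""
    return METADATA_IP in ipaddress.ip_network(subnet)

# Hypothetical subnets for illustration:
print(subnet_conflicts("169.254.0.0/16"))  # link-local range swallows the endpoint
print(subnet_conflicts("172.28.0.0/16"))   # private range, no conflict
```

A subnet outside the 169.254.0.0/16 link-local range keeps the bridge/veth broadcast domain from shadowing the metadata endpoint, so the credential lookup in step 4 can still leave the host.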