I would expect that, because I’ve been “sold” Docker for Mac, not “Docker under a different virtualization engine.” FitNesse tests that run under Docker on a Mac inside a VirtualBox VM work, but do not work “natively” because FitNesse cannot connect to the web interfaces exposed by the Docker container.
Documentation on how to use --net=host under OS X would be great. What documentation I’ve found so far leads me to believe that --net=host should work on the Mac the way it works on Linux.
What’s the advantage of “docker for mac” if it behaves differently than “docker for Linux?”
“Faster and more reliable: no more VirtualBox! The Docker engine is running in an Alpine Linux distribution on top of an xhyve Virtual Machine on Mac OS X or on a Hyper-V VM on Windows, and that VM is managed by the Docker application. You don’t need docker-machine to run Docker for Mac and Windows.”
I’m having difficulty finding any documentation from Docker that states that Docker for Mac is anything other than docker with a different VM…
Well, I guess since this issue isn’t going to be accepted as an issue or fixed, Docker for Mac is a non-starter for me.
I assumed “Docker for Mac” meant my Docker images that work out of the box on Linux would work on the Mac. They don’t, and when I report that they don’t, I’m told “that’s not the goal.” If that’s not the goal, then it’s not a solution for me.
I’m sorry that Docker for Mac doesn’t fit your needs. I hope that in the future other OSes can, like FreeBSD, provide a Linux-compatible syscall interface, allowing Docker to run truly natively instead of just through a VM.
Windows has been working towards this with the Windows Subsystem for Linux, but obviously for people who need Docker “NAO!” on Windows, a VM is the only option.
Sadly, I don’t know of any projects to do something similar for OSX.
I assumed “Docker for mac” meant my Docker images that work out of the box in Linux would work in Mac.
If you need Linux on your Mac, run a Linux VM in VirtualBox (using Vagrant, perhaps) and run Docker inside it. You should do this especially if you need to test your apps on a multi-host setup.
You can also run “Docker in Docker” on Docker for Mac, so you can run a swarm of Docker daemons, which might be enough of a test environment for your multi-host apps. Within the swarm, you can use Docker networks to isolate your services to only certain user-defined networks.
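A minimal sketch of that idea, assuming the official docker:dind image (the container names here are arbitrary, and the exact flags may vary by Docker version):

```shell
# Start two independent Docker daemons as privileged containers ("Docker in Docker").
docker run -d --privileged --name dind1 docker:dind
docker run -d --privileged --name dind2 docker:dind

# Run a workload against one of the inner daemons; the inner docker CLI
# talks to its own daemon over the container's local socket.
docker exec dind1 docker run --rm alpine echo hello
```

Each inner daemon has its own image cache and networks, so from your application’s point of view the pair behaves like two separate hosts.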
I’ve been using Docker for Mac since the public beta came out, and while it is still in beta, it has been a huge improvement over Docker Toolbox using VirtualBox VMs. I don’t think Docker for Mac is intended for deploying your apps, only for testing them on your laptop during development. For that purpose, Docker for Mac is a huge step in the right direction.
The latest update seems to work (or else what I was trying earlier didn’t). Mapping ports with -p now works flawlessly, so I can use Docker for Mac in the same manner as other developers using VirtualBox and a Linux distro.
Here’s an example of something that you could do with docker-machine that I haven’t been able to solve on Docker for Mac:
Take Kafka running on Zookeeper. The new producer for Kafka requires that you initially connect to a broker and then Zookeeper gives you a list of broker hostnames and ports to use for all other communication. This list is going to be hostnames that are resolvable inside of Docker, even if the initial connection was established on a published port on localhost.
With docker-machine, it was possible to set Kafka to advertise a hostname that worked both inside and outside of Docker, giving you the ability to publish to topics on a Dockerized Kafka broker from, say, a REPL, while not breaking communication with Kafka from other containers. This doesn’t seem possible with Docker for Mac, and Kafka is far from the only kind of distributed system that works on this kind of bootstrap discovery model.
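For reference, the docker-machine-era trick usually looked something like the following Compose fragment (the wurstmeister images and the 192.168.99.100 address are illustrative assumptions, not from this thread; the point is that the advertised hostname must resolve both on the host and inside other containers):

```yaml
# Hypothetical sketch; any Kafka image with equivalent settings would do.
version: "2"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      # Must resolve both on the host and in other containers, e.g. the
      # docker-machine VM's IP. With Docker for Mac there is no such
      # address, which is exactly the problem described above.
      KAFKA_ADVERTISED_HOST_NAME: "192.168.99.100"
      KAFKA_ADVERTISED_PORT: "9092"
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
```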
If you have --net=host, then any ports mapped with -p are not published; you seem to get one or the other on Docker for Mac. At least, that was the behavior a couple of releases ago. We have scripts to work around this, so I can’t verify that it hasn’t been fixed in the latest update.
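For what it’s worth, on Linux the daemon at least tells you that the two options conflict (a sketch; the exact wording and behavior depend on your Docker version):

```shell
# Host networking makes -p a no-op; recent Docker versions print a warning
# along the lines of:
#   WARNING: Published ports are discarded when using host network mode
docker run --rm --net=host -p 8080:80 nginx
```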
In my setup, I’m still using Docker Toolbox for Mac (i.e. VirtualBox as the docker-machine) and a workaround to get this behaviour is to establish a reverse SSH tunnel between Mac OS X and the docker-machine:
ssh -t -R8000:localhost:8000 docker@$(docker-machine ip dev)
After the tunnel is open, I can run “docker run -it --rm --net=host buildpack-deps:curl curl localhost:8000” and get the desired behaviour.
I know it is just a workaround… but it is there, just in case it could be useful for somebody.
You are welcome to try it on xhyve as well; I think it should work as-is.
It would be really nice if you documented that this does not work the way one would expect. Anyone who has read the marketing for Docker for Mac hears that the OS X host acts like the host; look at the -p and -v options for mapping.
--net=host simply does not behave the same way, and --net=host combined with -p host:container behaves in a VERY surprising way.
The very least you could do is document this behavior. It seems reasonable to warn anyone who actually uses these flags that they are not going to get what they would get on Linux.
I mean, yeah, that sounds reasonable. But I’m not actually connected with Docker myself. I’m just a random person who has invested a lot of time and interest into various emulation and virtualization projects, then worked at Google, and when I left, I no longer had Borg. So…
You might be able to find an official contact method to get in touch with someone, or possibly file a “bug” noting that the documentation isn’t clear about --net=host outside of Linux. (The same problem is going to manifest on Windows as well.)
I agree with the original poster. Quite simply, if something doesn’t work the same way, it should be documented as such. All the technical explanations in the world about why do not change the issue, nor the fact that the documentation misrepresents it. Funnily enough, I came here with the same problem, but on Linux, so I guess mine is different. Hope it’s all working well now anyway.
In my opinion, you should check your iptables rules. To reach the container’s port from outside, you must open the corresponding port on the host machine, for example port 80. With --net=host the container uses port 80 of your machine directly, so Docker does not create an iptables rule to redirect the host’s port 80 to the container, and distributions like CentOS ship firewall rules that block connections on those ports.
To allow a connection on a port: sudo iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT
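To check whether such a rule is what’s blocking you, inspecting the relevant chains first can help (requires root; the DOCKER chain is the one Docker itself manages):

```shell
# List the host's INPUT rules with counters to spot a REJECT/DROP entry
sudo iptables -L INPUT -n -v --line-numbers

# Docker's own port-publishing NAT rules live in the DOCKER chain
sudo iptables -t nat -L DOCKER -n -v

# On CentOS 7+ you can open the port via firewalld instead of raw iptables
sudo firewall-cmd --add-port=80/tcp --permanent && sudo firewall-cmd --reload
```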