No Docker version runs stably on a loaded host

Here is the setup. My host runs the latest Debian (8.5 jessie, kernel 3.16.7). I'm currently using Docker 1.9.1 to host around 500 running containers. During the day some of these containers (~50-100) are terminated and new ones are launched in their place.
The problem is that about once a week I reproduce the network bug that was fixed in Docker 1.10:
Well, I tried both the 1.10 and 1.11 versions; the network issue went away, but a new one appeared: once every 1-2 weeks my host crashes unexpectedly, with symptoms very similar to those described in

I noticed that Docker Cloud uses Docker 1.9.1 as well (to be more precise, 1.9.1-cs3), and the fix that prevents the IP database corruption isn't included in that version either. I wonder whether Docker Cloud experiences the same issue?

Is there a combination of Docker version, OS, and kernel that anyone can recommend for a stable and reliable setup?

With 500 containers, the failure of a few (1-2) of them is expected, but 50-100 is too many. Use a cluster manager such as Kubernetes to run the containers. How are the containers being re-launched without a cluster manager?
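For what it's worth, a minimal sketch of what the suggestion above could look like as a Kubernetes Deployment (the `worker` name and image are placeholders, not from this thread): the Deployment controller keeps the declared replica count running and restarts failed containers automatically, which replaces the ad-hoc re-launching.

```yaml
apiVersion: apps/v1          # API group for Deployments in current Kubernetes releases
kind: Deployment
metadata:
  name: worker               # placeholder name
spec:
  replicas: 500              # desired count; the controller replaces failed pods
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: myorg/worker:latest   # placeholder image
```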

It seems I didn't describe my workflow clearly enough. The 50-100 containers are not terminating on their own; my application removes containers periodically and starts new ones. I just wanted to emphasize that my containers are not long-lived, they are constantly rotated.
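To make the rotation workflow concrete, here is a minimal sketch of the kind of remove-and-replace step the application performs (the container name and `worker` image are my assumptions, not from the thread; `DOCKER` is parameterized so the commands can be dry-run with `DOCKER=echo`):

```shell
# Which docker binary to use; set DOCKER=echo to dry-run the commands.
DOCKER=${DOCKER:-docker}

rotate_container() {
  name=$1
  # Remove the old container; ignore errors if it is already gone.
  $DOCKER rm -f "$name" 2>/dev/null
  # Start a fresh replacement under the same name.
  $DOCKER run -d --name "$name" worker
}
```

Each rotation therefore churns the network allocation state in the daemon, which is presumably why the IP-database bug surfaces so regularly here.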