
Cannot assign requested address on dockerized services

Guys, I need help.

I am getting SocketException: Cannot assign requested address. It happens when serviceB tries to access serviceA via Refit. If I run one of the services on IIS or in the Visual Studio debugger, everything works fine, but when I run both of them in Docker containers, the system throws that error.

It also happens when I use Ocelot. If the gateway service is on IIS or running in the debugger (separate from the other services) it works, but when I put it in a container with the other services I get the error.
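
For context, the two services are started roughly like this (image names, container names, and host ports are illustrative, not my exact commands):

# both services running as containers on the same Docker Desktop host
docker run -d --name servicea -p 5001:80 myapp/servicea
docker run -d --name serviceb -p 5002:80 myapp/serviceb

serviceB's Refit client then calls serviceA at the same base address that works outside Docker (something like http://localhost:5001 here, again illustrative).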

What could it be?

Windows 10
VS 2019 / VS Code
Docker Desktop
ASP.NET Core 3 Web API
Refit - https://github.com/reactiveui/refit
Ocelot - https://github.com/ThreeMammals/Ocelot

Thank you

Once you have Docker installed, you can pull the latest TensorFlow Serving Docker image by running:

docker pull tensorflow/serving

This will pull down a minimal Docker image with TensorFlow Serving installed.

See the Docker Hub tensorflow/serving repo for other versions of images you can pull.
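For example, to grab a specific release or the GPU build instead of latest (these tags are examples; check the repo for the tags that are actually published):

docker pull tensorflow/serving:2.8.0
docker pull tensorflow/serving:latest-gpu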

Running a serving image
The serving images (both CPU and GPU) have the following properties:

Port 8500 exposed for gRPC
Port 8501 exposed for the REST API
Optional environment variable MODEL_NAME (defaults to model)
Optional environment variable MODEL_BASE_PATH (defaults to /models)
When the serving image runs ModelServer, it runs it as follows:

tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME}
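
With the defaults above (MODEL_NAME=model, MODEL_BASE_PATH=/models), that expands to:

tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=model --model_base_path=/models/model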

To serve with Docker, you’ll need:

An open port on your host to serve on
A SavedModel to serve
A name for your model that your client will refer to
What you’ll do is run the Docker container, publish the container’s ports to your host’s ports, and mount your host’s path to the SavedModel to the path where the container expects models.

Let’s look at an example:

docker run -p 8501:8501 \
--mount type=bind,source=/path/to/my_model/,target=/models/my_model \
-e MODEL_NAME=my_model -t tensorflow/serving

In this case, we’ve started a Docker container, published the REST API port 8501 to our host’s port 8501, and taken a model we named my_model and bound it to the default model base path (${MODEL_BASE_PATH}/${MODEL_NAME} = /models/my_model). Finally, we’ve filled in the environment variable MODEL_NAME with my_model, and left MODEL_BASE_PATH at its default value.
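
Once the container is running, you can sanity-check it through the REST port. The model status endpoint is part of TensorFlow Serving’s REST API; the predict call is only a sketch, since the shape of "instances" depends entirely on your model’s signature:

# check that my_model loaded and is in state AVAILABLE
curl http://localhost:8501/v1/models/my_model

# send a prediction request (input shape is model-specific)
curl -X POST http://localhost:8501/v1/models/my_model:predict \
-d '{"instances": [[1.0, 2.0, 5.0]]}'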