Docker Community Forums

Share and learn in the Docker community.

Docker worker hostname from container

Hi

I am very new to Docker. Is there any way to get the Docker worker node's hostname from inside a container?
We use docker service create, so passing $HOSTNAME as an environment variable only gives us the hostname of the Docker manager.

Please help

Hi All,

The Docker version, if needed, is Docker version 17.06.2-ee-25.

I know that in the CE version we can use {{.Node.Hostname}}, but it is not available in EE.
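For context, this is roughly how that placeholder is used on an engine that supports node templates in docker service create (the service name and image below are just placeholders for illustration, not from our setup):

docker service create \
  --name node-hostname-demo \
  --env NODE_HOSTNAME="{{.Node.Hostname}}" \
  alpine:3.9 sleep 1d

Each task container then gets NODE_HOSTNAME set to the hostname of the worker node it was scheduled on, which can be checked with printenv NODE_HOSTNAME inside the container.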

1. Pull the container from the NGC container registry to the server. See Pulling A Container.
2. On the server, create a subdirectory called mydocker.
   Note: This is an arbitrary directory name.
3. Inside this directory, create a file called Dockerfile (capitalization is important). This is the default name that Docker looks for when creating a container. The Dockerfile should look similar to the following:

[username ~]$ mkdir mydocker
[username ~]$ cd mydocker
[username mydocker]$ vi Dockerfile
[username mydocker]$ more Dockerfile
FROM nvcr.io/nvidia/tensorflow:19.03

RUN apt-get update

RUN apt-get install -y octave
[username mydocker]$

There are three lines in the Dockerfile. The first line tells Docker to start with the container nvcr.io/nvidia/tensorflow:19.03; this is the base container for the new container. The second line performs a package update for the container. It doesn't update any of the applications in the container, but updates the apt-get database; this is needed before we install new applications in the container. The third and last line tells Docker to install the package octave into the container using apt-get.

The Docker command to create the container is:

docker build -t nvcr.io/nvidian_sas/tensorflow_octave:19.03_with_octave .
Note: This command uses the default file Dockerfile for creating the container.
The command starts with docker build. The -t option creates a tag for this new container. Notice that the tag specifies the project in the nvcr.io registry where the container is to be stored. As an example, the project nvidian_sas was used along with the registry nvcr.io.

Projects can be created by your local administrator who controls access to nvcr.io, or they can give you permission to create them. This is where you can store your specific containers and even share them with your colleagues.

[username mydocker]$ docker build -t nvcr.io/nvidian_sas/tensorflow_octave:19.03_with_octave .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM nvcr.io/nvidia/tensorflow:19.03
 ---> 56f2980b1e37
Step 2/3 : RUN apt-get update
 ---> Running in 69cffa7bbadd
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:2 http://ppa.launchpad.net/openjdk-r/ppa/ubuntu xenial InRelease [17.5 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:4 http://ppa.launchpad.net/openjdk-r/ppa/ubuntu xenial/main amd64 Packages [7096 B]
Get:5 http://security.ubuntu.com/ubuntu xenial-security/universe Sources [42.0 kB]
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [380 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:8 http://security.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [12.8 kB]
Get:9 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [178 kB]
Get:10 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [2931 B]
Get:11 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial/universe Sources [9802 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1558 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [14.1 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB]
In the brief output from the docker build … command shown above, each line in the Dockerfile is a Step. In the output, you can see the first and second steps (commands). Docker echoes these commands to standard out (stdout) so you can watch what it is doing, or you can capture the output for documentation.

After the image is built, remember that we haven't pushed it to a repository yet; it exists only as a local Docker image. Docker prints the image ID to stdout at the very end and tells you whether it successfully created and tagged the image.

If you don’t see Successfully … at the end of the output, examine your Dockerfile for errors (perhaps try to simplify it) or try a very simple Dockerfile to ensure that Docker is working properly.
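As a quick follow-up sketch (not part of the original guide, and assuming you have push rights to the nvidian_sas project): once the build reports success, you can confirm the image exists locally and push it to the registry:

[username mydocker]$ docker images nvcr.io/nvidian_sas/tensorflow_octave
[username mydocker]$ docker push nvcr.io/nvidian_sas/tensorflow_octave:19.03_with_octave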

Please ignore the response of lewish95; it is entirely pasted from list items 1, 2 and 3 in chapter 2.4 of https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html#building.

According to the docs (see the bottom of the page), the template variable you are trying to use was not implemented in 17.06. You cannot expect an older EE version to have features that a more recent CE version has.
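If you want to verify which engine version a node is actually running before relying on the placeholder, a simple check (just a generic sketch, nothing EE-specific) is:

docker version --format '{{.Server.Version}}'

On an engine that does support the placeholder, passing it via docker service create --env, as mentioned earlier in the thread, will populate the variable with each worker's hostname.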

With a valid subscription, you can update to a more recent version of the engine.
Running a Docker EE engine without a valid subscription is a license violation, even if you downloaded the binaries during a timeframe when your subscription was valid.