Docker Community Forums

Share and learn in the Docker community.

Wget inside docker container does not work

(Rubuschl) #1

Hi, I’m not sure if this is the right place.

Currently I’m creating an image and logging in to the container in two ways. I’m having problems with both, and I’m not sure how to do it “right”.

I’m running Ubuntu 15.10 (host) with Docker 10, and I’ve prepared an Ubuntu 14.04 LTS image, since I want to develop a BSP with an older Buildroot '14 build system. I tagged the image v0.0.101. I’m behind a proxy; the host has valid proxy settings, which I copied into /etc/environment in the image. Ping and DNS resolution work on the host. In the image I created the user “myuser”, who also exists on the host.

Variation A )
$ docker run -ti ubuntu-14.04:0.0.101 su myuser
Running it like this lets me enter Buildroot and build the specified defconfig. Buildroot downloads packages and builds them until it reaches a package, e.g. gettext, whose configure script tests for a working fork() call. The test seems to fork a child from the su “myuser” process; the parent exits immediately, so I find myself back at the host’s prompt and the container has stopped. I assume the child process is killed because the parent ended. I never manage to get Buildroot past this fork() test.
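A container stops as soon as its main process (PID 1) exits, which is why the fork() test takes the whole container down with it. One way around this (a sketch, reusing the image tag from above) is to keep a shell as PID 1 and switch users inside it, so a short-lived su session doesn’t end the container:

```shell
# Start the container with bash as PID 1, so it outlives any child processes
docker run -ti ubuntu-14.04:0.0.101 /bin/bash

# Inside the container: switch to the unprivileged user with a login shell ("-"),
# which also reads /etc/environment and picks up the proxy variables
su - myuser
```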

Variation B )
$ docker run --user myuser -ti ubuntu-14.04:0.0.101 /bin/bash
fork() is no longer a problem, but no further packages can be downloaded: ping works, but wget fails. I quick-fixed this by executing “su myuser” inside the container, which solves the problem. Somehow I seem to be logged in as the specified user, but the environment is not set up correctly.
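A likely explanation: `docker run --user` does not start a login shell, so files like /etc/environment are never sourced and wget sees no proxy variables (apt still works because it reads /etc/apt/apt.conf on its own). A common workaround (a sketch; the proxy URL is a placeholder, not the poster’s actual proxy) is to pass the proxy into the container explicitly:

```shell
# Inject proxy settings into the container's environment directly,
# so tools like wget work even without a login shell
docker run --user myuser \
  -e http_proxy=http://proxy.example.com:8080 \
  -e https_proxy=http://proxy.example.com:8080 \
  -ti ubuntu-14.04:0.0.101 /bin/bash
```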

What would be the correct way for me to run an image and create a container?


For variation A, you are telling the container to run ‘su myuser’. In this scenario, once su has completed/ended/terminated for any reason, the container will stop.

How exactly is wget failing and do you have a specific requirement to ‘su myuser’?

Perhaps I’m misunderstanding what you are trying to accomplish, but if your objective is to create your own image so you can run containers from it, you should be creating a Dockerfile first. Have you been through the documentation regarding the Dockerfile?

The first link is step four of the Getting Started tutorial.

This link is the Dockerfile reference.

(Rubuschl) #3

Hi tcrockett,
Thank you for your reply! As a quickfix I figured out I can run a container from the image with “--user myuser” and then execute “su myuser” inside it.

As you mentioned, “su myuser” just runs one session; if that session ends, e.g. through a fork, the container stops too. On the other hand, “run --user myuser” seems to perform a different initialization, so my proxy settings were not fully available to e.g. wget (apt was not an issue, since /etc/apt/apt.conf was set). I was confused by this behavior.

As I mentioned, I’m currently using the Docker container much like a chroot. So yes, I’m through the Getting Started guide, and I skimmed the sections on building images and setting up recipes. I first wanted to develop a container interactively, to see what I will need for my Buildroot, the cross toolchain, etc., in order to draw up a list of packages to ADD to a base container. I.e., I’m currently also using Docker to debug my toolchain; I don’t know whether that is abusing it. Do you think it would be better to spend more time setting up a reasonable build config and build the image directly with docker build?

Sorry, I feel my questions are somewhat stupid, but I’m a bit confused, so I’m using Docker similar to what I know best: chroot. :slight_smile:


Hey, no worries about the questions, I just want to make sure we’re talking about the same thing and have covered the basics. What better place to get help with Docker than the Docker forum! :slight_smile:

Personally, I do think your time would be well spent to set up a Dockerfile and leverage docker build to create your image. This way you get one of the key benefits of Docker, which is repeatability when running your containers, and you get to learn more about Docker in the process.

Regarding running a container simply so you can have a shell to install tools and their dependencies for your buildroot, absolutely! Also, I don’t see any need for you to run su in the container for these particular tasks. This here should do the trick, giving you a shell prompt where you can start downloading and installing your tools:

docker run -it ubuntu:14.04.4 /bin/bash

Finally, with respect to debugging your toolchain, I don’t see any issues there either. However, if debugging entails downloading/installing extra tools only used for debugging, then I would recommend creating a separate Dockerfile that would include the extra debug tools.
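To make that repeatable, the advice above could be captured in a minimal Dockerfile. This is only a sketch: the package list and proxy URL are placeholder assumptions, not the poster’s actual setup, and should be adjusted to the toolchain’s real requirements.

```dockerfile
FROM ubuntu:14.04.4

# Bake the proxy into the image's environment (placeholder URL)
ENV http_proxy=http://proxy.example.com:8080 \
    https_proxy=http://proxy.example.com:8080

# Typical Buildroot host dependencies (adjust as needed)
RUN apt-get update && apt-get install -y \
    build-essential wget cpio unzip rsync bc \
    && rm -rf /var/lib/apt/lists/*

# Build as an unprivileged user, since Buildroot objects to running as root
RUN useradd -m myuser
USER myuser
WORKDIR /home/myuser
```

You would then build it with something like `docker build -t ubuntu-14.04:0.0.102 .` and every container started from the resulting image gets the same users, tools, and proxy settings.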