Docker run fails to start the command specified as CMD (path, library not found)

I have a container where I can start an interactive shell and then launch the target program from that shell:

docker run -it mycontainer bash 
/start.sh

However, when I start the program without going to the shell first, it fails:

docker run -it mycontainer /start.sh

It fails to find a shared library: (libpython3.4m.so.rh-python34-1.0: cannot open shared object file).

What could be the reason?

Ubuntu 14.04.4 LTS, Docker version 1.11.2, build b9f10c9
The Dockerfile is in the GitHub project identinetics/docker-saml2test2.

Red Hat packages newer versions of Python (as Software Collections) in a form where they can't be run without a wrapper script that sets, among other things, the LD_LIBRARY_PATH environment variable; that's why every single RUN line in your Dockerfile goes through an scl_enable wrapper. (For a long time I was trying to run a Python 2.7 application on CentOS 6, and it was awful for exactly this reason.) When Docker runs your script directly, it starts it in a very clean, non-login environment; it doesn't run login shell initialization, so whatever you're dropping into /etc/profile.d almost certainly isn't getting run.
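You can reproduce the effect outside Docker: a clean, non-login bash never reads profile scripts, so variables exported there simply don't exist for the process. The snippet below is only an illustration; the /tmp/fake_profile.sh file and the library path in it are stand-ins for what an SCL package typically drops into /etc/profile.d, not taken from your image:

```shell
# Stand-in for the /etc/profile.d snippet an SCL package would install.
cat > /tmp/fake_profile.sh <<'EOF'
export LD_LIBRARY_PATH=/opt/rh/rh-python34/root/usr/lib64
EOF

# What `docker run image /start.sh` resembles: clean env, profile never sourced.
env -i bash -c 'echo "without profile: [$LD_LIBRARY_PATH]"'
# prints: without profile: []

# What happens once the snippet is explicitly sourced.
env -i bash -c 'source /tmp/fake_profile.sh; echo "with profile: [$LD_LIBRARY_PATH]"'
# prints: with profile: [/opt/rh/rh-python34/root/usr/lib64]
```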

This looks from the outside like it’s mostly just a Python package. You could probably migrate the container to Ubuntu 16.04/Python 3.5 pretty straightforwardly and avoid this issue entirely, without changing any other container in your setup or any details of how you run it.
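If you go that route, a minimal sketch might look like the following. This is an assumption based on the description, not taken from your actual repo; the package list and the start script location would need adjusting:

```dockerfile
# Hypothetical sketch: stock Ubuntu 16.04 ships Python 3.5 directly,
# so no SCL wrapper or LD_LIBRARY_PATH juggling is needed.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
COPY start.sh /start.sh
CMD ["/start.sh"]
```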

Two other options are to write an entrypoint script that sets the scl environment variables before running the CMD process, or to figure out what LD_LIBRARY_PATH the scl_enable script sets and set it directly with an ENV statement in the Dockerfile.
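The entrypoint variant could be sketched like this. The collection name rh-python34 is taken from your error message; everything else is an assumption. The heredoc only writes the script so its syntax can be checked here; in practice you would ship entrypoint.sh as a file in the build context:

```shell
# Hypothetical entrypoint: enable the SCL collection, then exec the CMD
# so /start.sh inherits LD_LIBRARY_PATH and the rest of the scl environment.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/bash
set -e
# Pulls in PATH, LD_LIBRARY_PATH, etc. for the collection.
source scl_source enable rh-python34
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh
bash -n /tmp/entrypoint.sh && echo "entrypoint syntax OK"
```

In the Dockerfile you would then wire it up with something like COPY entrypoint.sh /, ENTRYPOINT ["/entrypoint.sh"] and CMD ["/start.sh"]. The ENV alternative amounts to hard-coding whatever path scl_source computes, e.g. ENV LD_LIBRARY_PATH=/opt/rh/rh-python34/root/usr/lib64 — that path follows the usual SCL layout, so verify it inside your running container first.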

Thank you, it does work with an explicit scl_source enable.

What confused me was that docker run with an interactive shell did work. The reason seems to be that the source scl_source enable from the Dockerfile had set LD_LIBRARY_PATH when starting with CMD=bash, whereas CMD='bash -c ' seems to initialize a fresh environment.