Docker Community Forums

Share and learn in the Docker community.

Docker Says New Python Version Required

Running Docker version 20.10.1, build 831ebea on Linux Mint 20 Ulyana. The application serves a machine learning model with Flask (WSGI), Celery (task queue) and Redis (message broker). This is what the Dockerfile looks like:

FROM python:3.6.9
WORKDIR /app

COPY ./Models ./Models
COPY ./src ./src
COPY pip_requirements.txt ./
COPY run_manager.sh ./

ENV C_FORCE_ROOT='true'
ENV PATH="/root/.local/bin:${PATH}"

RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections \
    && apt-get --assume-yes update \
    && apt-get --assume-yes install --no-install-recommends apt-utils \
    && apt-get --assume-yes upgrade \
    && apt-get --assume-yes install libsnappy-dev \
    && apt-get --assume-yes install redis-server \
    && apt-get --assume-yes install python3-celery \
    && python3 -m pip install --user --no-warn-script-location --requirement pip_requirements.txt \
    && rm pip_requirements.txt

EXPOSE 7000

As you can see, the Python packages are installed with pip, and one of the required packages is fastparquet==0.3.2.

The image built and the container ran perfectly until about a week ago. Then today this error occurred, specifically while pip was collecting fastparquet.

ERROR: Command errored out with exit status 1:
     command: /usr/local/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-3zptxm6d/fastparquet/setup.py'"'"'; __file__='"'"'/tmp/pip-install-3zptxm6d/fastparquet/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-3zptxm6d/fastparquet/pip-egg-info
         cwd: /tmp/pip-install-3zptxm6d/fastparquet/
    Complete output (68 lines):
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 154, in save_modules
        yield saved
      File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
        yield
      File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 250, in run_setup
        _execfile(setup_script, ns)
      File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 45, in _execfile
        exec(code, globals, locals)
      File "/tmp/easy_install-3cxu7pbx/numpy-1.20.0rc2/setup.py", line 30, in <module>
        else:
    RuntimeError: Python version >= 3.7 required.

My question: Isn't Docker supposed to maintain the versions of the libraries and install exactly the same versions on any machine when the image is built? Even if the latest version of fastparquet is somehow incompatible with Python < 3.7, why would I need to change the environment when I am in a Docker container and using the same version of fastparquet as before?

From my rather rudimentary knowledge of Docker, I was operating under the assumption that any Dockerfile built today on my Linux Mint desktop would build exactly the same way on any Linux box running dockerd, even a decade from now, no matter what dependency upgrades appeared in the meantime. Isn't that the reason we use Docker? Any pointers around this would be helpful, particularly on why the same Dockerfile that worked in 2020 is breaking in 2021 (yes, this is my first Docker build of the year, and happy new year everyone).

An image is a point-in-time snapshot of your application, its dependencies, its config files and, hopefully, a clever entrypoint script that prepares the container configuration before the main process starts.

If you re-build your image, chances are high that you won't end up with exactly the same dependency versions in the built image, because dependencies fetched from public package repositories are affected by changes you cannot control.
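In this specific failure, the traceback suggests what changed: fastparquet 0.3.2 declares an unpinned numpy as a build-time requirement, so at image-build time easy_install fetched the newest available release, numpy 1.20.0rc2, whose setup.py rejects interpreters older than 3.7. A minimal sketch of that guard (the function name is mine; the check mirrors the RuntimeError in the traceback above):

```python
def supports_numpy_120(version_info):
    """numpy 1.20 dropped Python 3.6 support; its setup.py raises
    RuntimeError("Python version >= 3.7 required.") on older interpreters."""
    return tuple(version_info[:2]) >= (3, 7)

print(supports_numpy_120((3, 6, 9)))  # False: the python:3.6.9 container fails here
print(supports_numpy_120((3, 7, 0)))  # True
```

So neither the base image nor the pinned fastparquet version changed; a new release of a floating transitive dependency did.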

You should start from a proper base image for your container, like:

docker run -it --rm python:3.6 bash

docker run -it --rm python:3.6-alpine sh

Thanks for the response. I understand that changes to the public repositories are beyond my control, but don't official repos provide at least a loose guarantee not to break my code, or at least not often? Specific to my example, I started from the python:3.6.9 base image and installed fastparquet==0.3.2 on top of it. So what changed in the interim: the base image itself, or fastparquet? If a new version of Python or fastparquet were released, it would appear as something like python:3.9.0 or fastparquet==1.3.5 instead of changing the old versions, wouldn't it?

Also, how do I avoid these situations in the future and keep the dependencies compatible (which is the whole point of Docker)? In my Dockerfile, did I pin the requirements too tightly, or too loosely?
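One mitigation sketch for this particular failure (two assumptions to verify: that numpy 1.19.5 is the last release supporting Python 3.6, and that fastparquet's build step will accept an already-installed numpy instead of fetching the newest one):

```dockerfile
# Sketch: install a known 3.6-compatible numpy first, so fastparquet's
# build-time numpy requirement is already satisfied and easy_install does
# not fetch the latest (3.7-only) numpy during the build.
RUN python3 -m pip install --user numpy==1.19.5 \
    && python3 -m pip install --user --no-warn-script-location --requirement pip_requirements.txt
```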

Thanks for the advice. From what I have read elsewhere on the internet, although Alpine is known for providing a minimal image, it is not recommended for beginners (which I am), as it requires a few hacks to work with certain libraries. Also, most of the commands in my Dockerfile are not supported by Alpine out of the box, so I would have to build the system up from the ground.

Is the official python:3.6.9 image, based on Debian Buster, not a proper base image? Or will the Alpine image never change in a way that breaks my code?

And another answer lifted by the one-shot bot lewish95: https://stackoverflow.com/a/52879187/3460948
Don't expect any follow-ups on its posts.

You are aware that dependencies have dependencies (which can have more dependencies) as well, so just "pinning" the Python version and one or more specific packages does not pin the whole dependency tree, does it?

Like I already wrote: without your own private repository, you have no control over the repo state.
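One common way to pin the whole tree is to freeze it from an environment where the build still works (a sketch; the image tag and output path are hypothetical):

```shell
# Inside a container started from the last known-good image, e.g.
#   docker run -it --rm my-ml-service:last-good bash
# capture every installed package, direct and transitive, as exact pins:
python3 -m pip freeze > /tmp/pinned_requirements.txt

# Each line is an exact name==version pin; this file can then replace a
# loosely pinned pip_requirements.txt in the build context.
cat /tmp/pinned_requirements.txt
```

That pins the full tree, though it still assumes the public index keeps serving those exact versions; only a private mirror removes that dependency on external state.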