Keep env variables from entrypoint

I’ve searched for a long time, and I find it strange that I cannot find a solution.
I create a Docker container like this:
docker run -dit --name test -w /project espressif/idf bash
Then I try to build my app with:
docker exec -it test idf.py build
but idf.py is not recognized.
If I rerun the entrypoint, everything works, but it adds a few extra seconds every time I do a build:
docker exec -dit test /opt/esp/entrypoint.sh idf.py build

But if I do:
docker run -it --name test -w /project espressif/idf bash
and then run:
idf.py build
repeatedly, the entrypoint variables keep working.

So my question is: can I reuse the terminal created by my first run command every time, and keep my variables that way?
docker run -dit --name test -w /project espressif/idf bash

Please share your entrypoint, otherwise we can only guess. It is also not clear what you mean by variables, as I don’t see any reference to variables in your post. If you create the entrypoint properly, everything will run before your Python app, which is the argument of the entrypoint.

When you share code, please use code blocks as described in the topic below:

The entry point:

#!/usr/bin/env bash
set -e

. $IDF_PATH/export.sh

exec "$@"
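For context on how this entrypoint works: export.sh populates the environment of the entrypoint shell, and exec "$@" then replaces that shell with the command given to the container, which therefore inherits all the exported variables. A minimal plain-shell sketch of that mechanic, with a made-up variable name standing in for the IDF environment:

```shell
# A stand-in for the entrypoint process: export a variable, then exec the
# "real" command. The exec'd command inherits the exported environment.
out=$(sh -c 'export DEMO_VAR=ready; exec sh -c "echo child sees: \$DEMO_VAR"')
echo "$out"   # child sees: ready
```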

and the $IDF_PATH/export.sh is:

# This script should be sourced, not executed.

__realpath() {
    wdir="$PWD"; [ "$PWD" = "/" ] && wdir=""
    arg=$1
    case "$arg" in
        /*) scriptdir="${arg}";;
        *) scriptdir="$wdir/${arg#./}";;
    esac
    scriptdir="${scriptdir%/*}"
    echo "$scriptdir"
}


__verbose() {
    [ -n "${IDF_EXPORT_QUIET}" ] && return
    echo "$@"
}

__script_dir(){
    # shellcheck disable=SC2169,SC2169,SC2039,SC3010,SC3028  # unreachable with 'dash'
    if [[ "$OSTYPE" == "darwin"* ]]; then
        # convert possibly relative path to absolute
        script_dir="$(__realpath "${self_path}")"
        # resolve any ../ references to make the path shorter
        script_dir="$(cd "${script_dir}" || exit 1; pwd)"
    else
        # convert to full path and get the directory name of that
        script_name="$(readlink -f "${self_path}")"
        script_dir="$(dirname "${script_name}")"
    fi
    if [ "$script_dir" = '.' ]
    then
       script_dir="$(pwd)"
    fi
    echo "$script_dir"
}

__is_dir_esp_idf(){
    if [ ! -f "$1/tools/idf.py" ] || [ ! -f "$1/tools/idf_tools.py" ]
    then
        # Echo command here is not used for printing to the terminal, but as non-empty return value from function.
        echo "THIS DIRECTORY IS NOT ESP-IDF"
    fi
}

__main() {
    # The file doesn't have executable permissions, so this shouldn't really happen.
    # Doing this in case someone tries to chmod +x it and execute...

    # shellcheck disable=SC2128,SC2169,SC2039,SC3054 # ignore array expansion warning
    if [ -n "${BASH_SOURCE-}" ] && [ "${BASH_SOURCE[0]}" = "${0}" ]
    then
        echo "This script should be sourced, not executed:"
        # shellcheck disable=SC2039,SC3054  # reachable only with bash
        echo ". ${BASH_SOURCE[0]}"
        return 1
    fi

    # If using bash or zsh, try to guess IDF_PATH from script location.
    self_path=""
    # shellcheck disable=SC2128  # ignore array expansion warning
    if [ -n "${BASH_SOURCE-}" ]
    then
        self_path="${BASH_SOURCE}"
    elif [ -n "${ZSH_VERSION-}" ]
    then
        self_path="${(%):-%x}"
    fi

    script_dir="$(__script_dir)"
    # Since sh or dash shells can't detect script_dir correctly, check if script_dir looks like an IDF directory
    is_script_dir_esp_idf=$(__is_dir_esp_idf "${script_dir}")

    if [ -z "${IDF_PATH}" ]
    then
        # IDF_PATH not set in the environment.

        if [ -n "${is_script_dir_esp_idf}" ]
        then
            echo "Could not detect IDF_PATH. Please set it before sourcing this script:"
            echo "  export IDF_PATH=(add path here)"
            return 1
        fi
        export IDF_PATH="${script_dir}"
        echo "Setting IDF_PATH to '${IDF_PATH}'"
    else
        # IDF_PATH came from the environment, check if the path is valid
        # Set IDF_PATH to script_dir, if script_dir looks like an IDF directory
        if [ ! "${IDF_PATH}" = "${script_dir}" ] && [ -z "${is_script_dir_esp_idf}" ]
        then
            # Changing IDF_PATH is important when there are 2 ESP-IDF versions in different directories.
            # Sourcing this script without the change would cause sourcing the wrong export script.
            echo "Resetting IDF_PATH from '${IDF_PATH}' to '${script_dir}' "
            export IDF_PATH="${script_dir}"
        fi
        # Check if this path looks like an IDF directory
        is_idf_path_esp_idf=$(__is_dir_esp_idf "${IDF_PATH}")
        if [ -n "${is_idf_path_esp_idf}" ]
        then
            echo "IDF_PATH is set to '${IDF_PATH}', but it doesn't look like an ESP-IDF directory."
            echo "If you have set IDF_PATH manually, check if the path is correct."
            return 1
        fi

        # The variable might have been set (rather than exported), re-export it to be sure
        export IDF_PATH="${IDF_PATH}"
    fi

    old_path="$PATH"

    echo "Detecting the Python interpreter"
    . "${IDF_PATH}/tools/detect_python.sh"

    echo "Checking Python compatibility"
    "$ESP_PYTHON" "${IDF_PATH}/tools/python_version_checker.py"

    __verbose "Checking other ESP-IDF version."
    idf_deactivate=$("$ESP_PYTHON" "${IDF_PATH}/tools/idf_tools.py" export --deactivate) || return 1
    eval "${idf_deactivate}"

    __verbose "Adding ESP-IDF tools to PATH..."
    # Call idf_tools.py to export tool paths
    export IDF_TOOLS_EXPORT_CMD=${IDF_PATH}/export.sh
    export IDF_TOOLS_INSTALL_CMD=${IDF_PATH}/install.sh
    # Allow calling some IDF python tools without specifying the full path
    # ${IDF_PATH}/tools is already added by 'idf_tools.py export'
    IDF_ADD_PATHS_EXTRAS="${IDF_PATH}/components/esptool_py/esptool"
    IDF_ADD_PATHS_EXTRAS="${IDF_ADD_PATHS_EXTRAS}:${IDF_PATH}/components/espcoredump"
    IDF_ADD_PATHS_EXTRAS="${IDF_ADD_PATHS_EXTRAS}:${IDF_PATH}/components/partition_table"
    IDF_ADD_PATHS_EXTRAS="${IDF_ADD_PATHS_EXTRAS}:${IDF_PATH}/components/app_update"

    idf_exports=$("$ESP_PYTHON" "${IDF_PATH}/tools/idf_tools.py" export "--add_paths_extras=${IDF_ADD_PATHS_EXTRAS}") || return 1
    eval "${idf_exports}"
    export PATH="${IDF_ADD_PATHS_EXTRAS}:${PATH}"

    __verbose "Checking if Python packages are up to date..."
    "$ESP_PYTHON" "${IDF_PATH}/tools/idf_tools.py" check-python-dependencies || return 1

    if [ -n "$BASH" ]
    then
        path_prefix=${PATH%%${old_path}}
        # shellcheck disable=SC2169,SC2039  # unreachable with 'dash'
        if [ -n "${path_prefix}" ]; then
            __verbose "Added the following directories to PATH:"
        else
            __verbose "All paths are already set."
        fi
        old_ifs="$IFS"
        IFS=":"
        for path_entry in ${path_prefix}
        do
            __verbose "  ${path_entry}"
        done
        IFS="$old_ifs"
        unset old_ifs
    else
        __verbose "Updated PATH variable:"
        __verbose "  ${PATH}"
    fi

    uninstall=$("$ESP_PYTHON" "${IDF_PATH}/tools/idf_tools.py" uninstall --dry-run) || return 1
    if [ -n "$uninstall" ]
    then
        __verbose ""
        __verbose "Detected installed tools that are not currently used by active ESP-IDF version."
        __verbose "${uninstall}"
        __verbose "To free up even more space, remove installation packages of those tools. Use option '${ESP_PYTHON} ${IDF_PATH}/tools/idf_tools.py uninstall --remove-archives'."
        __verbose ""
    fi

    __verbose "Done! You can now compile ESP-IDF projects."
    __verbose "Go to the project directory and run:"
    __verbose ""
    __verbose "  idf.py build"
    __verbose ""
}

__cleanup() {
    unset old_path
    unset paths
    unset path_prefix
    unset path_entry
    unset IDF_ADD_PATHS_EXTRAS
    unset idf_exports
    unset idf_deactivate
    unset ESP_PYTHON
    unset SOURCE_ZSH
    unset SOURCE_BASH
    unset WARNING_MSG
    unset uninstall
    unset is_idf_path_esp_idf
    unset is_script_dir_esp_idf

    unset __realpath
    unset __main
    unset __verbose
    unset __enable_autocomplete
    unset __cleanup
    unset __is_dir_esp_idf

    # Not unsetting IDF_PYTHON_ENV_PATH, it can be used by IDF build system
    # to check whether we are using a private Python environment

    return $1
}


__enable_autocomplete() {
    click_version="$(python -c 'import click; print(click.__version__.split(".")[0])')"
    if [ "${click_version}" -lt 8 ]
    then
        SOURCE_ZSH=source_zsh
        SOURCE_BASH=source_bash
    else
        SOURCE_ZSH=zsh_source
        SOURCE_BASH=bash_source
    fi
    if [ -n "${ZSH_VERSION-}" ]
    then
        autoload -Uz compinit && compinit -u
        eval "$(env _IDF.PY_COMPLETE=$SOURCE_ZSH idf.py)" || echo "WARNING: Failed to load shell autocompletion for zsh version: $ZSH_VERSION!"
    elif [ -n "${BASH_SOURCE-}" ]
    then
        WARNING_MSG="WARNING: Failed to load shell autocompletion for bash version: $BASH_VERSION!"
        # shellcheck disable=SC3028,SC3054,SC2086  # code block for 'bash' only
        [ ${BASH_VERSINFO[0]} -lt 4 ] && { echo "$WARNING_MSG"; return; }
        eval "$(env LANG=en _IDF.PY_COMPLETE=$SOURCE_BASH idf.py)"  || echo "$WARNING_MSG"
    fi
}

__main && __enable_autocomplete
__cleanup $?

I can run my idf.py build command fine if I do:
docker exec -it test /opt/esp/entrypoint.sh idf.py build
But the problem with this is that I have to wait for the entrypoint script to finish.

But I finally believe I found what I was looking for.
If I do:
docker run -dit --name test -w /project espressif/idf
I have a bash shell with entrypoint.sh loaded, but in the background (-d, detached).
Then I run:
docker attach --detach-keys="%" test
and I am back in my nice terminal, where idf.py build works.
With % I go back to my host, and vice versa. So I guess it is as if I have only one terminal running in the container with the entrypoint loaded, and I attach to and detach from it.

I think I know what the misunderstanding is. The entrypoint only prepares the application to start. When you use docker exec, the entrypoint does not run, only your command. It is like using different terminal windows.

If you want environment variables to work when using docker exec, you need to declare them in the Dockerfile like this:

ENV PATH="$PATH:/path/to/bin"

When you used docker attach, you attached to the same “terminal” in which you originally started bash.

If you use a volume, you can just use docker run, save everything to the volume, and remove the container, without running it in detached mode.

docker run -it --name test -w /project -v "$(pwd)/data:/data" espressif/idf idf.py build

or

docker run -it --name test -w /project -v "data:/data" espressif/idf idf.py build

Just make sure you save everything in /data.

Is this container for development? If it is not, I would just add the Python build command to the Dockerfile.

Yes, this is a possible way, but as you can see from export.sh, the env variables are much more complicated to set “by hand” in the Dockerfile.

docker run doesn’t work for me, as the mounted volumes are not synchronised, because I use a docker context to a remote Docker host.

For now, attach is the best way, because I have one “good” bash terminal that is set up by entrypoint.sh and has all the env variables ready.
The only addition that would be nice is a way to send commands without entering the container’s terminal. As an example:
I create my container like this:
docker run -dit --name test -w /project espressif/idf
and use docker attach test to run my commands, like idf.py build, and everything works, but from inside the container’s terminal.
Is there any way to do the same without entering the container’s terminal?
Something like docker attach test idf.py build, similar to docker exec test idf.py build?

It is like @rimelek wrote: docker exec creates a new process (a sibling of the process created by the entrypoint script) inside the container. Since this new process is not a child of the process created by the entrypoint script, no variables declared during the execution of the entrypoint script are available in it.

Prefixing the variable assignments with export will not help the situation either: export makes a variable available to child processes, not to the parent or to siblings. Since the process from the entrypoint and the process from exec do not have a parent-child relationship, the exec process will not see the variables.

You could still test with one of the variables whether exporting it makes it visible in the exec process. After updating the entrypoint script, rebuilding the image and starting a container, you can use docker exec ... env to check whether the variable is available.
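To see why export alone cannot bridge sibling processes, here is a quick plain-shell sketch that needs no Docker (the variable name is invented): a shell that exports a variable passes it on to its own children, but a separately started process, like the one docker exec creates, sees nothing.

```shell
# "Entrypoint" shell exports a variable; its own child sees it.
child_view=$(sh -c 'export IDF_DEMO=/opt/esp; sh -c "echo \$IDF_DEMO"')

# A separately started process (the docker exec analogue) does not.
sibling_view=$(sh -c 'echo "${IDF_DEMO:-unset}"')

echo "child sees:   $child_view"      # /opt/esp
echo "sibling sees: $sibling_view"    # unset
```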

If this doesn’t work, then I am afraid you already know your options:

  • either declare the variables as env variables when creating the container (`-e key1=val1 -e key2=val2 …`)
  • use the exec command you already figured out: docker exec -it test /opt/esp/entrypoint.sh idf.py build
  • stick to docker attach
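If the second option wins, a small wrapper function on the host keeps it short to type. idf here is just a hypothetical helper name; test and the entrypoint path are the ones used earlier in this thread:

```shell
# Hypothetical host-side helper: re-run the entrypoint (which sources
# export.sh) in front of every idf.py invocation inside the container.
idf() {
    docker exec -it test /opt/esp/entrypoint.sh idf.py "$@"
}
# Usage: idf build
```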

I meant only changing PATH, as in the example in my previous post, so at least idf.py would be found. If you need other variables too, you need to run the export script again. A simple build.sh copied into the image would solve it: the script would export the variables and then run the command you want, just like the entrypoint does. Or you can do what @meyay suggested and reuse the entrypoint. That is actually the easiest.
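A sketch of what such a build.sh could look like, assuming the same $IDF_PATH convention the image’s entrypoint already uses; it is written out with a heredoc here so the whole file is visible:

```shell
# Generate the hypothetical build.sh that would be COPY'd into the image.
cat > build.sh <<'EOF'
#!/usr/bin/env bash
set -e
. "$IDF_PATH/export.sh"   # re-export the toolchain variables first
idf.py build "$@"         # then run the actual build
EOF
chmod +x build.sh
```

Inside the container you would then call it with docker exec, which behaves like the entrypoint variant above, just with the build command baked in.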

Many IDEs support file synchronization. If you don’t use an IDE, you can just use rsync.

You don’t have to rely on Docker.