Keep env variables from entrypoint

I’ve searched for a long time, and I find it strange that I cannot find a solution.
I create a Docker container like this:
docker run -dit --name test -w /project espressif/idf bash
Then I try to build my app like:
docker exec -it test build
but the command is not recognized.
If I rerun the entrypoint, everything is fine, but it adds a few extra seconds every time I do a build:
docker exec -dit test /opt/esp/ build

But if I do:
docker run -it --name test -w /project espressif/idf bash
and then run build
many times, the entrypoint variables keep working fine.

So, my question is: can I reuse the terminal that is created by my first run command every time, and in this way keep my variables?
docker run -dit --name test -w /project espressif/idf bash

Please share your entrypoint, otherwise we can only guess. It is not clear what you mean by variables, as I don’t see any reference to variables in your post. If you create the entrypoint properly, everything will run before your Python app, which is the argument of the entrypoint.

When you share code, use code blocks as mentioned in the topic below:

The entry point:

#!/usr/bin/env bash
set -e


exec "$@"
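For context: `exec "$@"` replaces the shell with whatever command was passed to the script, so that command inherits every variable the script exported before the exec. A minimal sketch of the pattern with plain `sh`, no Docker involved (the variable name is just a placeholder):

```shell
# The middle shell plays the role of the entrypoint: it exports a variable,
# then exec's the final command, which inherits the exported environment.
sh -c 'export GREETING=hello; exec sh -c "echo $GREETING world"'
# prints "hello world"
```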

and the $IDF_PATH/ is:

# This script should be sourced, not executed.

__realpath() {
    wdir="$PWD"; [ "$PWD" = "/" ] && wdir=""
    arg=$1
    case "$arg" in
        /*) scriptdir="${arg}";;
        *) scriptdir="$wdir/${arg#./}";;
    esac
    echo "$scriptdir"
}

__verbose() {
    [ -n "${IDF_EXPORT_QUIET}" ] && return
    echo "$@"
}

__script_dir() {
    # shellcheck disable=SC2169,SC2169,SC2039,SC3010,SC3028  # unreachable with 'dash'
    if [[ "$OSTYPE" == "darwin"* ]]; then
        # convert possibly relative path to absolute
        script_dir="$(__realpath "${self_path}")"
        # resolve any ../ references to make the path shorter
        script_dir="$(cd "${script_dir}" || exit 1; pwd)"
    else
        # convert to full path and get the directory name of that
        script_name="$(readlink -f "${self_path}")"
        script_dir="$(dirname "${script_name}")"
    fi
    if [ "$script_dir" = '.' ]
    then
        script_dir="$PWD"
    fi
    echo "$script_dir"
}

__is_dir_esp_idf() {
    if [ ! -f "$1/tools/" ] || [ ! -f "$1/tools/" ]
    then
        # Echo command here is not used for printing to the terminal, but as non-empty return value from function.
        echo "THIS DIRECTORY DOES NOT LOOK LIKE ESP-IDF DIRECTORY"
    fi
}

__main() {
    # The file doesn't have executable permissions, so this shouldn't really happen.
    # Doing this in case someone tries to chmod +x it and execute...

    # shellcheck disable=SC2128,SC2169,SC2039,SC3054 # ignore array expansion warning
    if [ -n "${BASH_SOURCE-}" ] && [ "${BASH_SOURCE[0]}" = "${0}" ]
    then
        echo "This script should be sourced, not executed:"
        # shellcheck disable=SC2039,SC3054  # reachable only with bash
        echo ". ${BASH_SOURCE[0]}"
        return 1
    fi

    # If using bash or zsh, try to guess IDF_PATH from script location.
    # shellcheck disable=SC2128  # ignore array expansion warning
    if [ -n "${BASH_SOURCE-}" ]
    then
        self_path="${BASH_SOURCE}"
    elif [ -n "${ZSH_VERSION-}" ]
    then
        self_path="${(%):-%x}"
    fi

    # Since sh or dash shells can't detect script_dir correctly, check if script_dir looks like an IDF directory
    is_script_dir_esp_idf=$(__is_dir_esp_idf "${script_dir}")

    if [ -z "${IDF_PATH}" ]
    then
        # IDF_PATH not set in the environment.

        if [ -n "${is_script_dir_esp_idf}" ]
        then
            echo "Could not detect IDF_PATH. Please set it before sourcing this script:"
            echo "  export IDF_PATH=(add path here)"
            return 1
        fi
        export IDF_PATH="${script_dir}"
        echo "Setting IDF_PATH to '${IDF_PATH}'"
    else
        # IDF_PATH came from the environment, check if the path is valid
        # Set IDF_PATH to script_dir, if script_dir looks like an IDF directory
        if [ ! "${IDF_PATH}" = "${script_dir}" ] && [ -z "${is_script_dir_esp_idf}" ]
        then
            # Changing IDF_PATH is important when there are 2 ESP-IDF versions in different directories.
            # Sourcing this script without the change would cause sourcing the wrong export script.
            echo "Resetting IDF_PATH from '${IDF_PATH}' to '${script_dir}'"
            export IDF_PATH="${script_dir}"
        fi
        # Check if this path looks like an IDF directory
        is_idf_path_esp_idf=$(__is_dir_esp_idf "${IDF_PATH}")
        if [ -n "${is_idf_path_esp_idf}" ]
        then
            echo "IDF_PATH is set to '${IDF_PATH}', but it doesn't look like an ESP-IDF directory."
            echo "If you have set IDF_PATH manually, check if the path is correct."
            return 1
        fi

        # The variable might have been set (rather than exported), re-export it to be sure
        export IDF_PATH="${IDF_PATH}"
    fi


    echo "Detecting the Python interpreter"
    . "${IDF_PATH}/tools/"

    echo "Checking Python compatibility"
    "$ESP_PYTHON" "${IDF_PATH}/tools/"

    __verbose "Checking other ESP-IDF version."
    idf_deactivate=$("$ESP_PYTHON" "${IDF_PATH}/tools/" export --deactivate) || return 1
    eval "${idf_deactivate}"

    __verbose "Adding ESP-IDF tools to PATH..."
    # Call to export tool paths
    # Allow calling some IDF python tools without specifying the full path
    # ${IDF_PATH}/tools is already added by ' export'

    idf_exports=$("$ESP_PYTHON" "${IDF_PATH}/tools/" export "--add_paths_extras=${IDF_ADD_PATHS_EXTRAS}") || return 1
    eval "${idf_exports}"

    __verbose "Checking if Python packages are up to date..."
    "$ESP_PYTHON" "${IDF_PATH}/tools/" check-python-dependencies || return 1

    if [ -n "$BASH" ]
    then
        # shellcheck disable=SC2169,SC2039  # unreachable with 'dash'
        if [ -n "${path_prefix}" ]; then
            __verbose "Added the following directories to PATH:"
        else
            __verbose "All paths are already set."
        fi
        old_ifs="$IFS"
        IFS=":"
        for path_entry in ${path_prefix}
        do
            __verbose "  ${path_entry}"
        done
        IFS="$old_ifs"
        unset old_ifs
        __verbose "Updated PATH variable:"
        __verbose "  ${PATH}"
    fi

    uninstall=$("$ESP_PYTHON" "${IDF_PATH}/tools/" uninstall --dry-run) || return 1
    if [ -n "$uninstall" ]
    then
        __verbose ""
        __verbose "Detected installed tools that are not currently used by active ESP-IDF version."
        __verbose "${uninstall}"
        __verbose "To free up even more space, remove installation packages of those tools. Use option '${ESP_PYTHON} ${IDF_PATH}/tools/ uninstall --remove-archives'."
        __verbose ""
    fi

    __verbose "Done! You can now compile ESP-IDF projects."
    __verbose "Go to the project directory and run:"
    __verbose ""
    __verbose " build"
    __verbose ""
}

__cleanup() {
    unset old_path
    unset paths
    unset path_prefix
    unset path_entry
    unset idf_exports
    unset idf_deactivate
    unset ESP_PYTHON
    unset SOURCE_ZSH
    unset SOURCE_BASH
    unset WARNING_MSG
    unset uninstall
    unset is_idf_path_esp_idf
    unset is_script_dir_esp_idf

    unset __realpath
    unset __main
    unset __verbose
    unset __enable_autocomplete
    unset __cleanup
    unset __is_dir_esp_idf

    # Not unsetting IDF_PYTHON_ENV_PATH, it can be used by IDF build system
    # to check whether we are using a private Python environment

    return $1
}

__enable_autocomplete() {
    click_version="$(python -c 'import click; print(click.__version__.split(".")[0])')"
    if [ "${click_version}" -lt 8 ]
    then
        SOURCE_ZSH=source_zsh
        SOURCE_BASH=source_bash
    else
        SOURCE_ZSH=zsh_source
        SOURCE_BASH=bash_source
    fi
    if [ -n "${ZSH_VERSION-}" ]
    then
        autoload -Uz compinit && compinit -u
        eval "$(env _IDF.PY_COMPLETE=$SOURCE_ZSH)" || echo "WARNING: Failed to load shell autocompletion for zsh version: $ZSH_VERSION!"
    elif [ -n "${BASH_SOURCE-}" ]
    then
        WARNING_MSG="WARNING: Failed to load shell autocompletion for bash version: $BASH_VERSION!"
        # shellcheck disable=SC3028,SC3054,SC2086  # code block for 'bash' only
        [ ${BASH_VERSINFO[0]} -lt 4 ] && { echo "$WARNING_MSG"; return; }
        eval "$(env LANG=en _IDF.PY_COMPLETE=$SOURCE_BASH)" || echo "$WARNING_MSG"
    fi
}

__main && __enable_autocomplete
__cleanup $?

I can run my build command fine if I do:
docker exec -it test /opt/esp/ build
But the problem with this is that I have to wait for the entrypoint script to finish.

But finally, I believe I found what I was looking for.
If I do:
docker run -dit --name test -w /project espressif/idf
I have a bash running, but in the background (-d, detached).
Then I do:
docker attach --detach-keys="%" test
and I go back to my nice terminal, where build works.
With % I go back to my host, and vice versa. So I guess it is like having only one terminal running in the container, with the entrypoint loaded, and I attach to and detach from it.

I think I know what the misunderstanding is. The entrypoint is only for preparing the application to start. When you use docker exec, the entrypoint does not run; only your command does. It is like using different terminal windows.
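The difference can be rehearsed with plain shells, no Docker needed (FOO is just a placeholder name): a variable exported by one shell is inherited by its child processes, but a separately spawned shell, which is what docker exec starts, never sees it.

```shell
# A child of the exporting shell sees the variable...
sh -c 'export FOO=bar; sh -c "echo child sees: ${FOO:-unset}"'

# ...but an independently started shell (the `docker exec` case) does not.
sh -c 'echo sibling sees: ${FOO:-unset}'
```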

If you want environment variables to work when using docker exec you need to declare it in the Dockerfile like this:

ENV PATH="$PATH:/path/to/bin"
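A hedged sketch of what that could look like in a Dockerfile (the path is a placeholder, not taken from the actual espressif/idf image):

```Dockerfile
FROM espressif/idf
# Variables declared with ENV are part of the image configuration, so every
# process in the container sees them -- including ones started via `docker exec`.
ENV PATH="$PATH:/path/to/bin"
```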

When you used docker attach, you attached to the same “terminal” in which you originally started bash.

If you use a volume, you can just use docker run, save everything to the volume, and remove the container without running it in detached mode.

With a bind mount:

docker run -it --name test -w /project -v "$(pwd)/data:/data" espressif/idf build

or with a named volume:

docker run -it --name test -w /project -v "data:/data" espressif/idf build

Just make sure you save everything in /data.

Is this container for development? If it is not, I would just add the python build command to the Dockerfile.

Yes, this is a possible way, but as you can see, the env variables are much more complicated to set “by hand” in the Dockerfile.

docker run doesn’t work for me, as the mounted volumes are not synchronised, because I use a docker context to a remote Docker host.

For now, attach is the best way, because I have one “good” bash terminal that is set up and has all the env variables ready.
The only addition that would be nice to have is sending commands without entering the container’s terminal. As an example:
I create my container like this:
docker run -dit --name test -w /project espressif/idf
and use docker attach test to run my nice commands, like build, and everything works, but from inside the container’s terminal.
But is there any way to do the same without entering the container’s terminal?
Like this: docker attach test build, similarly to docker exec test build?

It is like @rimelek wrote: docker exec creates a new process (a sibling of the process created by the entrypoint script) inside the container. Since this new process is not a child process of the process created by the entrypoint script, no variables declared within the execution of the entrypoint script are available in the other process.

I am not sure if prefixing the variable assignment with export helps the situation: export only makes a variable available to child processes of the exporting shell, never to its parent or siblings. Since the process from the entrypoint and the process from exec do not have a parent/child relationship, I would be surprised if the exec process were able to see the variables.
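A quick way to convince yourself which direction export works in (a sketch with plain sh; FOO is a placeholder):

```shell
FOO=outer
sh -c 'export FOO=inner'   # the child exports, but that cannot reach us
echo "$FOO"                # prints "outer" -- the parent is unaffected
```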

You could still test with one of the variables whether exporting makes it visible in the exec process. After updating the entrypoint script, rebuilding the image, and starting a container, you can use docker exec ... env to check whether the variable is available.

If this doesn’t work, then I am afraid you already know your options:

  • either declare the variables as env variables when creating the container (`-e key1=val1 -e key2=val2 …`)
  • use the exec command you already figured out: docker exec -it test /opt/esp/ build
  • stick to docker attach

I meant only to change PATH, as you could see in the example in my previous post, so that at least the command would be found… If you need other variables too, you need to run the export script again. A simple script copied into the image would solve it. The script would run the command that you want, and export the variables before it, as is done in the entrypoint. Or you can do what @meyay suggested and reuse the entrypoint. That is the easiest, actually.

Many IDEs support file synchronization. If you don’t use any IDE, you can just use rsync.

You don’t have to rely on Docker.