I am building a Docker image for an ML project that includes a Gradio app, and I would like to spin up the Gradio page inside the container and use it from a browser. I run the command that starts the Gradio page in my Dockerfile, but when I build, the process just spins with no output.

Here's my Dockerfile:
FROM ubuntu:latest
ARG PYTHON_VERSION=3.10.13
# Use bash with login shell
SHELL ["/bin/bash", "--login", "-c"]
RUN apt-get update && apt-get -y upgrade && apt-get -y dist-upgrade
RUN apt-get install -y git
# Install base utilities
RUN apt-get install -y build-essential \
    && apt-get install -y wget \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
RUN apt-get update && \
    apt-get -y install sudo
RUN apt install -y software-properties-common
RUN add-apt-repository ppa:ubuntu-toolchain-r/test
RUN apt-get update && \
    apt-get install -y gcc
RUN apt-get -y install libglib2.0-0
# Install miniconda
ENV CONDA_DIR=/opt/conda
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
    /bin/bash ~/miniconda.sh -b -p /opt/conda
# Put conda in path so we can use conda activate
ENV PATH=$CONDA_DIR/bin:$PATH
# Initialize conda for bash shell and create .bashrc if it doesn't exist
RUN conda init bash && \
    echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
    echo "conda activate base" >> ~/.bashrc
# Accept conda terms of service
RUN conda tos accept
# Install huggingface-cli in base environment (needed for downloading checkpoints)
RUN conda run -n base pip install --upgrade pip
RUN conda run -n base pip install -U "huggingface_hub[cli]"
RUN conda run -n base pip install gradio
# RUN conda install -c nvidia cuda-toolkit -y
WORKDIR /app
RUN git clone https://github.com/bytedance/LatentSync.git
WORKDIR "LatentSync"
# Run the setup script with conda available
# Note: We need to source conda profile and handle the environment creation properly
RUN source /opt/conda/etc/profile.d/conda.sh && \
    conda create -y -n latentsync python=3.10.13 && \
    conda activate latentsync && \
    conda install -y -c conda-forge ffmpeg && \
    conda install -c conda-forge libstdcxx-ng && \
    conda install -c nvidia cuda-toolkit -y && \
    conda run -n base pip install omegaconf && \
    conda run -n base pip install torch torchvision && \
    conda run -n base pip install diffusers && \
    conda run -n base pip install einops && \
    conda run -n base pip install matplotlib && \
    conda run -n base pip install imageio && \
    conda run -n base pip install opencv-python && \
    conda run -n base pip install decord && \
    conda run -n base pip install kornia && \
    conda run -n base pip install insightface && \
    conda run -n base pip install onnxruntime && \
    conda run -n base pip install onnxruntime-gpu && \
    conda run -n base pip install ffmpeg && \
    conda run -n base pip install transformers && \
    conda run -n base pip install soundfile && \
    conda run -n base pip install accelerate && \
    conda run -n base pip install deepcache && \
    pip install -r requirements.txt && \
    apt-get update && apt-get install -y libgl1 && \
    huggingface-cli download ByteDance/LatentSync-1.6 whisper/tiny.pt --local-dir checkpoints && \
    huggingface-cli download ByteDance/LatentSync-1.6 latentsync_unet.pt --local-dir checkpoints
# Set the default environment to latentsync for subsequent commands
ENV CONDA_DEFAULT_ENV=latentsync
ENV BASH_ENV="/opt/conda/etc/profile.d/conda.sh"
EXPOSE 7860
# Make sure subsequent commands use the latentsync environment
RUN echo "conda activate latentsync" >> ~/.bashrc
# Set up the entrypoint to automatically activate the environment
ENTRYPOINT ["/bin/bash", "--login", "-c", "conda activate latentsync && exec \"$@\"", "--"]
CMD ["/bin/bash"]
RUN python gradio_app.py
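For completeness, this is how I build the image (the latentsync tag matches the docker run command later in the post):

```shell
# Build from the directory containing the Dockerfile; the build hangs at the
# final step, RUN python gradio_app.py, with no output.
docker build -t latentsync .
```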
The build just sits at that last step (RUN python gradio_app.py). When I run through the same steps on WSL (i.e. no Docker container), I get some output:
INFO:httpx:HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
* Running on local URL: http://127.0.0.1:7860
INFO:httpx:HTTP Request: GET http://127.0.0.1:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
* Running on public URL: https://c1cd70f7568fb4478b.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)
But I see nothing like this from my Docker build. Why?
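For reference, I have not modified LatentSync's gradio_app.py. I don't have the exact source in front of me, but the end of it is the stock Gradio launch pattern, roughly:

```python
# Paraphrased sketch of the end of gradio_app.py, not the exact source.
import gradio as gr

with gr.Blocks() as demo:
    ...  # LatentSync UI components omitted

# The public gradio.live URL in the WSL output implies share=True.
# No server_name is passed, so Gradio binds to 127.0.0.1 by default.
demo.launch(share=True)
```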
ETA: I noticed something interesting. When I run docker run --network host --gpus all latentsync, I get this output:
...
INFO:httpx:HTTP Request: GET http://127.0.0.1:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://cdn-media.huggingface.co/frpc-gradio-0.3/frpc_linux_amd64 "HTTP/1.1 200 OK"
But the following only gets spit out when I Ctrl-C out of the process:
* Running on local URL: http://127.0.0.1:7860
* Running on public URL: https://58f46d47ffa6142aa4.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)
Keyboard interruption in main thread... closing server.
Killing tunnel 127.0.0.1:7860 <> https://58f46d47ffa6142aa4.gradio.live
Is something blocking the server from launching?