Hello, I am not sure I am even on the right track. My goal is to deploy and run a Python/FastAPI endpoint to serve a React front-end application. The React app runs on nginx. My search eventually led me to Docker images and containers. So my first question is: can I deploy a Docker container to nginx, or do I have the whole concept of how Docker works wrong? Also, I followed this tutorial, FastAPI in Containers - Docker - FastAPI, and when I issued the `docker run` command the process died immediately. The tutorial is for Linux, but I am on a Windows 10 laptop. Is that an issue? Also, my dev laptop is Win 10, but the application will be running on Linux. So I have some environment and basic concept issues. Would someone point me in the right direction?
What do you mean by deploying a container "to" nginx? Nginx is software which can run inside a container. You still need to develop your application to work with nginx, but nginx itself can run inside a Docker container. The Python app can run in a container based on a Python image, and the React app can run in another container with nginx.
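If it helps to picture that two-container setup, it can be sketched with Docker Compose. This is only a rough sketch with assumed names: `backend` and `frontend` are hypothetical service names, and the build context, ports, and paths are placeholders you would replace with your own.

```yaml
services:
  backend:             # the FastAPI app, served by uvicorn
    build: ./backend   # assumed folder containing your Dockerfile
    ports:
      - "8000:8000"
  frontend:            # the React build, served by nginx
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      # assumed path to your React production build output
      - ./frontend/build:/usr/share/nginx/html:ro
```

Each service becomes its own container; neither one runs "inside" the other.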
Well, we don't know until you share at least an error message, but yes, it could be an issue. I would not recommend starting your first Linux containers on Windows. You will run into multiple differences and issues before you can learn the basics. Docker Desktop is a good tool to play with Docker, or to use after you have learned how open source Docker works, but as a beginner I would not follow a tutorial made for Linux on Windows.
Hi Rimelek,
I do seem to misunderstand how Docker works. I conceptualized it as a Docker container being run inside nginx, but what you say is the opposite: nginx runs inside Docker as part of the image that also contains the application, yes?
There is no error on the command line. Is there a log file?
Both tutorials I went through created the container, but when I do a `docker run` it exits immediately. I need it to stay running because it has an HTTP listener on port 8000. FYI, I changed my Docker Desktop to run Windows containers and rebuilt. I think this is right.
When I run uvicorn directly it works fine
uvicorn app.main:app --reload
But with Docker there is a problem. Do I need to change some of the parameters?
docker run -d --name picontainer -p 80:80 dockerpoc
Here is my current Docker file:
FROM python:3.8.8
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
CMD ["uvicorn", "app.main:app", "--proxy-headers", "--host", "0.0.0.0", "--port",
The end of your Dockerfile is missing, but please use the </> button to share code. Otherwise your code could be altered by the markdown filter.
You have to understand what a container is. We say the process "runs inside" a container because it is easier to talk about it that way. A container is actually just a way to isolate a process from its environment. Your nginx server still runs on the host. You can see it from the host (Docker Desktop is a little different), but nginx can't see anything except its child processes or other processes that you run in the same isolated environment. So you just tell the operating system what you want nginx to see. Sometimes we also say it is an alternative to virtual machines; it is a different level of isolation.
I recommend you start from the beginning: Introduction to Containers
But the best way to understand it is to try very basic commands. You can find examples in the tutorial I linked above, and I also have some examples here: Welcome to Learn Docker's documentation! - Learn Docker documentation
I also have some examples there using LXD. I found it useful to learn about LXD because you can run LXC containers and also virtual machines with it, so you can see the differences. In the beginning, Docker used LXC to run containers.
The command line's output is actually the "log file", but when you run the container in detached mode using `-d`, you won't see it. You can use `docker logs containername` to see it. You can find this in every noteworthy tutorial which explains the basics and not just a specific app.
Usually a process inside a container sends its error messages and other outputs to the standard error and standard output streams (`/dev/stderr` and `/dev/stdout`), which can be read from the host. When you run `docker logs containername`, you get the content of a file containing a list of JSON objects saved from the container's standard streams.
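For example, a few common variants (assuming a Docker daemon is running and your container is named `picontainer`):

```shell
# Dump everything the container has logged so far
docker logs picontainer

# Stream new output as it arrives, like tail -f
docker logs --follow picontainer

# Show only the last 20 lines
docker logs --tail 20 picontainer
```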
A container runs as long as the process in it runs in the foreground. So you need to run the web server in the foreground, not the background, because otherwise there is nothing to keep the container alive. If the process fails, it stops and the container stops too.
It could be confusing, but yes: you need to run the application inside the container in the foreground, while you can run the container itself in the background (detached mode).
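You can see this rule with two throwaway containers (assuming Docker is installed; `dies-fast` and `stays-up` are just made-up names):

```shell
# Exits immediately: echo finishes, so nothing is left running in the foreground
docker run --name dies-fast alpine echo "hello"

# Keeps running: the default nginx command stays in the foreground inside the
# container, even though -d puts the container itself in the background
docker run -d --name stays-up nginx
```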
Try to run bash in the container first and run uvicorn manually from that bash:
docker run -it --name picontainer -p 80:80 dockerpoc bash
Note: I replaced `-d` with `-it` so the container runs in attached mode (foreground) and has an interactive terminal, so bash can keep the container alive. Then run uvicorn as you would run it from the host.
Since you don't completely understand how Docker works yet, I should mention that you can list the running containers by running:
docker container ls
And include the stopped and failed containers with:
docker container ls --all
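A typical debugging loop for a container that exits immediately might look like this (assuming your container is named `picontainer`):

```shell
docker container ls        # running containers only
docker container ls --all  # also shows exited containers and their exit status
docker logs picontainer    # read what the process printed before it died
docker rm picontainer      # remove it so the name and ports are free again
```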
For example, if you ran a container in detached mode but thought it was stopped, it is possible that it was still running, or at least still existed, so when you tried again it failed to start because the port was already in use. But in that case you would have had an error message.
I will go over the learning links you provided, thank you. I also found a cheat sheet of Docker commands. I did find that error with the missing ]. Once I fixed it, the process stayed up. But I still can't access the test application at https://127.0.0.0:8000/docs. Do I need to do a binding between the ports on the Docker container and the application?
I just found an issue: for localhost the IP is 127.0.0.1, not 127.0.0.0. I can access it now. Now I know it works on Windows; I assume I need to repackage it to run on Linux. I also added a port binding: `docker run --name picontainer -p 8000:8000 -p 443:443 dockerpoc`
Yes. Windows Docker images don't work on Linux.
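The usual route is to switch Docker Desktop back to "Linux containers" and rebuild. The same Dockerfile should then produce a Linux image, because `python:3.8.8` is a Linux base image. A sketch, assuming your uvicorn command listens on port 8000 inside the container:

```shell
# After switching Docker Desktop back to Linux containers:
docker build -t dockerpoc .
docker run -d --name picontainer -p 8000:8000 dockerpoc
```

The resulting image can then be pushed to a registry and pulled on your Linux server.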