Very newbie question: can’t understand why to use Docker

Hello, I’m new to Docker, and I followed the getting-started material to understand how it works and, so, what benefits I can achieve with Docker.
My initial idea was that Docker is a container with a full development environment, to avoid installing everything on a real machine. But reading the documentation, I realized I was wrong, since it looks like a Docker container is meant to run an application rather than to develop it. So, now the question. I like to develop in Java using the Spring framework. I also successfully containerized a simple application and ran it with Docker. But if I develop, it means I have the whole environment on the real machine anyway, and I can test the application simply by running it there. So what is the advantage of using Docker? Maybe distributing and testing the application on a computer with nothing installed?
Sorry if the question is too trivial, but I can’t get it.

Thank you

Not really. With Docker you can keep the code for your application on your local machine and everything it needs to run in a container. That way you can develop applications that require incompatible environments in parallel. Right now, for example, I’m editing a program in my local editor on WSL/Debian that will run on Ubuntu; for this I’ve started an Ubuntu container, installed all modules as defined in the specifications, and mounted the code folder into it.
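
Roughly, such a setup can look like this (the image, container name, and paths below are just examples, not the exact commands used above):

```bash
# Start a long-running Ubuntu container with the local code folder mounted into it
docker run -d --name ubuntu-dev -v "$PWD":/code -w /code ubuntu:22.04 sleep infinity

# Get a shell inside it to install the required modules and run the program
docker exec -it ubuntu-dev bash

# Files edited on the host appear immediately under /code inside the container
```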

A “computer”, whether physical or virtual, needs an “operating system”, whether Linux or Windows or what-have-you. Docker is not an operating system in this regard. Docker is a program which needs to run in an operating system. Inside the Docker environment you can run containers, which are themselves miniature operating-system environments. If you develop in Docker you still need an “operating system” for your “computer”. Now, I put those in quotes because you could run Docker on Windows on a laptop, run Docker on Raspbian on a Raspberry Pi, or run Docker on Linux in a virtual machine which itself runs on another virtual hosting platform. The concepts of what the operating system is and what the computer is are quite malleable, but you will need an operating system on a computer in order to run Docker.

Hope that helps visualize it!

For beginners it’s often easier to start by using Docker containers “as a virtual machine”. For example, from accetto you can find nice containers with an Xfce4 desktop, and also some with pre-installed development environments and/or applications. You can use such containers for development and testing too. The image hierarchy of one set of images is here. Take a look.
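
For example, one of accetto’s Ubuntu/Xfce images can be started roughly like this (a sketch only; I’m assuming the usual image name and noVNC port, so check the image’s documentation for the exact values):

```bash
# Run a desktop container and expose the noVNC web port (6901 is assumed here)
docker run -d --name webdesk -p 6901:6901 accetto/ubuntu-vnc-xfce

# Then open http://localhost:6901 in a browser to get the Xfce desktop
```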

It may be easier at the beginning, but it gives a wrong idea of what a container is. There are many questions in this forum from users who run into trouble because they think of containers as virtual machines.

tekki is absolutely right!

Docker is often called “application virtualization”. While machine virtualization aims to emulate a whole machine, Docker aims for process isolation on the host machine’s kernel.

A container is a runtime instance of an image, which itself is a point-in-time snapshot of an OS base image, an application, its dependencies, and usually entrypoint scripts that take care of modifying configuration files and starting the application.

The OS base images are merely a loose set of commands and libraries that make up the core of the OS. Containers are not booted, and as such no system processes are started: whatever the entrypoint script starts is executed directly.
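
To make that concrete for the Spring example from the original post, an image is roughly built like this (the base image, tag, and jar path are example assumptions, not a recommendation):

```bash
# Write a minimal Dockerfile: an OS/runtime base layer, the application, and its entrypoint
cat > Dockerfile <<'EOF'
# base layer: a stripped-down OS plus the Java runtime (example tag)
FROM eclipse-temurin:17-jre
# the application and its dependencies (a fat jar in this example)
COPY target/app.jar /app.jar
# no boot, no system services: this single process is what the container runs
ENTRYPOINT ["java", "-jar", "/app.jar"]
EOF

docker build -t my-spring-app .   # the image is the point-in-time snapshot
docker run --rm my-spring-app     # the container is a running instance of it
```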

The beauty of Docker is that it requires fewer resources than a VM:
– Docker runs containers as isolated processes on the host’s kernel vs. each VM having its own kernel
– system calls (access to network, storage, output to the screen, keyboard input, and much more) are executed directly on the host kernel vs. each VM having to go through a layer that orchestrates and translates the system calls from the VM’s kernel to the host’s kernel
– CPU is only used by the processes you specifically start vs. each VM running all of its OS-specific services and processes
– size of images: while a typical image is somewhere around 100-500 MB, a VM will take up at least a couple of GB.

I don’t agree with the “wrong idea” part. Docker is, in the first place, a tool. And if a tool fits the task I have to solve nicely, then it’s also correct to use it. I agree that it’s not the mainstream use, but for example containers from accetto (and others) help me daily with the following (far from a complete list):

Safe browsing on the Internet. If I do research, I don’t know what kind of websites I’ll visit. I also care about privacy and I don’t like well-known companies tracking me. With Docker I can create lightweight containers in seconds and then destroy them. The only limitation is the missing sound support, but I don’t need it for this kind of work anyhow.

Evaluation of tools. I don’t want to install every unknown tool I want to evaluate on my workstation. Docker containers offer a nice isolated environment and I can keep my workstation lean and healthy much longer.

Proof-of-concept and testing. I can test new ideas very quickly. There’s no need to re-configure my working environment back and forth each time.

Pre-installed and pre-configured tools. I have a growing set of containers that I use as tools from my main environment. Not to mention that I can use Linux tools on my Windows workstation.

Replacing VirtualBox. Docker allowed me to get rid of VirtualBox, which I’ve used for years. I love the tool, but I don’t really need it currently.

It’s true that Docker’s main use is not as a virtual machine but for running services. However, if that way of using it solves so many daily tasks so nicely, it would be a real shame not to do it.

Otherwise I agree with @tekki that incorrect usage and incorrect understanding cause problems. It’s just the classic question of the right tool for the right task, and no tool fits everything. :slight_smile:

This isn’t the case when using Docker containers as a virtual machine. The image sizes in Docker’s listing are not the real sizes, because many layers are shared. The real footprint on the hard drive is usually much smaller.
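
You can check this on any Docker host with standard CLI commands:

```bash
docker image ls    # the SIZE column counts shared layers again for every image
docker system df   # reports the actual, de-duplicated disk usage of images, containers and volumes
```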

Otherwise I agree with @meyay, that there are important differences between Docker containers and “real” virtual machines like, for example, VirtualBox. It’s important to know them and to use the tools correctly.

I found this recently and retrospectively would have enjoyed reading it much sooner; it relates the concepts to Linux kernels, distros, etc. Perhaps it might help? It inspired some fun debate on Reddit about what a virtual machine conceptually is.

Link to Author’s Blog Article: A Container is NOT a Virtual Machine

Reddit Post: A Container is NOT a Virtual Machine

I think the Docker community does its newbies a disservice by telling them, even by accident, that containers and VMs are interchangeable. I agree that containers and VMs are tools to accomplish a task, but no one explains a crescent wrench as “like a hammer”, although you most certainly could start out pounding nails with one. I extend this idea to the overwhelming “containers are a replacement for virtual machines” mindset, which was quite pervasive when I first started reading up on Docker. It should be explained, especially to new users, that these are different tools and each has its place and an optimal usage. So is it wrong per se to tap in a loose floorboard with a big ol’ wrench? My opinion: perhaps not if you know what you’re doing, or don’t know whom to ask to find the better option, but it’s also not something you’d ever want to knowingly teach an apprentice.

With that (personal opinion) in mind, I believe it’s more valuable to the initiates to be told the differences between the tools, so they can make better decisions on which tool will meet their needs.

I agree with all good points made in this nice conversation. I only want to make a few more remarks. :slight_smile:

In real life choices are not always yours and they are also not always about pure technology.

For example, VirtualBox and Hyper-V are not compatible and if you have to use Windows, you do not really have unlimited options to choose “the right” technology. Especially if you’re not a big fan of Hyper-V.

When I mentioned using Docker containers as a virtual machine, I meant mostly interactive usage of containers that provide a graphical UI (desktop). That is very similar to using usual virtual machines.

Good points made in the blog post mentioned above are isolation and resource sharing. That is what makes both technologies so useful and also similar.

Otherwise, it’s really important to understand the differences. There may be few or many of them when creating and using containers or virtual machines, depending on the tasks you have to solve. In any case, however, learning Docker is worth it. :slight_smile:

raises hand I also am not a big fan of Hyper-V. I almost exclusively recommend VMware for anything virtualization-related. ESXi is legitimately free for a single-server install, and Player is free for non-commercial use if you absolutely have to use Windows as your virtual hosting platform (although you can also just go get some cheap desktop box and deploy ESXi on it). And now they provide the PhotonOS VM, which is effectively a 155 MB Docker container download.

I will say that I’ve not yet had a great deal of need for a Docker container with a GUI; almost every Docker app I’ve deployed is managed via a web GUI (Pi-hole, Redmine, WordPress, MediaWiki, MySQL & Adminer). I suppose I can see how using containers with separate GUIs could feel a bit like VMs.

And I agree with your note about environmental factors and design choices not being yours. Nothing is a fit everywhere. It is hard to justify to a factory churning out product on an application stack that runs only on Windows that they should pay to redevelop everything from scratch on an open-source platform. Even more so when the Windows-based solution is supported by a Fortune 20 company instead of a programmer with good intentions.

Thanks a lot for the hint about ESXi, @cincitech. I have to check it out.

However, coming back to the original question from @elbui3. :slight_smile:

Yes, you can definitely use Docker for development as you originally intended.

You can keep your source files and resources outside the container by mounting external directories into the container. This allows you to keep your development environment inside the container, where you can easily update or change it, while keeping your resources untouched.
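
Since the original question mentions Java and Spring, a sketch with Maven could look like this (the image tag and the command are assumptions; use whatever matches your project):

```bash
# Sources stay on the host; the JDK and Maven live only in the container
docker run --rm -it \
  -v "$PWD":/workspace \
  -w /workspace \
  maven:3-eclipse-temurin-17 \
  mvn test
# The build reads and writes the mounted folder, so the results stay on the host,
# while nothing Java-related has to be installed on the workstation itself.
```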

You can also keep everything inside a container. Then you copy the development results out of the container at milestones and destroy the container. This is probably not used as often as the previous approach, but I find it pretty handy for quick prototypes and proofs of concept. This way you can also easily test several versions of the same development tool, for example.
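
The copy-out step is a single command (the container name and the path are hypothetical):

```bash
# Assuming a container named "devbox" where all the work happened:
docker cp devbox:/root/project ./project-milestone-1   # copy the results to the host
docker rm -f devbox                                    # then destroy the container
```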

Using the second approach you can also avoid some pitfalls with file permissions on external volumes, especially if you’re on Windows.

You can also keep your development environment in one container and the resources in another one (or several). This is most often used if you need databases or proxies, for example.
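
One way to wire such containers together is a user-defined Docker network (the image names and the password below are just example values):

```bash
docker network create devnet

# A database container for the project
docker run -d --name db --network devnet -e POSTGRES_PASSWORD=secret postgres:16

# The development container joins the same network and can reach the database
# simply under the hostname "db"
docker run --rm -it --network devnet -v "$PWD":/workspace -w /workspace \
  maven:3-eclipse-temurin-17 bash
```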

And finally, you can also use any combination of those approaches. This is probably the most frequent case.

Anyhow, you’ll not regret learning Docker, I believe. :slight_smile:

Honestly, I’ve been having the exact same question as the OP and was wondering about many other things.
Most of them were answered here, so I’m truly thankful for your answers.
I was also wondering if I may ask you (the more advanced users) further questions in case I have any? Thanks