Mount new volume on existing running container

If I have an existing container and I have created a new volume with this command:

docker volume create --driver local --opt type=none --opt device=C:\test-server --opt o=bind new-volumen-name

How can I mount the new volume in the docker container without rebuilding it?

The idea of mounting it is to get access to this directory inside the container:

/var/www/html/directory_project

I did a little research, and although docker container update exists, it does not allow what I described above.

I have tried to use the following command:

docker run -it -v new-volumen-name:/var/www/html/directory_project --name LAMP-Container

Output:

"docker run" requires at least 1 argument.
See 'docker run --help'.

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Create and run a new container from an image

You cannot rebuild a container. You can rebuild an image, and you can remove and (re-)create a container.

Containers are supposed to be disposable by design and hold no state in the container filesystem. That's what volumes are for. You only depend on the state of a container if you are doing something wrong and don't use things the way they are supposed to be used.

Your 2nd post indicates you lack an understanding of docker basics. The command in your second post lacks an image (which must have been present in the container you refer to in your first post, as you clearly managed to create a container).
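To illustrate the error above: docker run requires an image as its final argument. A sketch of a complete command, assuming (hypothetically) that the original container was based on the php:7.4-apache image:

```shell
# "php:7.4-apache" is an assumption here; substitute the image your
# original container was created from (shown in the IMAGE column of
# `docker ps -a`).
docker run -it \
  -v new-volumen-name:/var/www/html/directory_project \
  --name LAMP-Container \
  php:7.4-apache
```

Note that this creates a *new* container; it does not attach the volume to the existing one.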

Recommended links to learn the basics and concepts:

My intention is not to recreate anything… it is to use the volumes as you indicate but I find myself unable to do so.

What good are volumes, if I can't mount them in a container after it has been created?

If volumes are supposed to be the means to preserve data and configuration, and I can't mount them when and how I want to make the container work the way I want… then either this doesn't work as expected, or there is something I haven't been told or haven't found.

Containers are made to be easily removed and reconstructed

What prevents you from removing the running container?

I want to be able to add new directories and projects to the same container, to save resources on my local machine… via the command line…

Under normal conditions this is achieved with configuration files like vhost… but this is not the problem… I have already managed to maintain the vhost configuration…

The real problem is preserving the directories where the projects that appear over time are located… so… I imagine/assume that every time I remove the container and re-create it, I will lose the volumes attached by command… since these do not exist in any container configuration file.

If you have files you'd like to persist and have not mounted a volume to them, you can use docker cp containername:/file/path /host/file/path to copy them to the host machine before removing the container, then set up the new container to mount that data, so it does not disappear with the container.
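A sketch of that workflow, using the container and volume names from the thread; the backup path and the image name ("php:7.4-apache") are assumptions:

```shell
# 1. Copy the project directory out of the existing container
docker cp LAMP-Container:/var/www/html/directory_project ./backup_directory_project

# 2. Remove the old container (the image it was created from remains)
docker rm -f LAMP-Container

# 3. Re-create the container, this time with the volume mounted
docker run -d --name LAMP-Container \
  -v new-volumen-name:/var/www/html/directory_project \
  php:7.4-apache
```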

As for re-running the same container configuration, you should use Docker Compose, which would allow you to remove and recreate a container in a snap; it is basically docker run in YAML format.
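A minimal sketch of such a compose file, reusing the names from the thread (the service name "lamp" and the image are assumptions); with this in place, tearing down and recreating the container is one command each way:

```shell
# Write a minimal compose file for the setup discussed above
cat > compose.yaml <<'EOF'
services:
  lamp:
    image: php:7.4-apache
    container_name: LAMP-Container
    volumes:
      - new-volumen-name:/var/www/html/directory_project
volumes:
  new-volumen-name:
    external: true    # the volume was already created with `docker volume create`
EOF

# Then, whenever the configuration changes:
# docker compose up -d    # (re)create the container
# docker compose down     # remove it; the external volume survives
```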

Initially, everything comes from a set of Docker and YAML files.

The other idea that occurs to me (and I'm not sure it works) is to convert my current container into an image, use it as a base, and add the new volume each time; the problem would then be getting it to recognize all the volumes previously used. But I have no idea how this could be done… using commands.

As for adding new directories and projects, consider that containers were made to separate processes into their own environments; each service should have its own container, and containers should be simple.

For example, when building a full-stack project, you’d usually have a container for the frontend and a container for the backend, and maybe a development container

This is possible, but it is not what Docker is made for. It will work, but it is bad practice and will cause many issues in the future, as well as requiring you to commit the container every time. In short, not a good idea at all.

Within the image, you want the packages, libraries, and software required to run your service.
Within the volumes and bind mounts, you keep the persistent data.


No, I really have a single container just for PHP 7.4… but I have 3 projects running on this version of PHP… for me, having 3 local PHP installations does not make sense, even less so when my modest hardware means I need to save resources…

but that’s exactly what I’m trying to do…

Within the image, you want the packages, libraries, and software required to run your service

1 PHP7.4 container service.

Within the volumes and bind mounts you keep the persistent data

3 projects, with the possibility of adding more in real time…

Use a VM then, Docker is not made for that


🥲 🥲 🥲

Too bad, I had my hopes set on Docker… a VM consumes the same resources or more XD.

Docker does not consume as many resources as a VM, and should satisfy your needs, should you work with it correctly

Docker does not consume as many resources as a VM, and should satisfy your needs, should you work with it correctly

That's what I'm trying to do.

1 PHP7.4 container service.

1, 2, 3 projects, with the possibility of adding more in real time…

What you want is a miracle container that does everything in a single container.
What you actually need is one base image, plus one image per project, and one container per project image.
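To illustrate, a sketch of that layout with hypothetical names: one shared PHP image and one container per project, each mounting only its own project directory (Windows-style paths, as in the thread):

```shell
# One shared image, one container per project.
# Names, ports, and paths are all examples, not real configuration.
docker run -d --name project-a -p 8081:80 \
  -v C:\projects\project-a:/var/www/html php:7.4-apache

docker run -d --name project-b -p 8082:80 \
  -v C:\projects\project-b:/var/www/html php:7.4-apache
```

Adding a new project is then just one more docker run; the existing containers are never touched.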

Since you don't want to use Docker the way it is supposed to be used, you struggle with problems that wouldn't matter if it were used properly.

There is indeed one modified Docker engine that I am aware of that actually allows adding/removing volumes and environment variables on an existing container: Synology has a customized Docker engine in their Container Manager package (which of course only runs on Synology NAS).

I see following options:

  • Buy a Synology and use the Container Manager to get the feature you want
  • Run your stuff in a VM without docker
  • Start to use docker how it is supposed to be used

You should think twice about whether bending Docker, an industry standard that has existed for 11 years, to your will is really a better approach than actually learning how to use it properly.

Update: this post didn’t age well: Synology removed the ability to modify existing containers in Container Manager in 24.0.2-1535.


I am doing it correctly… in theory, I should use commit (I found this yesterday) and then attach the new volume every time I need to include a new project… the projects are new volumes, not containers… since I treat each as a development project, which in the end, in theory, should be preserved as data…

I don't understand why they want to twist things and pigeonhole them into their own concept… when the tool, in the end, is a toolbox that allows us to satisfy the needs of different environments…

Asking yourself "is it possible to do this with this tool?" and searching among its possibilities and features is not a bad thing…

What I do see as wrong is saying left and right… "don't do that", because the person who created it, or because you personally, consider that it should not be used like that…

You are right, I forget option 4: ignore every recommendation and good practice, and do whatever you want. No one can force you to use it like it is supposed to be used. I tried, you resist. Fine by me :slight_smile:

Good luck!

The reason you do not commit containers is that your images and containers can be lost at any time. Write a proper Dockerfile specifying your image's needs and build the image from it.

If you find you need new dependencies, rebuild the image and remake the container
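A minimal sketch of that approach; the PHP extensions installed here are placeholders for whatever your projects actually need:

```shell
# Declare the image in a Dockerfile instead of committing containers
cat > Dockerfile <<'EOF'
FROM php:7.4-apache
# Example dependencies; replace with what your projects require
RUN docker-php-ext-install mysqli pdo_mysql
EOF

# Then (re)build and (re)create reproducibly whenever needs change:
# docker build -t my-php74 .
# docker rm -f LAMP-Container
# docker run -d --name LAMP-Container \
#   -v new-volumen-name:/var/www/html/directory_project my-php74
```

Because the Dockerfile is just a text file, nothing is lost if the image or container is removed; you can always rebuild.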

The volumes (though for projects, bind mounts are more appropriate) are indeed how you preserve the projects and mount only specific ones into the container, as I have explained.