I can create a container, but the container will not start

When I first installed Docker, I was able to create a container, start it, and work inside it.

But, I was not able to get networking to work.

I have been experimenting with numerous ways to create containers, and I can now create them without any errors.

However, when I attempt to start the containers, I am getting errors.

In fact, when I now go back and recreate one of the original containers that worked in the beginning, I can no longer start it either.

Here are some of the create commands that I’ve tried:

docker create centos:latest doc_ice bash # Working

docker create centos:latest --name CCDA /bin/bash # Working

docker create centos:latest --mount /Docker_Apps/ICE:/Docker_Apps/ICE /bin/bash # Working

docker create centos:latest --mount /Docker_Apps/ICE:/Docker_Apps/ICE /Docker_Apps/ICE/opt/tomcat/bin/startup.sh # Working

NOTE that all of the above commands were initially working.

At this time, even after recreating the above containers, when I attempt to start one of them, I get the following error message, or something similar to it when I switch between "--mount" and "--name":

Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"--name\": executable file not found in $PATH"
Error: failed to start containers: a91873eb27ca
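
In hindsight, that error means Docker is trying to execute "--name" as the container's command: everything after the image name is treated as the command to run inside the container, so all options have to come before the image. A corrected form of the second command above would be roughly:

docker create --name CCDA centos:latest /bin/bash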

I am NOT using a Dockerfile at this point.

I am creating the containers directly from the command line, and starting them manually.

Remember that this was working when I first installed Docker.

I have removed and reinstalled Docker several times already, but to no avail.

Going forward, I want to enable port mapping in each of my containers, and based on the research that I’ve done, each of the following commands should allow me to do this.

docker create centos:latest \
--mount type=bind,source=/Docker/BASE,/Docker/BASE \
-p 10.10.10.10:8000:80 -p 10.10.10.10:4400:443 -p 10.10.10.10:2200:22 \
-p 10.10.10.10:8800:8080 \
/bin/bash

docker create centos:7.6.1810 \
--mount type=bind,source=/Docker/BASE,/Docker/BASE \
-p 10.10.10.10:8000:80 -p 10.10.10.10:4400:443 -p 10.10.10.10:2200:22 \
-p 10.10.10.10:8800:8080 \
/bin/bash

But, since I cannot start any of my containers currently, I don't know where to go from here.

I’ve looked at numerous different online forums, but so far, none have helped.

If anyone can help me to get this working, I would much appreciate it.

You can use the create command, but normally when you want to run something in Docker you use the run command, which does some magic for you. Basically, the run command is create + start.

Using the command docker run --rm -it centos:centos7 bash should work to create a new CentOS container.
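
For comparison, the two-step equivalent (the container name mytest is just an example) would be roughly:

docker create --name mytest -it centos:centos7 bash
docker start -ai mytest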

And note that a docker container is not a virtual machine. :wink:

Thanks for responding to my question.

But, in our environment, I want to use Docker like a VM.

I want to run multiple Docker containers in parallel, and I want them to be available all the time.

I will have several set up as parallel development environments, or development machines.

To begin with, I need to build the container, install an application on it, and populate the application. Then, when I get it where I want it, I will check it into a GitHub repository, where our developers can check out the current code, develop against it, and then check in their changes.

Once they check in and merge their changes, they can remove their working copy.

But, I will never use the --rm option myself…

I would prefer to use the "prune" command to remove any obsolete or old containers.

Now, back to the issue at hand.

My command syntax was incorrect previously.

I corrected the command syntax.

I can now create a container, and I can start it without any command-line errors.

But, when I attempt to connect to the container, I do get an error stating that the container is not running.

At least I now get a successful response when I create the new container, and when I start the container.

Also note that I was able to create, start, and interactively log in to my original containers.

I even installed one of our complete products in the original containers, as a proof of concept.

I am monitoring /var/log/messages, but I do not see anything that would indicate an issue.
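
A check that would narrow this down more than /var/log/messages (the container name here is illustrative): docker ps -a shows each container's exit status and docker logs shows its output. Note that a container whose command is a bare /bin/bash exits immediately when it is started without a TTY attached, which would explain the "container is not running" error:

docker ps -a --filter name=CCDA
docker logs CCDA
docker inspect --format '{{.State.ExitCode}}' CCDA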

I understand what you are trying to do. I won’t keep arguing with you.

Well… a little bit…

But you are, imho, doing it the wrong way. You should, with the CentOS image as a base, create your own Dockerfile that installs what you want to run, and then build your own image from there. If you have multiple applications that you run, say Postgres, Apache HTTPD or any other, there is most likely already an image you can use to create your own stack.
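
A minimal sketch of that approach (the application files, paths, and package names are placeholders):

# Dockerfile
FROM centos:7
RUN yum install -y java-1.8.0-openjdk && yum clean all
COPY app/ /opt/app/
EXPOSE 8080
CMD ["/opt/app/bin/startup.sh"]

Then you build the image once and run as many containers from it as you need:

docker build -t myorg/myapp:1.0 .
docker run -d -p 8080:8080 myorg/myapp:1.0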

I say it again… A docker container is not a VM. Have fun.


Note that I enabled debug logging.

Here is the content of my /etc/docker/daemon.json file: (Assuming that 10.10.10.10 is the host IP address.)

{
  "log-driver": "syslog",
  "debug": true,
  "log-opts": {
    "syslog-address": "udp://10.10.10.10:514"
  }
}
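
For the record, changes to daemon.json only take effect after the daemon is restarted; on a systemd-based host that is:

sudo systemctl restart docker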

Thanks for your response.

But, that brings up several questions:

If I use a base image, then effectively I am starting from scratch each time I create a new image.

My goal, our end goal, is to eventually place our code in a private Docker repository, with the latest fully tested application code.

Then, when we want to deploy to a customer site, we simply pull down the latest code to our customer server(s), where that code will run 24x7, until the next update.

This will make deploying our code to a customer site, so much more efficient.

In our environments, we want to run the code 24x7.

Most of our code resides in mission-critical environments that must be up 24x7, with minimal downtime between upgrades.

If I understand Docker correctly, I know that we can “run” an existing container, and that may be the way we need to go, once we have a stable container in place.

Or, that may be where we need to go, once we publish our own containers.

But, to get there, I still need to build the basic container.

Here is my basic “create” command:

docker create \
--publish=10.10.10.10:8000:80 --publish=10.10.10.10:4400:443 \
--publish=10.10.10.10:2200:22 --publish=10.10.10.10:8800:8080 \
--mount type=bind,source=/Docker/BASE,target=/Docker/BASE \
--hostname DockerBase \
centos:latest /bin/bash

How would you change this, if you wanted to “run” the CentOS image locally?
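
My guess is that it would be roughly the same command with run, adding -dit so that bash keeps a TTY and the container stays up in the background:

docker run -dit \
--publish=10.10.10.10:8000:80 --publish=10.10.10.10:4400:443 \
--publish=10.10.10.10:2200:22 --publish=10.10.10.10:8800:8080 \
--mount type=bind,source=/Docker/BASE,target=/Docker/BASE \
--hostname DockerBase \
centos:latest /bin/bash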

I would need to create our own repository for revision control, that is only available to our team members.

And, I would need to check in and merge any updates performed.

I have not learned how to do that yet.

But, suffice it to say that I can NOT lose any of the work that I put into the image when I log out.
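
From what I've read, a stopped container keeps its filesystem between starts, and docker commit can snapshot a container's current state into a new image (the names here are illustrative):

docker commit CCDA myorg/ccda-base:snapshot-1
docker images myorg/ccda-base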

And that is exactly what you use the Dockerfile to do. You create a base image with the installed applications, store that image in a repository, like hub.docker.com or a private one, and then, when you need to stand up a customer environment, you build a new customer image with the customer site files added, version that image, and create a new container from it. Then you can load-balance the traffic to the new instance of the customer-specific container. And if you find that the new version of the application has some error, you can do a rollback.
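
In concrete terms, that workflow might look like this (the registry host and image names are placeholders):

docker build -t registry.example.com/myorg/app-base:1.0 .
docker push registry.example.com/myorg/app-base:1.0
docker build -t registry.example.com/myorg/app-customerx:1.0 -f Dockerfile.customerx .
docker push registry.example.com/myorg/app-customerx:1.0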

But by looking at the container as a server, you are doing yourself a disservice.

I’ve been running several 24/7 sites where we built our applications in docker images. Each image has its own responsibility. Using a Docker Swarm and Docker Stacks, we have been able to run several sites in the same cluster.

There are several really good whitepapers on how to do what you want to do and you are, sort of, on the right track.

Can you point me to those whitepapers?

Also, in our environments, we have multiple applications that do not contain any customer data.

Literally, we can run some of our existing applications at a customer site, without including any customer site files.

Of course, there are some applications that will need access to an external filesystem, which is why I have been playing with the --mount option.

I also just tested the same exact command that I used to “create” a container, but used the “run” command instead.

It came up perfectly! )))

And, I can connect to it from the same host server, in another terminal session.

But, I cannot seem to connect using the port mappings that I set up.

This is the first time I’ve tried this, as this is the first time I’ve gotten the new container to run, with the new port mappings.

It looks as if the host OS is not listening on any of the ports I mapped to:

i.e. 2200, or 4400, for SSH connectivity.

Is there something that I need to do at the host OS level, to enable these ports?

I assumed that Docker would do that for me.
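
Docker does set up the host side itself (it adds the iptables rules and listens on the published ports), so normally nothing extra is needed at the host OS level. Two things worth checking (the container name is illustrative): whether the mappings actually exist, and whether anything inside the container is listening. The stock centos image does not run an SSH daemon, so a 2200:22 mapping has nothing to connect to until sshd is installed and started inside the container.

docker port DockerBase
sudo ss -tlnp | grep -E '2200|4400|8000|8800'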

I have developed and administered containers for many of our development servers over the past few years, and I definitely agree that Dockerfiles are simple and useful for change control.
For instance, I clone my git repo to a host machine and then test the changes I make; if they don't work, I can modify the source files in our git repo and clone them to the server again. This provides change control, AS WELL AS a non-modified host environment. It also provides an opportunity to create automation scripts so that you are always delivering your servers the same way every time, so there are no divergent use cases to develop for, and your applications can be retained.
This also makes it very easy to clone an environment, as you can not only create multiple locations to draw git resources from, you can also pull them onto a client server to deliver them there.

I would also generally suggest utilizing an automation agent such as Ansible, if you have knowledge of it, as that substantially improves the automation one can achieve, and it can also improve security, for example by delivering SSH keys to the host and into the Docker apps as well, though it certainly isn't necessary.

I can execute a fresh server and have our product or development applications up and running in only a minute or two.

ADDITIONAL NOTE ON DOCKER NETWORKING:
To run a Docker container with a specific port mapping:
[1] Build your Dockerfile, or pull an image to the server, e.g.:
docker build -f /tmp/Dockerfilev0.1 -t $DOCKER_TAG .
[2] Start the container:
docker run \
-d --restart=always \
--name $APP-$VERSION-test \
--env-file=/dir/file.env \
--user uid:gid \
-v /local/dir/file:/container/dir/file -v /second/local/dir:/second/container/dir/file \
-p $HOST_PORT:$CONTAINER_PORT \
-t $DOCKER_TAG
NOTE the -p option above; this is how ports are assigned to containers.
Example:
# note: $UID is read-only in bash, so use different variable names
APP_UID=501
APP_GID=501
HOST_PORT=8045
APP_PORT=8080
VOLUME1=/opt/app1/data:/opt/application/
VOLUME2=/opt/app1/config:/etc/application/
docker run -d --restart=always --name app1-v1.2.3-verification --env-file /opt/docker/app/v1.2.3.env --user $APP_UID:$APP_GID -v $VOLUME1 -v $VOLUME2 -p $HOST_PORT:$APP_PORT -t this-is-my-app

You can then review your running container to verify that ports and volumes are assigned, using 'docker ps -a'.
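
'docker port' gives a quicker view of just the mappings, using the container name from the example above:

docker port app1-v1.2.3-verification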

I would like to reiterate what has already been mentioned before, though: Dockerfiles are definitely FAR easier to implement into change-control systems and are quite useful to reduce complexity during the build process (if one does not use complex scripts as above :slight_smile: ).
