Help with de-containerising my Docker image (complex)

I have made a container image of Ubuntu from scratch using Docker. I then used an LLM to modify it to boot up with a specific set of rules, VPNs, licences, and software configurations. Also included are some start-all.sh scripts which run upon boot-up.

Now this Docker image is pulled and pushed for use on an AWS EC2 instance, except for one problem: I created the AMIs and EC2 instances in the wrong region (halfway across the world), and I need them in the London region. After speaking with AWS technical support, they agreed the only way to do this, other than rebuilding my whole start-up script and configuration (which would take months), would be to somehow de-containerise my scripts and AMI.

Something along the lines of this:

It sounds like you are looking to “flatten” or “export” a Docker container.

When you want to take the final state of a container—with all your dependencies, data, and configurations unpacked—and move it onto a host machine (like an AWS EC2 instance) to run as a native process rather than inside a container, you are essentially performing a filesystem extraction.
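Concretely, the usual sketch of such an extraction looks like this. The image name and file names below are placeholders, not taken from your setup:

```shell
# Create (but don't start) a container from the image, then export its
# final filesystem as a single tar archive. "my-ubuntu-image" is a
# placeholder for your actual image name.
docker create --name flatten-tmp my-ubuntu-image
docker export flatten-tmp -o rootfs.tar
docker rm flatten-tmp

# rootfs.tar now contains every file from the image's final state:
# your scripts, configs, and installed packages, with no Docker metadata.
```

You would then copy rootfs.tar to the target machine (e.g. with scp) and unpack the pieces you need.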

If you built a Docker image and pushed it, why not just pull and run it on another instance?

And if you already use LLM to modify your config, why would it take months to adapt it?

Q. And if you already use LLM to modify your config, why would it take months to adapt it?

A. Regarding the LLM: it took a good month of trying, failing, and retrying, but I have it exactly how I want it now, so I want to keep it exactly as it is, just without Docker, so I am not running a virtual machine inside a virtual machine. Please note I am a single user on a single machine, so there is no need for it.

Q. If you built a Docker image and pushed it, why not just pull and run it on another instance?

A. I don’t want to use Docker at all. Correct me if I’m wrong, but it’s not needed for my use case?

And it thus causes overhead that is not needed.

All I have is lots of custom scripts initiated after I SSH into my EC2, for example: SSH in, then ./start-all.sh

I want to include all of this as well, or there is no point in de-containerising. If I simply pull and run the image, it would not include the above, plus I would just be doing exactly what I’m doing now??

This is a Docker forum; as a human, I don’t understand why you post here.

Because I’m trying to reverse all the work I’ve done with Docker while keeping all my data and configuration intact. I thought this would be the best place to start for info.

Ok, understood. You can use docker cp (doc) to copy files from inside a container to the host file system.
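For example, something along these lines. The container name and paths are placeholders; adjust them to wherever your scripts actually live inside the container:

```shell
# Copy a single script out of the container to the host.
# "my-app-container" and the /root paths are placeholders.
docker cp my-app-container:/root/start-all.sh ./start-all.sh

# Or copy a whole directory tree in one go (note the trailing /.):
docker cp my-app-container:/root/. ./extracted-home/
```

The container does not need to be running for docker cp to work; it only needs to exist.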

Wouldn’t a Docker Export be better?

docker export will export the whole file system. Not sure how you would deploy that directly onto a new server if you don’t want to use containers anymore.
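One cautious way to use such an export on a fresh server, as a sketch. This assumes a rootfs.tar produced with docker export has already been copied onto the new EC2 instance; the paths picked out below are illustrative, not known from the original setup:

```shell
# Unpack the export into a scratch directory, NOT over / directly.
mkdir rootfs
tar -xf rootfs.tar -C rootfs

# Then cherry-pick only the files you actually need, for example:
cp rootfs/root/start-all.sh /root/
cp -r rootfs/etc/openvpn /etc/openvpn   # hypothetical VPN config directory
```

Extracting the whole tarball over the host's root filesystem would overwrite system files and very likely break the instance, which is why picking files out selectively is the safer route.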

Good, because Docker runs containers by default, not virtual machines. But I assume you just meant you don’t want an extra layer when you already have a VM just for this service and you don’t want to install anything else on it.

Docker can make things easier even if you are the only user of a single service. But it can also make things harder if you have to build a Docker image and don’t have an already-built one. I can only agree with @bluepuma77 regarding docker cp.

By the way, it might not be enough, as configuration or dependencies could differ inside and outside containers, depending on the Linux distribution and the already-installed libraries. Using a Docker image on another host is usually easy. Containerizing an application is often not easy, and “de-containerizing” it could be similarly hard. It all depends on the exact application and the Linux distribution in the container and on the host.

Hi, thanks for your comment. Yes, you are completely correct: I want to stop running one virtualised environment inside another.

So to be exact, my (custom) Docker image is built on Ubuntu 24.04, and the machine I am trying to copy everything onto is also Ubuntu 24.04 (on an AWS EC2).

Bearing in mind that when I pwd or ls once SSH’d in, my start-all.sh script is there along with others (VPN.sh, etc.).

What would be the best way to go about this?

Still the same. You know what files you need, hopefully, because nobody else will. You copy the application files out of the container, which doesn’t even have to be running when you use the docker cp command. Then you move the files to the right location on the host and decide how you want to start the application: automatically when the host machine starts, or manually. If automatically, you will need to create systemd services.
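A minimal systemd unit for that last step could look like this. The unit name, script path, and network dependency are assumptions based on your start-all.sh description, so adjust them:

```ini
# /etc/systemd/system/start-all.service  (hypothetical name and path)
[Unit]
Description=Run start-all.sh at boot
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/root/start-all.sh

[Install]
WantedBy=multi-user.target
```

After creating the file, you would enable it with `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now start-all.service`, and then it runs at every boot instead of you starting it manually after SSH-ing in.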

If you have a webserver, the easiest way is to install the webserver on the host using “apt install” and copy only the application code into the webserver’s document root. You may need to configure that webserver.
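As a sketch for that webserver case. nginx is used here purely as an example; the container name and document root are placeholders for whatever your setup actually uses:

```shell
# Install the webserver natively on the host:
sudo apt install -y nginx

# Copy only the application code out of the container into the doc root.
# "my-app-container" and the paths are hypothetical.
docker cp my-app-container:/var/www/html/. /var/www/html/

# Pick up the new content/config:
sudo systemctl restart nginx
```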

In some cases you may need to install some runtime libraries. It would probably be hard to copy the required libraries one by one from the container, and I would not copy them into the system folders on the host, because you will not know which libraries you installed, and I guess you could also accidentally override some libraries when later installing from apt repositories.