Scripting WebSphere in Docker (or, in general, running scripts in a container from outside)

In our current (non-Docker) infrastructure, I have a VM with WebSphere installed. We have automation scripts to deploy our applications - for the most part we make a series of calls to the WebSphere wsadmin shell script… these do everything from creating a JVM and setting a bunch of -D’s to deploying the EAR.

I’ve stood up WebSphere using Docker (found some IBM articles and Dockerfiles for it), and my question is this:

Can our existing process of executing wsadmin still be used? That script obviously now resides inside the container. On the host I need to run a script, and that script will need to make calls to the wsadmin.sh inside the container.

What’s the best way to do this, or do I need to re-think how we do things? Most of the WAS-on-Docker tutorials have people deploying applications through the GUI, but we need it all automated.

Thanks!

You can send commands into the container:

docker exec container_id commandline

docker exec container_id ls /some/path

docker ps will give you the container ID.
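Applied to wsadmin, the host-side call could look like the sketch below. The container name, install path, and script name here are assumptions, not from the thread; get the real container name or ID from docker ps.

```shell
# Hypothetical container name and install path -- adjust to your setup.
CONTAINER=was-server
WSADMIN=/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh

# From the host, a deploy step would then become (runs only where the
# container actually exists):
#   docker exec "$CONTAINER" "$WSADMIN" -lang jython -f deploy.py
```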

Thanks. That will obviously require some script changes, but it’s doable. Currently our scripts have a central configuration file that describes the application and the environment; one of those values is the WebSphere path, i.e. /opt/IBM/WebSphere. We take that value, concat something like “/bin/wsadmin.sh” onto it, and exec that full path… that obviously won’t work anymore, but I can add some conditional logic to do things differently when we’re deploying to a Docker WebSphere vs. an installed copy on the VM.
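A minimal sketch of that conditional, with the same path-concatenation. DEPLOY_TARGET, WAS_HOME, and CONTAINER are hypothetical config names for illustration, not the real config file’s keys:

```shell
# DEPLOY_TARGET, WAS_HOME, and CONTAINER are hypothetical config names.
WAS_HOME=/opt/IBM/WebSphere
DEPLOY_TARGET=docker        # "docker" or "vm"
CONTAINER=was-server

run_wsadmin() {
    if [ "$DEPLOY_TARGET" = "docker" ]; then
        # Same concatenated path, just prefixed with "docker exec".
        docker exec "$CONTAINER" "$WAS_HOME/bin/wsadmin.sh" "$@"
    else
        # VM case: exec the full path locally, as today.
        "$WAS_HOME/bin/wsadmin.sh" "$@"
    fi
}
```

The rest of the deploy scripts then call run_wsadmin and never need to know which environment they’re in.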

You should be able to stick the docker exec container_id in front of your command, if it lives in /bin inside the container…

You could map a local volume into the container too, so the scripts don’t have to be copied INTO the container itself. (That lets you fix the scripts without modifying the image definition or restarting the container.)
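For example, a bind mount at container start time. The host path, mount point, and container name below are illustrative; ibmcom/websphere-traditional is one of the IBM-published WAS images:

```shell
# Host path, mount point, and container name are examples.
run_cmd="docker run -d --name was-server \
  -v /opt/deploy-scripts:/work \
  ibmcom/websphere-traditional"
echo "$run_cmd"

# The host's /opt/deploy-scripts then appears inside the container as /work,
# so edits on the host are visible immediately:
#   docker exec was-server ls /work
```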

Whatever you do, make sure you have a plan for when the container gets stopped and deleted. In my experience this happens routinely, for mundane reasons: you need to deploy a new version of the base image with updated software, or you need to change startup-time settings like published ports. Any changes you make with docker exec will be lost.

Good point… any changes made inside the container are not saved for subsequent instances.
You CAN ‘resume’ a stopped container, and it will still have the changes made during its last run.
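In docker terms: stop/start keeps the container’s filesystem, rm throws it away. A quick summary of the commands involved (“was-server” is a hypothetical container name):

```shell
# "was-server" is a hypothetical container name.
lifecycle_summary() {
    cat <<'EOF'
docker stop was-server    # stop it; filesystem changes are kept
docker start was-server   # resume the same container, with those changes
docker rm was-server      # delete it; all in-container changes are gone
docker run ...            # the next run starts fresh from the image
EOF
}
lifecycle_summary
```

That’s why keeping the deploy scripts (and anything else you care about) on a mounted volume, outside the container’s own filesystem, is the safer pattern.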