A comprehensive approach to Docker and web app development

As I write this first line, I already fear this is going to be too long for anyone to follow. That said, I’ll do my best to keep this as short as possible and ask my questions as concisely as I can, given that I’m just one step above novice with Docker at this point. I appreciate your patience as you wade through my quagmire (giggity).

Okay… so I’ve been using Docker for about 3-4 months now (time has flown), and I have a decent grasp of how it works. I’ve deployed several Docker containers into (web app) production, but only in a very rudimentary sense… I think maybe one of my deployments has separate db and web server instances, and the rest were sites so simple that they’re self-contained LOMP/WordPress (O = OpenLiteSpeed) containers. But I’m primarily a web APP developer, not a WordPress deployer… I build ColdFusion/Lucee sites/apps custom-made for clients that need them.

I am fully aware that the power of Docker comes from having well-defined, single-purpose containers, and I have been working my way up to that.

But now I’m faced with some old-dog-new-tricks dilemmas… primarily, how to develop atop a Docker base configuration, deploy to production, and then incrementally update my code/application in the most efficient way possible.

My old way (beyond the days of FTP/sync) was pretty standard (I think?):

  1. Install the web server, Lucee, and database on my local machine, and configure them for my environment, keeping track of settings that would need to be replicated in production.

  2. Copy my base code framework (e.g. CFWheels, or perhaps Mura if I wanted a kick-started CF-based CMS) into the root directory of my web/Lucee server, including deployment scripts I’ve written that allow the production server to pull changes from Bitbucket when a Bitbucket webhook fires on a push to master (or a tagged push).

  3. Initialize a git repo with this base config, and push it to Bitbucket.

  4. Check out a feature branch, develop, push, etc.

  5. When ready for the first production iteration, set up the production environment as closely as possible to dev, making note of differences in settings.

  6. Pull the git repo to the production server and get the app running live.

  7. Set up and test Bitbucket webhooks that call my deployment scripts on the production server whenever a qualifying push is made.

  8. Once in production, the fun begins: update code locally, and when it’s tested/working/feature-enhanced, push to Bitbucket, triggering a deployment script that pulls from Bitbucket to production (a sketch of such a script follows this list).

  9. Manually track DB schema changes or use iterative SQL scripts (my work over the years has rarely needed advanced DB schema management; there were never that many changes at once, not enough to lose track of, anyway), and simply make sure that changes to the dev and production environments were well documented in case they needed to be recreated, yadda yadda.

  10. Once in production, I would occasionally pull down a production database snapshot to use locally.
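
For anyone curious, the deployment script from steps 7-8 was never anything fancy; a minimal sketch of the idea looks like this (the path and branch are placeholders, not my actual setup, and a real version should verify the webhook payload/secret before doing anything):

```sh
#!/bin/sh
# Webhook-triggered deploy: move the production web root to the latest
# master. Paths and branch are placeholders.
set -e

APP_DIR=/var/www/myapp   # hypothetical production web root
BRANCH=master

cd "$APP_DIR"
git fetch origin "$BRANCH"
git reset --hard "origin/$BRANCH"   # discard any local drift on the server
```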

This system has by and large worked okay for me, and I’m sure many can identify with the process or something similar. To be blunt, as a sole developer who is rarely if ever part of a team, you never feel you need Docker until you really realize just what it does.

But then there’s the “cool” factor combined with all the usual benefits of Docker, and your world changes… as mine has =) So I’ve reached the point where I need to figure out how my process needs to (or should) change with Docker in the mix.

I started by simply making sure I could keep working, but using Docker instead of my local machine as the basis for my work. I went so far as to make sure Apache and MySQL don’t even start when I boot my machine, and built Docker images accordingly for each project.

To avoid rebuilding my images all the time, I instinctively used bind mounts (e.g. /my/local/dir:/my/container/dir) so that my web code changes would be seen and served immediately by the container.
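
In compose terms, that dev setup looks roughly like this (the image tag, port, and web root are illustrative; the exact paths depend on the image you use):

```yaml
# docker-compose.yaml for dev: code stays on the host and is bind-mounted
# into the container, so edits are served immediately without a rebuild.
version: "3.7"
services:
  web:
    image: lucee/lucee:5.3      # illustrative Lucee image tag
    ports:
      - "8888:8888"             # Lucee/Tomcat default port
    volumes:
      - ./app:/var/www          # bind mount: host code -> container web root
```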

And I have stuck with git/Bitbucket, only now, in addition to my code, I have the Docker/docker-compose config/directories in source control as well.

What I’ve struggled with most, however, is how this all relates to the production environment. If I stick with the same config in prod, it’s still pretty cool, because my dev and prod environments are essentially identical, and my git/deployment is still mainly for the app/code updates. And this would probably work going forward… having the bind mounts in production, the database in a bind mount or perhaps a regular Docker volume, etc. It’s one way to skin the cat.
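
Concretely, that “same config in prod” approach would be something like this (image tags and credentials are placeholders):

```yaml
# Prod compose file in the "same as dev" style: code is still bind-mounted,
# but MySQL data lives in a named volume that survives container recreation.
version: "3.7"
services:
  web:
    image: lucee/lucee:5.3      # illustrative image tag
    volumes:
      - ./app:/var/www          # code still bind-mounted, as in dev
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder; use real secrets handling
    volumes:
      - dbdata:/var/lib/mysql   # named volume managed by Docker

volumes:
  dbdata:
```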

However, it’s got that “smell…” you know the one… where you feel like you’re just not doing it Quite Right, and in this case, not using Docker’s full potential, or perhaps even abusing the privilege altogether.

The first hint of a bad scent came when I had to take an app that had never been in production before and deploy it… and I realized that not much had changed. When that happens, you immediately question whether the extra layer of complexity you’ve added (no matter how simple it actually is) is worth it. I mean, I’m not sharing dev with anyone else, so the benefits of Docker for me need to be in the workflow. And if I still have to take all the same steps, except more now (prep the host for Docker, set up its initial directories, etc.), then what am I really doing?

So I started reading, and have concluded a few things:

  1. I will likely benefit from having my deployed Docker images be essentially self-contained, meaning that, unlike in my development environment, my production images will have my code “baked into” them, immutable in production, using regular volumes only for data/files that change (see the Dockerfile sketch after this list).

  2. It seems I’ll need either the “builder pattern” (multiple Dockerfiles and docker-compose.yaml files, plus some build/run scripts that use environment variables to determine how the images are ultimately constructed), or the multi-stage build feature, to accomplish what I think I need to.

  3. I find myself wondering how anyone tracks the changes to the environment on their local machine to ensure those changes are preserved (via volume/bind mount). For one project, I ended up writing scripts, executed in my Dockerfile/build process, that copy a container’s “original” config files to a “conf.default” directory, and then, if the files don’t already exist in the bind mount, copy them back so there’s an initial starting point… but that seems awfully hacky and fragile, especially each time you discover a new directory you need to preserve (a cleaned-up sketch of that idea follows this list). Docker’s ability to show what’s changed in a container compared to its image (docker diff) can help, but boy, it sure adds steps.

  4. I’m not even sure how to ask these workflow questions, and pretty much suck at it. These are complex problems, at least for me. I seem to be the odd guy out, in that I’m not using most of the languages/platforms upon which so many examples (and questions) out here on the ’net are based. It’s just standard web app stuff, not code that needs to be “built” so to speak, so I don’t (think I) need a lot of those extra steps… but the steps I do need, I’m not at all confident I can implement “properly.”

  5. A starting point would be to simply list everything about a project that could change in production, which basically narrows down to transient data (like caches, which might be convenient to preserve between container/host reboots), user files/data, and the database; then list the things that change “once,” so to speak, which should be easily accessible in dev but “baked into” the production images; and then figure out how to push the initial deployment and all future enhancements and fixes without substantial downtime, as I was usually able to do with my prior git/Bitbucket deployment methods.
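
On points 1 and 2, my current understanding is that the production Dockerfile would simply bake the code in; since CFML doesn’t need compiling, a multi-stage build mostly matters if there’s an asset build step, and otherwise a single COPY is enough. A sketch, with the base image and paths as assumptions rather than a vetted recipe:

```dockerfile
# Production image: code is COPYed in at build time instead of bind-mounted,
# so the image itself becomes the (immutable) deploy artifact.
# Base image and paths here are illustrative.
FROM lucee/lucee:5.3

# Bake the application code into the image.
COPY ./app /var/www

# Anything that must persist or mutate (uploads, etc.) gets a volume instead.
VOLUME /var/www/uploads
```

Dev keeps the bind-mount compose file; prod uses one with no code mount, and a “deploy” becomes: build and tag a new image (e.g. myapp:1.4), push it to a registry, and recreate the container from it.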
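
And on point 3, the config-seeding hack can at least be consolidated into a single entrypoint script (the same basic pattern the official mysql image uses to initialize an empty data directory). A sketch, with placeholder paths:

```sh
#!/bin/sh
# Entrypoint sketch: seed an empty bind mount/volume with the image's
# default config on first run, then hand off to the real process.
set -e

DEFAULTS=/usr/local/conf.default   # defaults saved at image build time
LIVE=/usr/local/conf               # bind mount or volume from the host

# If the mounted config dir is empty, copy the defaults in as a starting point.
if [ -z "$(ls -A "$LIVE" 2>/dev/null)" ]; then
  cp -a "$DEFAULTS/." "$LIVE/"
fi

exec "$@"   # run the container's main command (e.g. the web server)
```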

In short (and this probably could have been my whole f#$king post), I need a recipe: given my typical project, with Linux, OpenLiteSpeed, Lucee/Tomcat, and MySQL or MS-SQL, and the need to deploy with similar convenience to my old git/Bitbucket/webhooks/deployment-scripts setup, how do I get the most out of Docker, ideally in a way that also lets me add nodes/replicas in production (a whole new, scary and exciting layer for me) to handle traffic increases, without constantly rebuilding images (downtime?) or risking the loss of data or config information/settings?
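
For that last part, the closest thing I’ve found so far is compose v3’s deploy section under Docker Swarm, where replicas plus a rolling update policy are supposed to let a new image roll out without taking everything down. A sketch (the registry and tag are placeholders, and I haven’t run this in anger):

```yaml
# Swarm-style service definition: run several replicas and replace them
# one at a time on update, starting the new task before stopping the old.
version: "3.7"
services:
  web:
    image: registry.example.com/myapp:1.4   # placeholder registry and tag
    deploy:
      replicas: 3
      update_config:
        parallelism: 1        # replace one replica at a time
        order: start-first    # bring the new task up before the old one stops
```

Deployed with docker stack deploy -c docker-compose.prod.yml myapp, if I’m reading the docs right.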

Thanks for reading my novel. Rest assured, if I get it figured out in my own head, I WILL be sharing it with the world in some way, as I can’t be the only one struggling with this stuff.