Right now I’m using a single Vagrant VM on macOS/VirtualBox, provisioned with a bit of inline shell but mostly Puppet, on a 14.04 guest.
The stack within is a mix of NodeJS and PHP, both frontend servers and pure background job processing, plus Postgres, Redis, ElasticSearch and nginx. I’m exposing a single IP in a private address space, but with multiple subdomains, e.g. api.dev.domain.tld. With https://<url> everything is accessible, because *.dev.domain.tld maps to that private IP (but we may also just use …).
All of this is managed via ~10 code repos (services): one defines the Vagrant machine setup, the others are application/source specific for the various services within. These git repos are cloned below the vagrant machine repo and are thus mapped directly into the VM.
As of now, deployment does not involve Docker or anything else. It’s all simple git pulls in production plus some custom commands to trigger cache clearing and so forth, which has worked well so far.
Everything in this picture is showing its age: the Ubuntu version, but also the concept of running everything in the same VM. While easy to start with, it becomes a liability, especially when you cannot simply upgrade a shared package (e.g. NodeJS or PHP are used by multiple repos, but not all of them may support the same NodeJS/PHP version “yet”).
There’s a possible plan to just continue down this road: bump the base image to 18.04 and adapt to the latest versions (it will be an extensive adaptation, because Puppet changed quite heavily).
Then there’s the idea to make not just the code but, for a start, local development less monolithic. E.g. be able to run different NodeJS services in their own Docker containers so they can easily have different runtimes, without having to use something like nvm. Same for PHP. It’s possible, and thanks to PPAs easy, to install multiple PHP versions on the same machine, but OTOH this monolithic approach doesn’t feel right anymore. Also, in production the services are already split over different servers (e.g. database services run on dedicated servers, background jobs run on dedicated machines with no web server, etc.).
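For what it’s worth, pinning a runtime version per service is exactly what image tags in a compose file give you. A minimal sketch, assuming hypothetical service names and versions (none of these come from the actual setup):

```yaml
# Hypothetical: each service pins its own runtime via the image tag,
# so no nvm / parallel PPA installs are needed on the host.
services:
  api:
    image: node:18-alpine
    command: node server.js
  legacy-worker:
    image: node:14-alpine     # older runtime, isolated from the rest
    command: node worker.js
  web-php:
    image: php:8.2-fpm
  legacy-php:
    image: php:7.4-fpm
```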
Other challenges come from the frontend developer side. React/Webpack/TypeScript so far has had to run on the host system even with the Vagrant setup, because this stuff is simply way too slow inside the Vagrant machine.
To give an idea, the current local dev directory structure looks something like this:

    vagrant-repo/
        Vagrantfile
        code-repo-1/
        code-repo-2/
        …
        code-repo-n/
These code repos are cloned into vagrant-repo physically on the developer’s machine; they’re not using git submodules.
My initial idea was to start with a Dockerfile next to the Vagrantfile, but this would simply mirror the current approach.
Rather, I think each code repo should have its own Dockerfile, and the root uses a docker-compose.yaml to pull everything together.
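To sketch what that root docker-compose.yaml could look like, under the assumption that each repo ships its own Dockerfile (repo names are the placeholders from above, image versions are guesses):

```yaml
# Hypothetical root docker-compose.yaml in vagrant-repo.
services:
  code-repo-1:
    build: ./code-repo-1          # uses code-repo-1/Dockerfile
    depends_on: [postgres, redis]
  code-repo-2:
    build: ./code-repo-2
  postgres:
    image: postgres:14
  redis:
    image: redis:7
```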
But before I attempt this, I’m already seeing issues I’m not sure how to approach correctly.
code-repo-1 may run as https://api.dev.domain.tld. But the way I understood it, each Docker instance would run its own nginx and expose the service under a different port, rather than a single nginx instance with multiple site definitions.
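A common pattern here is a single reverse-proxy container: the app containers don’t run nginx at all (they only listen on the internal compose network), and one nginx container holds all the site definitions, proxying by hostname to the compose service names. A hedged sketch, where the upstream service name and port are assumptions:

```nginx
# One server block per subdomain, all inside the single proxy container.
server {
    listen 443 ssl;
    server_name api.dev.domain.tld;

    # e.g. a self-signed wildcard cert for *.dev.domain.tld
    ssl_certificate     /etc/nginx/certs/dev.domain.tld.crt;
    ssl_certificate_key /etc/nginx/certs/dev.domain.tld.key;

    location / {
        # "code-repo-1" resolves via Docker's internal DNS on the compose network
        proxy_pass http://code-repo-1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Tools like Traefik automate the same idea by discovering containers and routing on hostnames, but a plain nginx container keeps the setup closest to the existing multi-site config.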
There are also other challenges, like needing to run a web dashboard for the background jobs, accessible via https, which however does not run on the same machine the background jobs run on. Data is shared exclusively via a Redis instance. This was simply slapped into the monolithic VM, but there’s no “dedicated code repo” with a place for a Dockerfile for it. I think I would need to create a Dockerfile.dashboard for this in the vagrant-repo, referenced by the docker-compose.yaml.
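Compose supports exactly that via the `build.dockerfile` option; a sketch of how a one-off Dockerfile without its own repo could be wired in (the service name is made up):

```yaml
# Hypothetical compose fragment for the jobs dashboard.
services:
  job-dashboard:
    build:
      context: .
      dockerfile: Dockerfile.dashboard   # lives in vagrant-repo, no dedicated code repo
    depends_on:
      - redis                            # data is shared exclusively via Redis
```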
Another thing is developer experience. One nicety of the monolith is that we provide a single command within the VM to access a lot of common and not-so-common stuff, spanning code repos and their individual challenges. There’s a single entry point, which makes it easy for every developer to access anything: you only have to remember this one command, and it will help/guide you. Tasks this command handles: running tests for a code repo, triggering deploys, restarting workers, downloading specialized database dumps for initial dev machine seeding. I’ve literally no idea how this could work at all with a Docker setup.
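This part can survive the move: the single entry point becomes a thin wrapper script in vagrant-repo that dispatches to `docker compose`. A minimal dry-run sketch, where the subcommand names are invented and the function only prints the commands it would run (so the dispatch logic is visible without Docker installed):

```shell
# dev: hypothetical single entry point, dispatching to docker compose.
# Dry-run variant: echoes each command instead of executing it.
dev() {
  case "$1" in
    test)
      # run a repo's test suite inside a throwaway container
      echo "docker compose run --rm $2 npm test" ;;
    restart-workers)
      echo "docker compose restart workers" ;;
    seed-db)
      echo "docker compose exec postgres sh /seed/import-dump.sh" ;;
    *)
      echo "Usage: dev <test|restart-workers|seed-db> [service]" ;;
  esac
}

dev test code-repo-1
```

In a real version, each `echo` would simply execute the command, and the `*` branch would print help, which preserves the “one command to remember” property.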
So far I’ve only dabbled with Docker and still don’t know if it’s the right approach. There are a lot of unknowns I’m seeing, and a lot I’m probably not even seeing yet.
Then there’s the question mark about performance, e.g. NFS native support. This topic isn’t new to me, and I spent enough time with the old Vagrant setup to get acceptable speed (it involved creating custom bento-based Ubuntu images for a proper kernel configuration and tuning the NFS settings in the Vagrantfile with every hack known back then).
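On the Docker side the rough equivalents are bind-mount consistency flags on macOS and keeping heavy directories out of the bind mount entirely. A sketch under those assumptions (paths and names are placeholders):

```yaml
# Hypothetical volume setup for macOS hosts.
services:
  code-repo-1:
    build: ./code-repo-1
    volumes:
      - ./code-repo-1:/app:cached            # relaxed consistency for bind mounts on macOS
      - repo1_node_modules:/app/node_modules # named volume: node_modules never crosses the host boundary
volumes:
  repo1_node_modules:
```

The named-volume trick matters most for NodeJS/Webpack workloads, since it keeps the tens of thousands of small files in node_modules on the VM-native filesystem instead of the shared mount.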
It would be great to hear from any developer with experience what they think about this setup and approach. A valid outcome certainly is that Docker may not be a good fit here. Or that habits simply need to change.