We have Windows Server 2008 R2 and Windows Server 2012 VMs running .NET apps built over several years. They rely on DLLs that use System.Configuration to read config files containing passwords, database connection strings, and environment variables that control application behavior per environment (DEV, UAT, PROD, etc.): the typical legacy grab bag that makes it difficult to migrate to the cloud. DLLs shared among multiple IIS apps/sites are stored in the Windows Global Assembly Cache.
Docker seems like a great solution to extract out each site (or even a specific app within a site), build a self-contained image and deploy that on, say, AWS ECS with all the benefits thereof.
The guide here is a good resource. There is even Image2Docker which magically builds Docker images directly from a VM.
But they don’t talk about how to include global assemblies in the image.
Any ideas?
Thanks
What do you mean? You install those assemblies just like you would from the command line; just do it inside the Dockerfile.
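For example, a minimal sketch (the base image tag, DLL name, and paths are placeholders): a Dockerfile step that installs a shared DLL into the GAC from PowerShell using the `System.EnterpriseServices.Internal.Publish` API, so you don't need gacutil in the image.

```dockerfile
# escape=`
# Hypothetical example: image tag and DLL paths are placeholders.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
SHELL ["powershell", "-Command"]

# Copy the shared assembly into the image
COPY SharedLib.dll C:\install\SharedLib.dll

# Install it into the Global Assembly Cache via the
# System.EnterpriseServices Publish API (no gacutil needed)
RUN Add-Type -AssemblyName 'System.EnterpriseServices'; `
    $publish = New-Object System.EnterpriseServices.Internal.Publish; `
    $publish.GacInstall('C:\install\SharedLib.dll')
```

The `# escape=`` ` directive switches the Dockerfile escape character to a backtick, which is the usual convention for Windows Dockerfiles so that backslashes in paths aren't treated as line continuations.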
Well, my docker build is running on Windows Server 2016 but all my IIS & .NET apps are on Windows 2008 & 2012 servers so how can I do it inside the Dockerfile?
Please bear with me, I am still trying to wrap my head around this.
You would need to write a Dockerfile, which tells the Docker engine how to build your image: what the base image is, what needs to be done to it to adapt it for your application, and so on. I think whoever created Image2Docker
did the community a disservice, since it does not really speed anything up; instead it causes this sort of scenario where people don't understand how Docker works in general and think it's sufficient to just run some tool to move to Docker.
I suggest starting with some tutorials for Windows containers on how to build basic images, etc., to understand the concepts, and don't use Image2Docker; it does not really do much once you learn how Windows containers work.
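To illustrate how little magic there is, here is a hand-written sketch of a Dockerfile for a single IIS site (image tag, publish directory, and env var name are all placeholders, not taken from your setup):

```dockerfile
# escape=`
# Hypothetical sketch: the aspnet base image already has IIS and .NET
# Framework configured, so deploying a site is mostly a COPY.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8

# Copy the published site content into IIS's default site directory
COPY publish\ C:\inetpub\wwwroot

# Drive environment-specific behavior with an env var instead of
# editing config files per environment
ENV APP_ENVIRONMENT=DEV
```

Once you can read a file like this, tools that generate one for you stop looking magical.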
Good point, I see what you are saying. Would you recommend using Visual Studio 2017’s built-in Docker support during our development, unit-test, build cycle instead of retro-fitting it later? This tutorial describes the steps.
For the same reason I would not recommend using Image2Docker,
I would not recommend using Visual Studio either. It hides the complexities of how Docker works, and the end result is that users are still not familiar with Docker concepts. I would recommend using Visual Studio Code instead, and following the quickstarts or docs here (https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/).
Understood, I have a lot of learning to do, will get on it. Thinking containers certainly takes a different mindset when you are used to the old way of building .NET apps.
I guess what I am struggling with is trying to understand the right level of isolation to strive for. As stated, our current Windows 2008/2012 VM has a monolithic IIS with all types of cross-app, cross-site dependencies, Global Assembly Cache, call outs to other Web APIs in the network, read/write access to host filesystem, etc. Would we need to re-factor (or even re-write and regression test!) all this or does Docker offer a magic wand so we can surgically carve out specific sites/apps (along with all their dependencies) and build as a portable Docker image?
A few more questions: does your excellent tutorial about Windows Authentication still apply, or have there been any enhancements to Docker for Windows in this area?
What is the recommended pattern to access an external Oracle database from inside the containerized .NET app? Which Oracle client driver to use, how to open ports to connect to the Oracle database listener, how to deploy the TNSNAMES.ORA file so the connection strings used in the app can resolve to the appropriate database including DNS lookups and such?
P.S.: Why do you recommend using Visual Studio Code? That’s just an editor, right? How does that help here?
On a related note, while I agree that using tools like Image2Docker and VS with Docker support hides the complexity, once we understand how things work, why keep writing Dockerfiles and building images by hand? Why not let these tools do it for us as an integral part of our development and deployment lifecycle? So in a pre-Docker world, the deployment artifact is an MSI or Web Deploy ZIP; in a Docker-powered world, an additional artifact, a Docker image, would be automatically generated as a byproduct. Please let me know if I am thinking about this incorrectly.
Either way you end up with a Dockerfile, whether written by hand or produced by those tools. It's just that what the tools produce will frequently not be good enough for real life. For example, they will not use multi-stage builds, will not have a HEALTHCHECK, will not be optimized for size, etc. If you want to be good at containers you need to learn those things, and you will not do that by using tools which hide those complexities.
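A rough sketch of the kind of Dockerfile those tools won't generate, combining a multi-stage build with a HEALTHCHECK (image tags, project name, and paths are placeholders):

```dockerfile
# escape=`
# Build stage: compile with MSBuild in the SDK image (hypothetical project)
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
WORKDIR C:\src
COPY . .
RUN msbuild MyWebApp.csproj /p:Configuration=Release /p:OutDir=C:\out\

# Runtime stage: only the published output is copied in, so the final
# image does not carry the SDK or the source tree
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
COPY --from=build C:\out\_PublishedWebsites\MyWebApp C:\inetpub\wwwroot

# Mark the container unhealthy if the site stops answering locally
HEALTHCHECK --interval=30s --timeout=10s `
  CMD powershell -Command `
    "try { $r = Invoke-WebRequest http://localhost -UseBasicParsing; if ($r.StatusCode -ne 200) { exit 1 } } catch { exit 1 }"
```

The multi-stage split is what keeps the runtime image small; generated Dockerfiles typically build and run in a single stage.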
VS Code is better than VS for working with Docker for this specific reason: it helps you craft a Dockerfile, but it does not do it for you, so you will know exactly what you put there and why.
Windows authentication has been enhanced in Windows Server 2019, specifically for gMSA accounts (in addition to a bunch of other things).
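The overall gMSA flow hasn't changed shape: you still generate a credential spec on the host and pass it to the container at run time (the account name, file name, and image name below are placeholders):

```shell
# Hypothetical names throughout. On the host, with the CredentialSpec
# PowerShell module, generate a spec for the gMSA:
#   New-CredentialSpec -AccountName WebApp01
# which writes WebApp01.json under Docker's CredentialSpecs directory.

# Then start the container with that spec so IIS can authenticate as the gMSA:
docker run --security-opt "credentialspec=file://WebApp01.json" --hostname webapp01 my-iis-app
```

The container itself stays domain-unaware; only the host needs to be domain-joined.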
You don’t need to go to microservices straight away. Do app modernization first; those two things have nothing to do with each other, but you gain the same benefits as microservices by running inside Docker (like immutability, savings on resources, OS as code, etc.).
I don’t know much about Oracle, but since you are unsure, I suggest installing Windows Server 2019 in Server Core mode and recording the steps required to make your application run. Once you have it running in Server Core, there is a 95% chance everything will run smoothly using the same steps inside a container.
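Once recorded, those steps translate almost line-for-line into Dockerfile instructions. A sketch covering the Oracle client question from earlier (the installer name, silent-install switch, and paths are all placeholders I'm assuming, not Oracle-verified; `TNS_ADMIN` is the standard Oracle env var pointing at the directory containing TNSNAMES.ORA):

```dockerfile
# escape=`
# Hypothetical sketch: installer name, switches, and paths are placeholders.
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Replay the recorded install steps, e.g. a silent Oracle client install
COPY oracle-client\ C:\install\oracle-client\
RUN C:\install\oracle-client\setup.exe /silent

# Ship TNSNAMES.ORA in the image and point the client at it, so the
# app's connection strings resolve the same way they do on the VM
COPY tnsnames.ora C:\oracle\network\admin\tnsnames.ora
ENV TNS_ADMIN=C:\oracle\network\admin
```

Outbound connections to the database listener (port 1521 by default) need no special port publishing; containers can open outbound connections like any process.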