How to use WSL2 by default?

I’ve installed WSL2 on my Windows Server 2022 instance, which in turn hosts my build server, Azure DevOps Server 2022. I’ve also installed the latest Docker for Ubuntu.

I’m trying to get my pipelines to use WSL2 for my test containers, as I’m limited to Linux with various images I’m using (in particular MS SQL Server).

Is there a way to configure Docker (EE, I think, since it’s Windows Server) to use the WSL2 instance by default?

I’m not sure what you mean by

There is Docker CE, and there is Docker Desktop for Linux (or for Windows and macOS). Docker Desktop is not supported on Windows Server and never was. If you need Docker CE, follow the official documentation from Microsoft:

https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce#windows-server-1

There is no Docker EE anymore. Only Mirantis Container Runtime.

Update:

I realized you want Linux containers, not Windows containers. If you want to install Docker directly inside the Ubuntu WSL2 distribution for Linux containers, you just need to follow the other official guide for Docker CE.

If you want to access the WSL2 distro from a CI/CD pipeline, that seems to be a Windows-related question and you could try a Microsoft forum. I don’t use Windows at that level, but I would guess that if you can access Windows, you can run wsl commands there. You can also forward a port on your Windows Server host to the SSH port inside the Ubuntu WSL2 distribution, making sure the firewall allows it.
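A minimal sketch of that port forwarding, assuming an arbitrary listen port of 2222 and an elevated PowerShell session (the WSL address is not from your setup and has to be looked up, since it changes between restarts):

# find the current IP address of the WSL2 distro
wsl hostname -I

# forward port 2222 on the Windows host to the SSH port inside the distro
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=2222 connectaddress=<ip-of-wsl-distro> connectport=22

# open the forwarded port in the Windows firewall
New-NetFirewallRule -DisplayName "WSL2 SSH" -Direction Inbound -Protocol TCP -LocalPort 2222 -Action Allow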

Thank you for your clarification of the product names. That helps.

As far as my mention of the pipeline, that serves only as an example of a process that might need to access containers on WSL2. In fact, I’d like to have any process on the server interact with the containers on WSL2.

I’m trying to get the Docker CE installation to default to WSL2 connections, similar to the -SwitchDaemon feature that Docker Desktop provides.

In other words, instead of the docker info output containing this, which it does currently:

Operating System: Microsoft Windows Server Version 21H2 (OS Build 20348.2966)
OSType: windows

…I’d like it to contain this instead:

Operating System: Ubuntu 22.04.5 LTS
OSType: linux

The latter being from the WSL2 instance, of course.

Docker CE has a feature called context, which allows the CLI client to point at a specific Docker daemon.

If you run an SSH server inside the WSL2 distribution, you could add a context like this and switch to it:

docker context create ubuntu-wsl --docker "host=ssh://<username>@<ip-of-wsl-distro>"
docker context use ubuntu-wsl 

Of course, ubuntu-wsl is just a placeholder, and <ip-of-wsl-distro> could be localhost if the sshd of the WSL distro binds to port 22 on the host (which it should do unless another host process already listens on that port).

Configure the SSH connection in $env:USERPROFILE/.ssh/config, and use the Windows ssh client to test whether that configuration reaches the SSH daemon of the WSL instance. If the Windows ssh client can connect, docker context will be able to use the same configuration when accessing the Docker Engine inside the WSL distribution.
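A minimal sketch of such an entry, assuming the alias, user name, and key path below (they are placeholders, not values from your setup):

# $env:USERPROFILE/.ssh/config
Host ubuntu-wsl
    HostName localhost
    Port 22
    User <username>
    IdentityFile ~/.ssh/id_ed25519

# test with the Windows ssh client first:
#   ssh ubuntu-wsl
# then the context can reuse the alias:
#   docker context create ubuntu-wsl --docker "host=ssh://ubuntu-wsl"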

Hm, it sounds as though that configuration might require some extra logic on the part of the connecting process.

What I’m after is the ability for the process to issue docker cli commands and interact with the WSL2 containers, just as if I were sitting manually at a Windows C:\> prompt on the server and running docker info.

Let’s say I RDP into my Windows Server desktop and fire up an elevated terminal window. I find myself at C:\Windows\System32>, and I enter the command docker info. I need that command to run within the WSL2 instance, not the CE instance. It should return info that contains these lines:

Operating System: Ubuntu 22.04.5 LTS
OSType: linux

The same with the processes (e.g. my build pipeline) that run Docker commands. In other words, I want to switch daemons, the same as what we can do with Docker Desktop.

Have I misunderstood your suggestion?

Probably, because @meyay suggested exactly what I think you need. Every Docker request goes to an endpoint, which is configured in a context. The SSH connection is only required because the Docker Engine is remote from the client’s point of view, since it runs inside a virtual machine.
You can check the documentation for alternative ways, like using a TCP socket instead of SSH plus the Unix domain socket.

And I emphasize “remote” again. Just because the client is on the Windows host doesn’t mean you can mount files from the Windows host using a Windows path. Unless of course WSL2 converts it automatically, but I don’t think so.

Ah OK, I looked at it closer now. Thanks for the shin kick (and the docs link).

So it appears those docker context commands should be run in Docker CE, correct?

If this is what provides the docker cli on your host, then yes.

You could also execute the docker CLI inside the distribution itself, which would not require you to register a context. This will also allow you to bind paths from the Windows filesystem to container paths.
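For example, assuming the distribution is named Ubuntu, your user there can run docker without sudo, and the /mnt/c path below is just an illustration, you could call the Linux docker CLI from a Windows prompt via wsl.exe:

# run the docker CLI that is installed inside the WSL2 distribution
wsl -d Ubuntu docker info

# Windows drives are mounted under /mnt/<drive> inside WSL2,
# so they can be bind-mounted into containers
wsl -d Ubuntu docker run --rm -v /mnt/c/build/artifacts:/data alpine ls /data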

For instance, if you run a Gitlab Runner inside a WSL distribution, Gitlab CI would be able to execute commands directly inside the WSL distribution, the same way it would in a Linux VM.

@meyay

Well now, THAT sure was a trip 'round Robin Hood’s barn!

At first I was quite intrigued by your suggestion to look into Docker’s context feature, especially after @rimelek helped me understand exactly what it was you were getting at.

So I dove into the docs and set out to learn what I needed to in order to set everything up (list, switch, remove, etc.). My command line tests revealed a configuration success, as I was able to issue docker info at my Windows Server console and receive Linux output.

And for a fleeting moment I thought I had the answer to my dilemma. I thought that’s what I was looking for.

Not so fast. After running my pipeline again, getting yet another failure, and following the bread crumbs of the source code from the NuGet package I’m using (Testcontainers), I found that they’re using the Docker API, which, as it turns out, doesn’t respect the context.

With that, then, I figured I’d just point my Testcontainers host (that’s a feature they provide) to the SSH endpoint that I use when connecting to WSL2. But before I could automate that from a pipeline I had to study up on how to enable password-free authentication in SSH (i.e. key pair). OK, now… got that set up? Yep. Check. Let’s go.
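Roughly what that key setup looks like, in case it helps someone following along (user name and host are placeholders, not my actual values):

# on the Windows host: generate a key pair
ssh-keygen -t ed25519 -f $env:USERPROFILE\.ssh\id_ed25519

# append the public key to the authorized keys on the Linux side
type $env:USERPROFILE\.ssh\id_ed25519.pub | ssh <username>@<linux-host> "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

# this should now log in without a password prompt
ssh <username>@<linux-host>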

Uh-oh… Docker doesn’t know what an SSH endpoint is.

So then I had to (learn how to) enable a TCP endpoint instead. And that was a certifiable pain-in-the-foot, because it turned out that my Docker installation somehow had been done with snap instead of systemd, and all the advice I was finding was for systemd. So I never did get it listening under snap. After much futzing and failing, I uninstalled and reinstalled under systemd, reconfigured using these simple steps, and then restarted the service. Finally I had a listener.
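For anyone hitting the same wall, here is roughly what that reconfiguration looks like on a systemd-managed Docker CE install (the address and port are assumptions, and an unauthenticated TCP socket like this should only ever be exposed on a trusted network):

# /etc/docker/daemon.json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}

# /etc/systemd/system/docker.service.d/override.conf
# (clears the default -H fd:// flag so it doesn't conflict with daemon.json)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd

# apply the changes
sudo systemctl daemon-reload
sudo systemctl restart docker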

I reran my pipeline, this time pointing to the TCP endpoint instead (actually, I ended up setting the DOCKER_HOST environment variable for ease of use), and voila! Success! Houston, we are a go.
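For reference, pointing both the docker CLI and Testcontainers at that endpoint is just a matter of setting the variable (host name and port are placeholders):

# PowerShell on the Windows host (or a pipeline variable in Azure DevOps)
$env:DOCKER_HOST = "tcp://<linux-host>:2375"
docker info    # should now report OSType: linux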

Through all this I’ve been able to determine that I don’t need to use WSL2 at all—in fact that just complicates matters, since it’s not reachable from other machines on the network. My working setup ended up being a plain-vanilla Ubuntu Server instance, running in its own VM.

Is it all worth three full days of head-pounding trial and error? (Which, by the way, included a trip down the rabbit trail of switching to Windows containers on my dev box and learning how to build my own MS SQL Server image/container, only to discover that Testcontainers doesn’t support SQL Server in a Windows container.)

I’ll go out on a limb here and convince myself that yes, it is worth it, because the ability to run my tests in containers is invaluable to me. I’ve wanted to do this for quite some time, and I hesitated because I knew it was going to be a baptism by fire. I was right, but now it’s over with. I’ve got a handle on this.

Thanks for your help, both of you. I appreciate it.

I did consider that, and I looked into it, but it turns out that it’d introduce extra (unwanted) complexity into my pipeline definitions. And it’d bring with it extra licensing costs for another Build Agent (Azure DevOps terminology) on my server.

But I’ll keep this one on the back burner for the day that there’s no other option.

This is the way to go.

Possible alternative:
If you use AKS, you could run your agent in Kubernetes and use kubedock to provide a Docker API. It translates the Docker API calls into Kubernetes API calls. Works like a charm with Testcontainers executed in pipeline jobs, if kubedock runs as a service container of the job.
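Very rough sketch of the idea (the default port and the exact invocation are assumptions from memory, please check the kubedock README before relying on them): the job runs kubedock as a service/sidecar container and points DOCKER_HOST at it, so Testcontainers sends its Docker API calls there and kubedock turns them into pods:

# service/sidecar container running alongside the job
kubedock server

# in the job steps
export DOCKER_HOST=tcp://localhost:2475
dotnet test    # Testcontainers talks to kubedock instead of a real Docker daemon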


My head is still fogged from that episode. I’m afraid you’ve lost me in a sea of jargon :wink:

But that’s OK. I’ll survive. Somehow.

AKS = Azure Kubernetes Service.

The kubedock approach only makes sense if you already use AKS to run Kubernetes workloads. It doesn’t make sense to introduce AKS into your infrastructure zoo just to use kubedock.

Oh. I touch Azure as little as I can get away with.

Not out of principle or policy, mind you… that’s just the way it’s worked out.
