How does one create a multi-architecture Docker Engine Plugin?

This question is specifically about Docker Engine managed plugin system plugins, such as volume or network drivers.

As per the instructions in the link above:

A new plugin can be created by running docker plugin create <plugin-name> ./path/to/plugin/data, where the plugin data contains a plugin configuration file config.json and a root filesystem in the subdirectory rootfs.
After that, the plugin <plugin-name> will show up in docker plugin ls. Plugins can be pushed to remote registries with docker plugin push <plugin-name>.
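
For a single architecture that looks something like this (the plugin name and paths are just examples):

# ./myplugin contains config.json and a rootfs/ subdirectory
docker plugin create myname/myplugin ./myplugin
docker plugin ls
docker plugin push myname/myplugin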

How, then, can the plugin be made for multiple architectures (amd64/arm64/others)? The root filesystem can have binaries for only one architecture, after all.

Are Engine-managed plugins still a thing? I cannot find any documentation anywhere about how to make them multi-arch.

I guess you can't do that, but Docker plugins are still there. I'm not sure how popular plugins are, as I almost never used plugins and never made them. When I check plugins on Docker Hub, there is no information about the architecture, even when I try a recent plugin like https://hub.docker.com/r/grafana/loki-docker-driver/tags

Plugins are not regular Docker containers. You can list the plugin containers by running

ctr -n plugins.moby c ls

while regular Docker containers are in the moby namespace. After listing the containers, you can get information about one of them by running

ctr -n plugins.moby c info <containerid here>
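
For comparison, the regular Docker containers show up under the moby namespace (assuming the ctr binary that ships with containerd is on the PATH):

ctr -n moby c ls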

When you list processes with ps auxf, you can see the plugin container running under emulation when emulation support is installed.

I guess you could build multiple plugins like myname/myplugin-arm64 and myname/myplugin-amd64 and share a command in the documentation that automatically detects the right architecture for the image tag, like

docker plugin install myname/myplugin-$(arch)
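
The per-architecture plugins themselves would be created and pushed separately, something like this (names and paths are illustrative):

docker plugin create myname/myplugin-amd64 ./build/amd64
docker plugin create myname/myplugin-arm64 ./build/arm64
docker plugin push myname/myplugin-amd64
docker plugin push myname/myplugin-arm64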

Thank you for your answer. Everything I have found points to the same solution you suggested: creating different plugins for each architecture. Should be fun setting up a build environment for that.

Iā€™ll report back here with results.

So I built multiple plugins. The build process became easy once the Docker Engine was set to use the containerd image store (containerd snapshotters) instead of overlay2, as described here. I could use docker container create --platform to create a container for any supported architecture, and then docker container export to extract the root filesystem from which to create the plugin.
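
Roughly, the per-architecture steps looked like this (image, container, and plugin names are placeholders, not my actual build scripts):

# create (but don't start) a container for the target architecture
docker container create --platform linux/arm64 --name myplugin-tmp myname/myplugin-build:latest
# export its flat root filesystem into the plugin data directory
mkdir -p ./plugin/rootfs
docker container export myplugin-tmp | tar -x -C ./plugin/rootfs
docker container rm myplugin-tmp
# add config.json next to rootfs/ and create the architecture-specific plugin
cp config.json ./plugin/
docker plugin create myname/myplugin-arm64 ./plugin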

Thanks, @rimelek.

Thank you for reporting back. It's good to "hear" you could solve it.

How did containerd make a difference? Wouldn't docker container create and export work the same way? Also, is there a reason why docker build --platform was not good for building the plugin, but docker container create --platform was? Using the --output option and a multi-stage build, you could copy the filesystem or a single file onto the host.

For plugins, you need a flat root filesystem, like the one you get from docker container export or docker buildx build --output type=local. In this case, I already had a complex build system in place which used docker container create followed by docker container export, and I wanted to make minimal changes.
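
With buildx, the final stage of a multi-stage build can be written straight to a directory per platform, for example (assuming emulation is set up):

docker buildx build --platform linux/arm64 --output type=local,dest=./plugin/rootfs .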

I had to switch to the containerd image store before --platform with container create worked properly. Otherwise, it refused to create non-amd64 containers. I run Docker Engine (not Desktop) on Linux/amd64.
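
For reference, the switch is a daemon setting in /etc/docker/daemon.json (merged with any existing options), followed by a daemon restart:

{
  "features": {
    "containerd-snapshotter": true
  }
}

sudo systemctl restart docker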


I see. Thank you for the explanation. I'm still using the original image store, but the documentation indeed mentions multi-platform images. Just in case it matters in the future: you can use the --platform option even without Docker Desktop and without changing the image store if you install qemu-user-static and binfmt-support.
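
On Debian-based systems that would be something like:

sudo apt-get install qemu-user-static binfmt-support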
