Docker network bridge not working on Yocto Hardknott

Hi All,

I have built a Yocto Hardknott image for the i.MX8 according to the following Variscite tutorial: Yocto Build Release | Variscite Wiki

Then, I added Docker by setting the following in local.conf:
IMAGE_INSTALL_append = " docker"
DISTRO_FEATURES_append = " virtualization"

The image builds fine, boots, and runs Docker (which takes a few minutes to start). However, containers fail to connect to the docker0 bridge. For example, an Nginx container will not serve the expected boilerplate HTML via curl:

root@imx8qm-var-som:~# docker run -p 80:80 -d nginx
root@imx8qm-var-som:~# curl localhost
curl: (56) Recv failure: Connection reset by peer

This problem can be temporarily worked around with brctl:

root@imx8qm-var-som:~# brctl addif docker0 $(ifconfig | grep veth | cut -c 1-11)
root@imx8qm-var-som:~# curl localhost
html stuff that won’t render properly on the forum…

Specifically, Docker is not adding the veth interface to the docker0 bridge upon container creation/restart. I have tried modifying the Yocto build to replace NetworkManager with connman or nothing at all, but this doesn’t make a difference. In fact, when NetworkManager is running, “nmcli device status” shows docker0 as “connected (externally)”, which, to my knowledge, means that something other than NetworkManager is responsible for it.

I have repeated the test on a Dunfell version of the Variscite Yocto build, as well as a non-Variscite build of Yocto Hardknott for the Raspberry Pi 4, and Docker’s network bridge functions properly in both cases. However, swapping in the meta-virtualization layer (which contains Docker itself, among other things), meta-openembedded, or the Linux kernel from either of these builds doesn’t solve the problem. I had once considered the possibility that containerd was at fault; however, containerd is part of meta-virtualization, so swapping that layer would have fixed it if it were the problem.

To recap, the problem appears to be specific to Variscite + Yocto Hardknott + Docker, and consists of a failure to automatically add the veth to docker0. Running the container with host networking does work, but is not suitable for my application. Any help would be greatly appreciated!

My understanding so far: you are trying to run Docker on an ARM single-board computer, using a custom OS image created with the Yocto Project.

Since Yocto builds a custom image rather than one of the officially supported OSes, I would assume the chances of finding like-minded users in a Yocto forum are orders of magnitude higher than the chances of finding Yocto users in a Docker forum.

You can check whether the kernel you are using provides all the required modules:

curl | bash

Thanks for the info! I ran that command, and it looks like everything is enabled except for “CONFIG_AUFS_FS”, “/dev/zfs”, “zfs command”, and “zpool command”. These relate to storage drivers, so I guess they are not relevant (probably only necessary if the image actually uses ZFS or AUFS).

I will check the Yocto forums as well.

Indeed those are optional.

Those are just storage drivers for AUFS (which can be considered superseded by overlay2) and ZFS. They have no relation to the network layer where you are experiencing problems.

In my case, the Docker veth didn’t connect to the docker0 bridge because the veth was matched by an incorrect network setting. You can check the veth status with “networkctl status -a”:

$ networkctl status -a
● 8: veth23375aa
                     Link File: n/a
                  Network File: /lib/systemd/network/
                          Type: ether
                         State: degraded (configuring)
                        Driver: veth

I created a new network setting for the veth interfaces to solve this problem, a file with the following content:

# /etc/systemd/network/


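A typical configuration for this kind of fix (a sketch only; the filename and contents here are my assumption, since the original snippet is empty above) tells systemd-networkd to leave Docker’s veth interfaces unmanaged:

```
# /etc/systemd/network/80-docker-veth.network  (hypothetical filename)
[Match]
Name=veth*

[Link]
Unmanaged=yes
```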
Then you need to restart the systemd-networkd service:

systemctl restart systemd-networkd.service


Hi John99chang,

I have since solved the veth problem: indeed, systemd-networkd was running in the background and undoing Docker’s attempts to set up the veth, and simply removing systemd-networkd did the trick (my application doesn’t need it in the first place). Thanks anyway!
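For reference, one way to drop systemd-networkd from the image at build time is a local.conf tweak (a sketch, assuming oe-core’s systemd recipe, which exposes a “networkd” PACKAGECONFIG option, and Hardknott’s underscore override syntax):

```
# local.conf (sketch): build systemd without networkd support.
# Assumes the systemd recipe exposes a "networkd" PACKAGECONFIG option,
# as oe-core's does in the Hardknott era.
PACKAGECONFIG_remove_pn-systemd = "networkd"
```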