Docker Community Forums

Containers for the default and mgmt VRF: can we run containers in both VRFs?

If we have to run containers in a VRF, we need to run containerd in that VRF.
However, my requirement is that we might run containers in multiple VRFs.
In that case, what is the best way to run the containerd daemon?

Is there any solution for this?
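
For context, what I mean by running containerd in a VRF is starting the daemon bound to that VRF device, roughly like this (just a sketch; the VRF name mgmt-vrf is an example, and it assumes iproute2 with ip vrf exec support):

    # sockets opened by the daemon are then scoped to the mgmt-vrf routing table
    ip vrf exec mgmt-vrf containerd &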

For what? What is VRF? Isn’t a containerd forum better suited?

OK, I will post in the Docker Engine forum.

VRF is virtual routing and forwarding, the ability to have multiple routing table instances on a single router.
It also gives each VRF instance its own isolated routing domain, similar in spirit to a separate network namespace.
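
For example (a sketch, assuming a VRF device named vrf-blue already exists), each VRF has its own routing table that is looked up separately from the main table:

    ip route show vrf vrf-blue   # routes scoped to vrf-blue's table
    ip route show                # main table, unaffected by the VRF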

Given the way you present your case and the lack of a big picture, I am confident to say: I would be surprised if anyone is going to reply …

Hope you will find a solution for whatever it is you are looking for.

  1. VRF device is created with an association to a FIB table.
    e.g., ip link add vrf-blue type vrf table 10
    ip link set dev vrf-blue up

  2. An l3mdev FIB rule directs lookups to the table associated with the device.
    A single l3mdev rule is sufficient for all VRFs. The VRF device adds the
    l3mdev rule for IPv4 and IPv6 when the first device is created with a
    default preference of 1000. Users may delete the rule if desired and add
    with a different priority or install per-VRF rules.

    Prior to the v4.8 kernel iif and oif rules are needed for each VRF device:
    ip ru add oif vrf-blue table 10
    ip ru add iif vrf-blue table 10

  3. Set the default route for the table (and hence default route for the VRF).
    ip route add table 10 unreachable default metric 4278198272

    This high metric value ensures that the default unreachable route can
    be overridden by a routing protocol suite. FRRouting interprets
    kernel metrics as a combined admin distance (upper byte) and priority
    (lower 3 bytes). Thus the above metric translates to [255/8192].

  4. Enslave L3 interfaces to a VRF device.
    ip link set dev eth1 master vrf-blue

    Local and connected routes for enslaved devices are automatically moved to
    the table associated with the VRF device. Any additional routes depending on
    the enslaved device are dropped and will need to be reinserted into the VRF
    FIB table following the enslavement.

    The IPv6 sysctl option keep_addr_on_down can be enabled to keep IPv6 global
    addresses as VRF enslavement changes.
    sysctl -w net.ipv6.conf.all.keep_addr_on_down=1

  5. Additional VRF routes are added to the associated table.
    ip route add table 10 …
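
Putting the steps above together, a minimal sketch for a management VRF could look like this (the names mgmt-vrf and eth0, table 10, and the gateway 192.0.2.1 are examples only; a v4.8+ kernel is assumed so the l3mdev rule from step 2 is installed automatically):

    # 1. create the VRF device bound to FIB table 10
    ip link add mgmt-vrf type vrf table 10
    ip link set dev mgmt-vrf up

    # 3. default unreachable route; 4278198272 = 0xFF002000, i.e. admin
    #    distance 0xFF (255) in the upper byte and priority 0x2000 (8192)
    #    in the lower 3 bytes, hence [255/8192] in FRRouting
    ip route add table 10 unreachable default metric 4278198272

    # 4. enslave the public interface; its local and connected routes
    #    move into table 10
    ip link set dev eth0 master mgmt-vrf

    # 5. remaining VRF routes go into table 10
    ip route add table 10 default via 192.0.2.1 dev eth0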

Thanks, Lewish, for the reply.
I am not sure how to correlate your answer with my issue.

Let me paint the complete picture of my requirement, so that you can advise me.

Generally, some Docker containers need to be reachable from the external network via the public interface, and this interface is in the management VRF.
Some other containers run in the default VRF (internal) and should NOT be externally reachable.

The flag net.ipv4.tcp_l3mdev_accept is disabled, so that packets received in a VRF context are only handled by an application bound to that VRF.
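
For reference, this is roughly how that flag is checked and kept disabled (a sketch; the udp counterpart is included in case UDP services matter):

    sysctl net.ipv4.tcp_l3mdev_accept     # 0 = only VRF-bound sockets see VRF traffic
    sysctl -w net.ipv4.tcp_l3mdev_accept=0
    sysctl -w net.ipv4.udp_l3mdev_accept=0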

A few observations:
When containerd is launched in the management VRF, it spins up containers in that VRF, which means the applications inside them are reachable from the external network.

The containers need to be run in host network mode.

Currently, when I run the containerd daemon in the management VRF, all the containers run in the mgmt VRF.
A containerized application running in the management VRF is then reachable from the external network, since the public interface is in the management network.

We should also be able to run the other containers in the default VRF. What is the way to achieve this?
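
One direction I have been considering (an untested sketch): keep containerd in the default VRF so ordinary containers stay internal, and for the externally reachable containers use host networking and wrap the application command in ip vrf exec. The names internal-image, public-image, /usr/local/bin/app and mgmt-vrf are placeholders, and this assumes the image ships iproute2 and the container gets sufficient privileges (at least CAP_NET_ADMIN) plus access to the host cgroup hierarchy:

    # internal-only container: default VRF, normal container networking
    docker run -d --name internal-app internal-image

    # externally reachable container: host networking, sockets bound to mgmt-vrf
    docker run -d --name public-app --network host --cap-add NET_ADMIN \
        public-image ip vrf exec mgmt-vrf /usr/local/bin/app

An alternative would be for the application itself to bind its listening socket to the VRF device with SO_BINDTODEVICE, which avoids needing ip vrf exec inside the container.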