My host has 2 NICs.
en0 for the admin subnet = 10.0.0.0/24
en1 for trunked VLAN subnets = 10.0.10.0/24 and 10.0.40.0/24
I have a music server, “assetupnp”, that was running on my Windows PC, but I am migrating it to my Docker/Portainer server.
This is a UPnP/DLNA server whose clients are all on VLAN10.
Is there a way for the clients to “discover” and “access” the server without using MACVLAN?
Can one deploy the stack in host mode and publish the ports on VLAN10 while the Linux host is on VLAN1?
A friend in network security keeps insisting that using MACVLAN to publish a container on my addressable LAN defeats the purpose of containers and poses a security risk, but I would not know how else to make this work.
Host mode means the container uses the exact same interfaces and network settings as the host. You could have multiple interfaces on the host, one for each VLAN; forward the DLNA discovery port only from the IP on VLAN10, and set a firewall to reject all other incoming requests.
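A minimal sketch of that firewall idea with iptables. The interface name (here en1 for VLAN10) and the second port are assumptions; check which ports your DLNA server actually uses:

```shell
# Allow SSDP/DLNA discovery (1900/udp) only on the VLAN10 interface (assumed: en1)
iptables -A INPUT -i en1 -p udp --dport 1900 -j ACCEPT

# Allow the media server's streaming/control port (example port, adjust to your server)
iptables -A INPUT -i en1 -p tcp --dport 26125 -j ACCEPT

# Reject everything else arriving on that interface
iptables -A INPUT -i en1 -j REJECT
```

With host networking the server listens on all host interfaces by default, so rules like these (or equivalent nftables/ufw rules) are what limit it to VLAN10.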
I wouldn’t use macvlan either. I think of containers as I do of any process on the server, which wouldn’t have its own LAN address. I just get a more secure, isolated environment for the container processes.
Correct me if I am wrong but I think I just got the part I was missing.
This is all new to me, and those bridges (Proxmox < Debian < Docker) are a bit confusing.
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether ce:f9:a2:3a:b7:0e brd ff:ff:ff:ff:ff:ff
altname enp0s18
inet 10.0.0.17/24 brd 10.0.0.255 scope global dynamic ens18
valid_lft 5883sec preferred_lft 5883sec
inet6 fe80::ccf9:a2ff:fe3a:b70e/64 scope link
valid_lft forever preferred_lft forever
3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether de:6b:e5:7c:94:b3 brd ff:ff:ff:ff:ff:ff
altname enp0s19
inet 10.0.10.203/24 brd 10.0.10.255 scope global dynamic ens19
valid_lft 5504sec preferred_lft 5504sec
inet6 fe80::dc6b:e5ff:fe7c:94b3/64 scope link
valid_lft forever preferred_lft forever
4: ens20: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether e6:f0:38:94:ce:e6 brd ff:ff:ff:ff:ff:ff
altname enp0s20
The Debian host is itself on top of Proxmox
ens18 is the host interface on native subnet (or proxmox bridge)
ens19 is the VLAN10 interface (or Proxmox bridge)
ens20 is the VLAN40 interface (or Proxmox bridge), no IP yet since no container yet
So instead of using MACVLAN and giving a 10.0.10.x IP address to each container acting as an IOT server, I should:
Create an IOT network for the aforementioned containers
For each container, forward the necessary ports to ens19 IP
or
Keep this emby container in host mode
Put any other DLNA server container (assetupnp) in host mode as well
For each container, forward the necessary ports to ens19 IP
You should assign an IP address to the host on that interface (VLAN), and yes, use that IP address for port forwarding.
I meant a simple docker network, nothing special. The port forwarding will do the trick so you can use a simple local docker network bridge.
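For example, something like this (a sketch; the image name and the second port are placeholders, and 10.0.10.203 is the host’s ens19 address from the `ip addr` output above):

```shell
# Create a plain user-defined bridge network
docker network create iot

# Publish the container's ports bound only to the VLAN10 host address,
# so they are not reachable via the admin subnet on ens18
docker run -d --name asset-upnp --network iot \
  -p 10.0.10.203:1900:1900/udp \
  -p 10.0.10.203:26125:26125 \
  your/assetupnp-image
```

Binding the published ports to a specific host IP is what keeps the container off the other subnets.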
You can’t have port forwarding when you are using the host network, and you don’t need it. Host network means you don’t have a separate network for the container; it will just listen on the host IP addresses. The only thing you need is to configure obs and everything else to listen only on the IP address of ens20 if you don’t want clients on the other networks to access the Emby server.
Afaik, DLNA clients use multicast packets within the same network to detect a DLNA server. Of course these packets will not cross networks (at least not without a UDP broadcast relay between the networks).
If your IOT Network is a macvlan network, the container ports are directly bound to the macvlan child interface of the container. There is no port publishing involved.
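To illustrate (names are placeholders; the subnet, gateway, and parent interface must match your VLAN10 setup): with macvlan the container gets its own interface and address on the LAN, so `-p` flags are not used at all:

```shell
# macvlan network bound to the VLAN10 parent interface (assumed: ens19)
docker network create -d macvlan \
  --subnet 10.0.10.0/24 --gateway 10.0.10.1 \
  -o parent=ens19 iot_macvlan

# The container is directly addressable at 10.0.10.50 on the VLAN;
# all its listening ports are exposed there, no -p / port publishing involved
docker run -d --name asset-upnp --network iot_macvlan \
  --ip 10.0.10.50 your/assetupnp-image
```

One known macvlan caveat: the host cannot reach the container through the same parent interface without extra configuration.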
If you are not sure whether port 1900 is the only DLNA port you need to open, you can try other ports from the forum I linked above, or you can check the used ports with netstat in the container. If the container doesn’t have netstat, you can use another container attached to the network namespace of the DLNA container. For example:
docker run --rm -it --net container:asset-upnp nicolaka/netshoot netstat -nat
Since I have never tried to automatically discover DLNA ports, and @meyay answered what you need to know about macvlan, I think I am out of ideas.
I am not sure I fully understand the scope of this.
IOT was initially a macvlan network bound to the ens19 interface. But as my friend was persuasive, and @rimelek agrees that macvlan goes against containerization’s main goal and poses a security risk, I am “exploring” ways to avoid using MACVLAN.
IOT is now a bridge network.
When you say that multicast packets won’t cross subnets, do you mean that DLNA requests from clients on 10.0.10.0 won’t be acknowledged by the server (container) if it’s on a bridge network in Docker?
If yes, then MACVLAN would be the only way.
Unless I run a host on each VLAN:
one Linux VM on native lan
one Linux VM on VLAN10, IOT
one Linux VM on VLAN40, surveillance
I just hope I did not understand correctly and there is still something I should try.
So basically I just think it is not necessary. I didn’t say it is a security risk. It could be, since all ports would be accessible, although some people want to use macvlan so they can manage the traffic on their firewall. Since one wouldn’t create 30 IP addresses on the same machine just to set 30 applications to listen on those addresses, I wouldn’t use macvlan. But I am not a security guy, so I rarely state anything with confidence about security without consulting someone who knows more about it. Yes, I can see that it “could be” a security risk, but I guess it depends on the infrastructure and the purpose of the containers.
I think @meyay was just reacting to your previous statement to understand the difference.
I don’t want to react to the DLNA vs network part, because I have a guess, but I am sure @meyay would give you a better answer.
To my understanding, the multicast message never reaches the DLNA server, due to the nature of how multicast/broadcast work: they use special multicast/broadcast IP addresses. A short Google search indicates that DLNA sends its discovery messages to 239.255.255.250 on port 1900/udp.
Though, you could use a UDP broadcast relay to forward the messages to your DLNA container. I do not recommend using it with containers, as you would need to “fix” your container to a specific IP and subnet, AND you would need to configure which special IP, port and protocol DLNA uses for its detection.
In your situation, either macvlan or the host network should be fine.
I forgot to mention that I am with @rimelek when it comes to using macvlan. In general, I avoid it as well. In the last 8 years I have not seen a single enterprise container environment where macvlan networks are used… I have only seen it being used in homelabs.
For non-vlan use cases macvlan is not necessary, unless multicast/broadcast from outside a container network to a container comes into play. Though, using the host network would solve this issue as well.
For vlan use cases you want to use ipvlan/macvlan, as this guarantees that outgoing traffic uses that particular ipvlan/macvlan interface. Containers on bridge/overlay networks will use whatever gateway is configured on the host for a specific route, or the host’s default gateway.
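A sketch of such an ipvlan network in l2 mode, assuming ens19 as the VLAN10 parent interface and 10.0.10.1 as that subnet’s gateway (both from your setup above, so verify them):

```shell
# ipvlan network on the VLAN10 interface; attached containers share the
# parent's MAC but get their own 10.0.10.x addresses, and their outgoing
# traffic leaves via ens19 instead of the host's default gateway
docker network create -d ipvlan \
  --subnet 10.0.10.0/24 --gateway 10.0.10.1 \
  -o parent=ens19 -o ipvlan_mode=l2 iot_ipvlan
```

ipvlan behaves like macvlan for this purpose but avoids creating a distinct MAC address per container, which some switches and Wi-Fi setups dislike.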
So, all this setup being for home use, I will leave the assetupnp server on the host network.
I will also ask the client app’s developer, namely Cambridge Audio, to add a feature to manually select the server instead of relying solely on DLNA discovery. Some other apps, Emby for instance, do it. Much easier for us mere mortals.