Prevent exposing ports

ISSUE

I’ve built a custom nginx container image on top of alpine:edge to test HTTP/3.

Then I tested it via docker run and forgot to pass the --expose option.

Unexpectedly, every listening port was open to connections.

I could reach ports 80 and 443 without exposing them.

I can’t see any EXPOSE instructions in the build steps of alpine:edge.


REPRODUCE

  • Pull and run alpine:edge
  • Don’t expose any port
  • Note the container IP via ip a or
    docker inspect -f '{{range .NetworkSettings.Networks}} {{.MacAddress}} {{.IPAddress}} {{end}}' $(docker ps -aq)

  • Request the container via curl or any browser;
    the response will be ‘network unreachable’
  • Add the nginx package
  • Request again;
    this time the response will come from nginx
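The steps above can be sketched as shell commands (the container name is a placeholder, and the exact error message before nginx is installed may vary by setup):

```shell
# Run alpine:edge without publishing or exposing any port
docker run -d --name http3-test alpine:edge sleep infinity

# Note the container IP on the default bridge network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' http3-test

# From the host: nothing is listening yet, so the request fails
curl http://<container-ip>/

# Install and start nginx inside the container, then request again
docker exec http3-test apk add nginx
docker exec http3-test nginx
curl http://<container-ip>/   # this time nginx answers
```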

HOW TOs…

  1. Is there any way to revert (block) exposed ports from a pre-built image, similar to the way a router handles port forwarding?
    So a service could run and bind to the port, but the port would not be accessible from outside the container, on both bridge networks and macvlan.

  2. Has anyone seen this same behavior?
    If there is no option to block, how could one work around it?

Regards

It doesn’t matter whether the EXPOSE instruction was specified during the image build or not - it is for documentation purposes only and shows the maintainer’s intent. Some management tools pick it up and use it to prepopulate container ports in port-mapping dialogs.

You can publish any container port on any available free host port as you want. The same way you can map any host path into a container path without declaring the container path as VOLUME.
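For example (a sketch; alpine:edge declares no EXPOSE at all, and the command line is illustrative):

```shell
# Publish container port 80 on host port 8080 even though the image
# never declared EXPOSE 80
docker run -d --name web -p 8080:80 alpine:edge \
  sh -c 'apk add nginx && nginx -g "daemon off;"'

# The host port now forwards to the container
curl http://localhost:8080/
```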

You mean that even without instructing docker build to expose anything, any listener could do this?!

Note: I edited the question.

Can you please rephrase your question? It is very confusing the way it is now.

Are you trying to write a Dockerfile?

Or are you just trying to run a container and want to alter its configuration after it has been created? If so, then you need to remove the container and create a new one specifying the new configuration settings. The only environment where I am aware of configuration changes being possible on existing containers is a Synology NAS using their Docker UI. The rest of the world has to remove and re-create containers.

Or are you referring to the fact that the Docker host can reach the containerized service by the container IP and port? This is normal, as the Docker host also has an IP as the gateway in the container network - how else would traffic from the container network leave the container network?

“expose” is a slightly confusing keyword. It doesn’t actually expose anything. The “exposed” ports will not be any more visible than without the keyword. One could interpret the keyword as “exposing the fact that the container has processes listening on those ports”. The rest was explained by @meyay.

The only way to deny requests from the host to the container IP is to configure a local firewall or manipulate iptables manually. The other way would be to use an “internal” network:

docker network create --internal internalnet

That network is so internal that you can’t even forward a port to a container on it. Port forwarding is what actually exposes ports to the outside world for other machines. It makes sense once you know that port forwarding just forwards requests from the host’s IP address to the container’s IP address.
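A minimal sketch of such a network (the container names are placeholders):

```shell
# Create the internal network
docker network create --internal internalnet

# Containers on it can reach each other by name, but have no route outside
docker run -d --name backend --network internalnet nginx:alpine

# -p is accepted on the command line, but the forward will not work
# for a container that is only attached to an internal network
docker run -d --name backend2 --network internalnet -p 8080:80 nginx:alpine
```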

docker build could never build an image that affects how you can access the ports of its containers, unless the application inside the container detects the remote IP and the app itself denies the request and doesn’t respond.

It is actually no different from how virtual machines work. You need a firewall to be able to block requests. There is no firewall or anything like it inside containers, and different Linux distributions can have different default firewall rules on the host.

1st: Both

I was building a custom Dockerfile.
Then I tried to see if this behavior came from a mistake in my Dockerfile instructions,
but I found the same behavior.

2nd: Yes, I’m referring to that.

With or without the "-p" or "--expose" arguments or the "EXPOSE" keyword,
I could reach ports on any bridge-type network if a service was listening there.

As for editing the topic:

I will try.
I will create a diagram and attach it here; it will be more informative
and more descriptive of what I want.

What I mean by exposing is just using the docker "--expose" argument, not "-p".

I’m testing internally on the host before allowing access from outside,
and this is the result in the attachments.

You can see a browser accessing the containerized server without any --expose or ‘-p’.

I thought your workaround could help me.

I still have to show you the network model, so you can follow along with me.

The biggest problem here for me is the term EXPOSE in the docker arguments.
If the container is exposed by default on every port that has a service listening on it,
why should we have an --expose option in docker?

  • It should be --block and act as a block,
  • or better: every port should be blocked by default and only exposed ports should be accessible.

If you mean the internal network, that was not a workaround. It is useful only if you don’t want to access the service from anywhere (but then you probably don’t even need a network) or you have another container on the same internal network that also has a public network.

Your screenshot shows that the network already existed. Without deleting the network, you can’t create another with the same name as internal.

Other networks will always be accessible from the host.

And again, EXPOSE or --expose does not do anything except set metadata. It will not allow any request. It never did. Unless some kind of firewall listens on the Docker API and does it for you. You used the container IP, which doesn’t require any port forwarding.
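This is easy to verify from the host (the container name is a placeholder):

```shell
# No EXPOSE, no --expose, no -p involved in reaching the container IP
docker run -d --name plain-nginx nginx:alpine
IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' plain-nginx)
curl -s "http://$IP/"   # answered by nginx, straight to the container IP
```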

Just imagine that you have two IP addresses on the same machine. Normally you can use either of them from the host, even if you can’t access them from outside. The only difference is that these IP addresses are not visible from outside, but they are visible from the host.

Quote:

And again, EXPOSE or --expose does not do anything except set metadata. It will not allow any request. It never did. Unless some kind of firewall listens on the Docker API and does it for you. You used the container IP, which doesn’t require any port forwarding.

That’s what I missed.

Quote:

Just imagine that you have two IP addresses on the same machine. Normally you can use either of them from the host, even if you can’t access them from outside. The only difference is that these IP addresses are not visible from outside, but they are visible from the host.

Actually I have (a simple network model is below).

@rimelek I created it within a few seconds before, with the same command,
but using it I found I was not able to access the internet, so I switched to my custom build, which has nginx installed already.

The issue here is the requests coming to the services, not from them.
I’m using expose to avoid conflicts between services running on the same port.

Here is the network model

It lacks the outgoing responses, but it should demonstrate my issue.

I can’t do what is in the red text if the container has two ports.

Now I have two options:

  1. Install a firewall (like you mentioned) in every service that could be exposed.
    This means larger containers, going against “one container per service”, and more latency.
  2. Docker could have an option, plugin, or workaround to get the same behavior
    (what I’m trying to find).

Macvlan is similar to bridge (so I’m treating it as bridge), except:

  • it bypasses the host’s internal network and connects directly to the outside,
  • it is not accessible from the host itself unless a virtual interface is created,
  • it gives the container a unique MAC, so the router assigns it the same IP after a restart or rebuild.

I think if there is a docker configuration, option, or plugin that could do that on the bridged network shown in my browser, then I could do the same for macvlan for security.
So I asked about the most basic thing I could be missing: docker options.

Let me know if I’m wrong.

I didn’t want to tell the whole story, as it’s weird and I’m not so good at describing it,
but I hope it’s clear now.

By the way, I looked on the internet for a docker network firewall plugin,
but couldn’t find anything.

$ docker network inspect in-net | grep Internal;
        "Internal": true,

I did create it as an internal one.

I haven’t read the other message yet, but it looks like I was very wrong. I assumed internal networks don’t allow incoming requests, because I worked on a new demo two days ago and port forwarding didn’t work. I have to check that demo again.

So it looks like “internal” networks are not exceptions; you still need a firewall if you need that behavior. I will come back to read your previous post.

It is not an option. You need to configure the firewall on the host, like UFW on Ubuntu.

You don’t need a plugin for that. I have been trying to suggest from the beginning that you need firewall software on the host. However, I am still not sure I understand your setup, even after you shared your network model. I am a little confused, but I think I am at least starting to understand what the problem is.

Bridged Docker networks are just local networks available only on the Docker host. Macvlan (or ipvlan) can give your containers an IP address on your LAN, so every service will be available to anyone on your LAN.

Based on your previous posts, I guess (but I am not sure) that you want to run multiple containers on the Docker host and you are looking for a way to use the same ports from other machines, which would not be possible with the bridged network unless you run a reverse proxy in front of all the other containers.

That way you need to “publish” ports 80 and 443 on the reverse proxy (forward requests from the host IP address to the reverse proxy) and let the proxy forward the requests to other containers based on the host name.
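As a sketch (image and names are illustrative, and the proxy would still need a routing configuration for the host names):

```shell
# A network shared by the proxy and the app containers
docker network create proxynet

# Only the reverse proxy publishes host ports 80 and 443
docker run -d --name proxy --network proxynet -p 80:80 -p 443:443 nginx:alpine

# App containers publish nothing; the proxy reaches them by name on proxynet
docker run -d --name app1 --network proxynet my-app-image
```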

If I am right, then you are looking for Nginx Proxy or Traefik. I used Nginx Proxy for years, but I finally switched to Traefik about a week ago.

This is actually the recommended way, so you don’t need macvlan and you don’t need to worry about firewalls, since only the Docker host would be able to access the containers’ IP addresses, and you should not let anyone you don’t trust log in to that machine anyway.

Yes, I’m doing so, but inside the container (the public services are a reverse proxy, an internal DNS, and some other needed stuff on the router-level network).

Macvlan makes a virtual LAN attaching the container directly to the router, so the router assigns an internal IP (static or dynamic, as configured) just like for any real device.

That means a firewall on the host is not enough;
it would have to be configured inside every container on a macvlan network,
which is what I’m trying to avoid.

The host is fully isolated via UFW and only the container IP is available to students,
but the macvlan nature could allow an attacker in, either directly or via an infected device.

I’ve reached the point of a firewall with you. So now: could docker provide it, either natively or as a plugin, or must I do that per container?

My options are limited by my configuration, and I’m trying to find out whether docker itself could help me.

This is done. The only problem for me when I posted was preventing services from being discovered by anyone trying to access an unwanted service [yes, that’s the firewall’s job].

The reverse proxy is really set up, but just for responding for testing, dev, restricted, and public services.
A firewall is needed at the network level, or I must rebuild the network model for now :sweat_smile:

As far as I know, IP addresses on macvlan are not assigned by the router but by Docker itself. You could, for example, have an IP address assigned automatically by the router to one host, and Docker could assign the same IP address to a container. This is why it is recommended to have a dedicated VLAN for containers only, or at least one IP range that the router can assign to hosts and another IP range for macvlan.

Quote:

If you need to exclude IP addresses from being used in the macvlan network, such as when a given IP address is already in use, use --aux-addresses
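A hedged sketch of creating such a macvlan network (subnet, ranges, parent interface, and the excluded address are placeholders for your LAN; note the CLI flag is spelled --aux-address):

```shell
# Reserve an address the router already hands out so Docker never assigns it
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  --ip-range=192.168.1.192/27 \
  --aux-address="reserved-host=192.168.1.200" \
  -o parent=eth0 macnet
```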

As I already wrote before, that is not even an option. Firewalls run on the host, configuring iptables rules, as UFW does, for example. A container is not a VM. It doesn’t have its own kernel and doesn’t have its own firewall. Searching for firewalls for containers, you can read about another type of firewall, which is basically a reverse proxy.

I still haven’t seen anything that could be a reason for not using only a bridge network and a reverse proxy. A properly configured reverse proxy would not allow everyone to access all containers, only the containers you want to make accessible.

I checked the new diagram. If an attacker gets into your study center, how do you tell whether it is a student or an attacker? I think the new picture just confused me more, which could be due to my lack of knowledge, but for me it doesn’t show why you think you need macvlan instead of using only a reverse proxy. If you need a firewall for your LAN, that should run on your router. We started the topic with “exposing ports” and now we are talking about firewalls and attackers, which is indeed another level of security.

I’ll try to summarize what I think about this issue, but after that I think I have to let someone else help you with it, as it seems I either failed to understand the actual issue or failed to explain why it shouldn’t be an issue when using a bridge network. Maybe you would have better luck on a forum where people can give you better security advice.

Note that I write everything that comes to my mind even if we discussed it already.

  1. Container IP addresses on bridged docker networks are always accessible from the host. It turned out this is true even in the case of internal networks. Only port forwards don’t work in that case, since the forward rules are not added to iptables and all packets are dropped unless they come from the same network - but the host is in the same network, since it is the gateway.
  2. Nobody should have access to the host, so the previous facts shouldn’t be an issue.
  3. Even the docker documentation mentions that macvlan is mainly for legacy applications that were migrated from VMs and not built for containers:

    Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack

  4. Firewalls that can drop any network packet run on the host, not in containers. At least I have never heard of any way to run one “per container”.
  5. Reverse proxies can listen on specific ports that you can forward (publish), so external requests can only go to the reverse proxy’s ports, and the proxy can forward them to containers - but only can, not always will. It depends on the configuration.
  6. The proxies I linked before listen on the Docker API, so you don’t have to configure them manually. You only need to set some parameters to decide which containers should be accessible from outside and which shouldn’t.
  7. Docker will not read the network packets to reject requests that seem suspicious. An application built into a reverse proxy might be able to, or a firewall on a router, of course.
  8. A reverse proxy container should not contain anything else, like a DNS server or “some needed stuff”. If you need a local DNS server, that should run in a different container.
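Point 6 in practice, sketched with Traefik’s label convention (the host name and the app image are placeholders):

```shell
# Traefik watches the Docker API and configures itself from container labels
docker run -d --name traefik -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  traefik:v2.10 --providers.docker=true --providers.docker.exposedbydefault=false

# Only containers that opt in via labels become reachable through the proxy
docker run -d --name app \
  --label traefik.enable=true \
  --label 'traefik.http.routers.app.rule=Host(`app.example.local`)' \
  my-app-image
```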

I hope you will find a solution soon.

Update
One last question. What is “Trogan”? I haven’t found anything about that. Did you mean Trojan?

The host already has its own.
After trying several firewalls, I found none that was kernel-free, so none would run inside a Linux container.
Also, WAF solutions seem limited.
That means when I managed to hide the real IP, I was addressing a security concern,
not just a more private scope for myself :sob: :upside_down_face:

I wish I could write a “kernel-independent” firewall;
then I would write a network-level docker firewall plugin.

I think I will at least give this a try, but it seems to be a per-service-only solution.

yes

I’ll try exploring the docker plugin API.
I may find a way to set a host firewall in front of the container, besides trying a WAF.

Yes, I will set up the nginx container with -p and forward it encapsulated within the host firewall for now.

But that means I can’t expose the DNS ports to the router; it won’t run with the host DNS on the same port, and that means any service «one project’s containers» that needs local DNS would run only within the host (all code will be refactored - and for now this is unavoidable).
I think no problem will happen.

But I’m still thinking about how to get that option into docker without a kernel inside the container.
I think the last thing I would try is writing a firewall connector plugin, if that’s possible.
I’d try that in my free time.
I know firewall programming isn’t easy,
but I hope I’ll be able to build it somehow :slight_smile:

Thanks for your time

Now I found another way to set up a firewall running inside a container, using the --cap-add option:

docker run -it --rm --name=test --cap-add NET_ADMIN --security-opt no-new-privileges alpine:edge;

With that I’m able to run awall inside the container, and even ping wasn’t possible after activating it without a policy (that’s good for me - no network discovery).

The only thing to note is that I had to make sure the sudo modprobe iptable_nat ip_tables; command runs on the host, as the kernel here is the host kernel.
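Put together, the sequence described above looks like this (the awall usage is a sketch; policy names are omitted):

```shell
# On the host: load the netfilter modules the container will rely on
sudo modprobe iptable_nat ip_tables

# Start the container with NET_ADMIN so it may program iptables rules
docker run -it --rm --name=test \
  --cap-add NET_ADMIN --security-opt no-new-privileges alpine:edge sh

# Inside the container: install and activate awall
apk add awall iptables
awall activate   # with no policy enabled, all traffic is dropped
```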

I’m trying to think outside the box, but of course I can’t know whether that is good or not.
The happy thing is that I’m running firewalls from inside all my public containers now [that should help avoid rebuilding the network and keep container services separate in terms of network IP / connectivity / up / down - that’s what I think].

The next step for me is to find out how to write docker plugins and whether they execute on the host or in the container.

  • If inside: the plugin would have to replicate a firewall into every container
    (which does not seem to be how plugins behave).
  • If it runs outside the containers:
    there should be a way to bind a separate clone of the host firewall to the docker containers or to the whole docker network.

But for now all the information I have is generalized.

  • Docker Plugin API | Docker Documentation
    From this I can tell that an extension / plugin should run on the host, not in the container. So what I’m looking for is a firewall on top of the host kernel that can be configured for a guest container or a virtual macvlan network separately from the host - which means that if somebody were to write it, it would be a new firewall from scratch (you only have the kernel - write your own firewall that runs on every host - that’s my simple task :innocent: ).
    Really, I didn’t think my question would bring me into such a big security field, but I’m happy that you both gave me a key to the behavior I wanted, even if I don’t know what kind of security holes it could open.

@rimelek @meyay: thanks to both of you.

And for anyone who comes across this topic, these are helpful links:

I am not sure if you are aware, but you can make a container hook into the network namespace of another container (as in, use its network interface), so in theory the firewall could live in a separate container, controlling the network interface of the main container.
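That pattern can be sketched like this (names are placeholders): the firewall container joins the application container’s network namespace and filters its interface.

```shell
# Application container, nothing published
docker run -d --name app nginx:alpine

# Firewall container sharing app's network namespace via --network container:<name>
docker run -d --name app-firewall \
  --network container:app --cap-add NET_ADMIN alpine:edge \
  sh -c 'apk add iptables && iptables -A INPUT -p tcp --dport 80 -j DROP && sleep infinity'
```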

You might also want to consider to use Kubernetes, as it supports network polices out of the box (if the network plugin supports it). I guess migrating to Kubernetes is less of a headache than writing your own plugin :slight_smile:


Agreed on Kubernetes for larger projects; for such a small one I may consider minikube.

I never knew about hooks within the docker network.
But after a quick search on the topic I found a similar thing depending on the host configuration.

This is useful for one-time configuration,
and if this is what you are referring to, then I’ll go back to the server to try and see how to accomplish it.

Having a firewall container makes migrating between operating systems smoother.

This is one more, even better idea :+1:

Actually, I just used that terminology. It is as easy as setting a container to use the network of another container or service. Network-wise, both containers act as one entity (like k8s pods do).