Docker Selfhosted Nanoclaw

TL;DR: When trying to run nanoclaw’s Dockerfile, the container build fails because it can’t reach deb.debian.org. What is failing?

Background: I’m trying to set up a process for running my services locally. I moved to Docker because of the lag time for Ollama to support the latest Qwen models; Docker Model Runner, however, can run them.

Currently: I’m experimenting with Nanoclaw by running it locally in a Docker Sandbox. I followed this post to install it (https://www.docker.com/blog/run-nanoclaw-in-docker-shell-sandboxes/). Once I got Claude talking to my local Model Runner and ran /setup, it went through the install steps, but when it needed to build the container it failed because it couldn’t reach deb.debian.org.

I tried to run the same Dockerfile manually within the shell and get the same error:

7.383   Could not connect to deb.debian.org:80 (151.101.130.132). - connect (111: Connection refused)
        Could not connect to deb.debian.org:80 (151.101.194.132). - connect (111: Connection refused)
        Could not connect to deb.debian.org:80 (151.101.2.132). - connect (111: Connection refused)
        Could not connect to deb.debian.org:80 (151.101.66.132). - connect (111: Connection refused)
7.387 Err:2 http://deb.debian.org/debian bookworm-updates InRelease
7.387   Unable to connect to deb.debian.org:80:
7.391 Err:3 http://deb.debian.org/debian-security bookworm-security InRelease
7.391   Unable to connect to deb.debian.org:80

I’ve run into all combinations of issues trying to understand the proxy and its allow and deny rules. I had already had to add rules because sandboxes can’t natively reach the Docker Model Runner, so I’m not sure if that’s related. I added the domain to the allow-host list:

docker sandbox network proxy nanoclaw --allow-host deb.debian.org

It’s also not clear whether this is additive or whether I have to include the complete list every time I run the command.

I also see it in my logs under allowed requests:
nanoclaw deb.debian.org:80 <default policy> 21:36:51 15-Mar 1

Finally, I run it outside of my sandbox and it builds successfully. So it’s clearly the sandbox setup.

Any thoughts?

I found another post that pointed me to container-platform.log, where I see these seemingly offending lines:

{"component":"gvisor/forwarder","level":"info","msg":"151.101.130.132:80 <- 192.168.65.3:35602: dialing 151.101.130.132:80","time":"2026-03-16T08:58:24.178525400-04:00"}
{"component":"gvisor/forwarder","level":"info","msg":"151.101.130.132:80 <- 192.168.65.3:35602: dial failed: direct external connections are not allowed, all traffic must go through proxy at 127.0.0.1:59082","time":"2026-03-16T08:58:24.178525400-04:00"}
{"component":"gvisor/forwarder","level":"info","msg":"151.101.194.132:80 <- 192.168.65.3:37538: dialing 151.101.194.132:80","time":"2026-03-16T08:58:24.178525400-04:00"}
{"component":"gvisor/forwarder","level":"info","msg":"151.101.194.132:80 <- 192.168.65.3:37538: dial failed: direct external connections are not allowed, all traffic must go through proxy at 127.0.0.1:59082","time":"2026-03-16T08:58:24.178525400-04:00"}
{"component":"gvisor/forwarder","level":"info","msg":"151.101.2.132:80 <- 192.168.65.3:40004: dialing 151.101.2.132:80","time":"2026-03-16T08:58:24.179089800-04:00"}
{"component":"gvisor/forwarder","level":"info","msg":"151.101.2.132:80 <- 192.168.65.3:40004: dial failed: direct external connections are not allowed, all traffic must go through proxy at 127.0.0.1:59082","time":"2026-03-16T08:58:24.179089800-04:00"}
{"component":"gvisor/forwarder","level":"info","msg":"151.101.66.132:80 <- 192.168.65.3:60964: dialing 151.101.66.132:80","time":"2026-03-16T08:58:24.179603400-04:00"}
{"component":"gvisor/forwarder","level":"info","msg":"151.101.66.132:80 <- 192.168.65.3:60964: dial failed: direct external connections are not allowed, all traffic must go through proxy at 127.0.0.1:59082","time":"2026-03-16T08:58:24.179603400-04:00"}

Do you have a Docker daemon inside the sandbox? As far as I know, only private IP ranges are blocked by default and the Debian repos should be accessible, but I don’t know what happens when you run a Docker daemon in a sandbox. The daemon creates additional networks, so I can imagine it routing the apt requests to the wrong endpoint, but I would have to try it to understand clearly.

You could try running

docker sandbox network log

It lists the rules as well. I only tried changing network rules once and had the same question, but I only defined the new rule and, if I remember correctly, it was added. I didn’t have to define all the rules again.

The rules are additive. You can check the resulting rule-set:

  • Windows: %USERPROFILE%\.docker\sandboxes\vm\nanoclaw\proxy-config.json
  • Mac/Linux: ~/.docker/sandboxes/vm/nanoclaw/proxy-config.json

It’s worth mentioning that manual modifications to the file won’t be applied. Please use the docker sandbox network proxy command.
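If you want to sanity-check the file, here is a self-contained sketch. Note the field names (defaultPolicy, allowedDomains, bypassDomains) are my guesses based on the terms used in this thread, not a documented schema; inspect your actual file to confirm.

```shell
# Self-contained sketch: the real file lives at
# ~/.docker/sandboxes/vm/nanoclaw/proxy-config.json (Mac/Linux).
# The JSON layout below is an assumption for illustration only.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
{
  "defaultPolicy": "allow",
  "allowedDomains": ["deb.debian.org"],
  "bypassDomains": ["dl-cdn.alpinelinux.org:443"]
}
EOF
# Summarize the rule set with python3's stdlib json module:
python3 - "$CFG" <<'PY'
import json, sys
cfg = json.load(open(sys.argv[1]))
print("default policy:", cfg.get("defaultPolicy"))
print("allowed:", ", ".join(cfg.get("allowedDomains", [])))
print("bypass:", ", ".join(cfg.get("bypassDomains", [])))
PY
```

Point CFG at the real path instead of the temp file to check your own rules.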

Afaik, all sandboxes are based on Ubuntu 25.x. Can you share the output of cat /etc/os-release from the sandbox?

Direct outgoing traffic is not allowed in the Sandbox. By default, the values for http_proxy and https_proxy should be configured properly.
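A quick way to verify that from inside the sandbox (a trivial check; it needs bash because of the ${!v} indirect expansion):

```shell
# Print each conventional proxy variable, marking any that are unset.
for v in http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY; do
  printf '%s=%s\n' "$v" "${!v:-<unset>}"
done
```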


A domain that should be accessed from the sandbox must exist in the allowedDomains list.

If a container is started in a sandbox (e.g. an MCP server), it needs to declare the proxy environment variables as well. For http traffic, the domain must exist in the allowedDomains list; for https traffic, it must exist in the bypassDomains list.

Isn’t that the case only when the default policy is changed from “allow”? Some domains are added to the allowlist by default and some to the blocked list, but domains that are not blocked are accessible by default, like example.org or any other, as long as the default policy is allow. If not, the sandbox still allows some domains, like package managers.

EDIT:

I should test it because I started to think it could be different for each sandbox.

You are right, I forget that allow is the default setting.

Yes, each sandbox is intended so that the agents can start and stop containers. That Docker daemon is separate from the one on my computer: it’s isolated to the microVM, and the containers started within the sandbox live inside it.

My understanding is that all outbound requests go through the proxy between the microVM and the host computer, so a request to deb.debian.org should be checked against the proxy rules and allowed. I did set --policy allow just in case. I can even curl deb.debian.org and get a response.

PRETTY_NAME="Ubuntu 25.10"
NAME="Ubuntu"
VERSION_ID="25.10"
VERSION="25.10 (Questing Quokka)"
VERSION_CODENAME=questing
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=questing
LOGO=ubuntu-logo

Thanks for the proxy-config.json location. I can confirm that deb.debian.org is in the allow list as well as a default policy of allow.


| My computer
| |----------
| | Sandbox (Isolated microVM)
| | |------------
| | | Docker ← Can’t reach update(s)

Essentially the Docker container within the microVM can’t access the apt repositories, which seems like a fatal flaw unless I’m missing a step. I feel like that’s the first call in 90%+ of Dockerfiles. I can successfully run that command within the sandbox itself; however, that fetches from archive.ubuntu.com. Maybe I’ll have to fork nanoclaw and update the image the Dockerfile pulls from.

I totally forgot that the agent container in the sandbox VM has access to a Docker daemon on its own host.

I tried to build an image now in a shell sandbox and got the same error. I will try (or try to try) again tomorrow.

I was confused about why Debian mirrors appear even though the sandbox itself is Ubuntu-based. Now it makes sense: the problem is not with the sandbox but with additional containers created in the sandbox.

Actually the sandbox consists of several elements:

  • the microVM
  • a dedicated docker engine
  • a sandbox agent container (this is where you end up when you run a sandbox)

When you run a container in the sandbox agent container, it is a sibling container on the same dedicated docker engine.

Like I wrote earlier: if you use a container (or build an image), the HTTP proxy variables must be set on it as well. Been there, tried it, and it works. I wrote what I learned about it in my previous post.

A container inside the sandbox microVM must be started like this:

docker run \
  -e http_proxy \
  -e https_proxy \
  -e no_proxy \
  -e HTTP_PROXY \
  -e HTTPS_PROXY \
  -e NO_PROXY \
  <other options> <image>

The same proxy settings must be passed to a build with --build-arg.

While I’m at it, here are examples based on alpine in a shell sandbox:

Run a container:

agent@openclaw:openclaw$ docker run -ti \
>   -e http_proxy \
>   -e https_proxy \
>   -e no_proxy \
>   -e HTTP_PROXY \
>   -e HTTPS_PROXY \
>   -e NO_PROXY \
> alpine
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
589002ba0eae: Pull complete
9e595aac14e0: Download complete
caa817ad3aea: Download complete
Digest: sha256:25109184c71bdad752c8312a8623239686a9a2071e8825f20acb8f2198c3f659
Status: Downloaded newer image for alpine:latest
/ # apk update
WARNING: updating and opening https://dl-cdn.alpinelinux.org/alpine/v3.23/main/x86_64/APKINDEX.tar.gz: TLS: server certificate not trusted
WARNING: updating and opening https://dl-cdn.alpinelinux.org/alpine/v3.23/community/x86_64/APKINDEX.tar.gz: TLS: server certificate not trusted
2 unavailable, 0 stale; 16 distinct packages available
### executed on the host: docker sandbox network proxy openclaw --bypass-host dl-cdn.alpinelinux.org:443
/ # apk update
v3.23.3-291-g38fc2a38498 [https://dl-cdn.alpinelinux.org/alpine/v3.23/main]
v3.23.3-309-g27bcd6245a0 [https://dl-cdn.alpinelinux.org/alpine/v3.23/community]
OK: 27575 distinct packages available
/ #

N.B.: the line ### executed on the host: docker sandbox network proxy openclaw --bypass-host dl-cdn.alpinelinux.org:443 indicates that I added a bypass rule for that host. This is required for https traffic.

Here is an example that shows how an image can be built in the sandbox:

DF=$(cat <<EOF
FROM alpine
RUN apk update
EOF
)
echo "$DF" | docker build \
  --build-arg http_proxy \
  --build-arg https_proxy \
  --build-arg no_proxy \
  --build-arg HTTP_PROXY \
  --build-arg HTTPS_PROXY \
  --build-arg NO_PROXY \
  -t test -
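One detail worth spelling out: a --build-arg (or -e) flag given with just a name and no =value makes Docker copy the value from the calling shell’s environment, so the flags above are shorthand for writing each value out. A tiny sketch of the equivalence (the proxy URL is made up for illustration):

```shell
# A value-less --build-arg/-e copies the variable from the caller's
# environment. These two forms are therefore equivalent:
export http_proxy=http://127.0.0.1:59082
echo "shorthand: --build-arg http_proxy"
echo "explicit:  --build-arg http_proxy=$http_proxy"
```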

Thank you both for your feedback; the flags did fix the problem.

The networking aspect runs a little deeper than my knowledge, and maybe a fix is coming once sandboxes are no longer tagged experimental. It seems like the Docker daemon should route all traffic to the “internet” of the microVM, which goes through the sandbox’s proxy.

A few days ago a new script to set this up was created; I’ll have to investigate what it does. However, it uses a whole different repo: github.com/qwibitai/nanoclaw-docker-sandbox-windows (“A lightweight alternative to OpenClaw that runs in containers for security. Connects to WhatsApp, Telegram, Slack, Discord, Gmail and other messaging apps, has memory, scheduled jobs, and runs directly on Anthropic’s Agents SDK”).

If I wait a week these problems will be solved and nanoclaw will be old news :rofl:

Their sandbox-specific script does exactly what you suggested:

# Forward proxy env vars for sandbox builds
BUILD_ARGS=""
[ -n "${http_proxy:-}" ] && BUILD_ARGS="$BUILD_ARGS --build-arg http_proxy=$http_proxy"
[ -n "${https_proxy:-}" ] && BUILD_ARGS="$BUILD_ARGS --build-arg https_proxy=$https_proxy"
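For completeness, here is the same pattern extended to the remaining conventional proxy variables. This is my own sketch, not part of their script: the uppercase variants and no_proxy are my additions, and it needs bash for the ${!v} indirection.

```shell
# Forward all six conventional proxy variables to the build, skipping
# any that are unset. Example value so the sketch runs outside a sandbox:
http_proxy=${http_proxy:-http://127.0.0.1:59082}
BUILD_ARGS=""
for v in http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY; do
  [ -n "${!v:-}" ] && BUILD_ARGS="$BUILD_ARGS --build-arg $v=${!v}"
done
# Show the command that would be run:
echo "docker build$BUILD_ARGS -t myimage ."
```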

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.