Issue: Unable to Pull Model Manifest from Ollama Registry
Description:
When attempting to run the command ollama run llama2 on the Raspberry Pi, an error occurred while pulling the model manifest from the Ollama registry. The error message indicates a timeout while attempting to establish a TLS handshake with the Ollama registry server.
Steps to Reproduce:
Run the command ollama run llama2 on the Raspberry Pi.
Expected Behavior:
The command should pull the model manifest from the Ollama registry successfully, allowing the specified model (llama2) to be executed.
Actual Behavior:
The command failed with the following error:
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": net/http: TLS handshake timeout
Troubleshooting Steps Taken:
Checked the network connection on the Raspberry Pi by pinging external servers.
Verified firewall settings to ensure outgoing connections to the Ollama registry on port 443 are allowed (example checks shown after this list).
Attempted the command again after some time to rule out temporary issues.
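For reference, checks along these lines can be run on the Raspberry Pi (registry.ollama.ai and port 443 are taken from the error message above; the exact commands are only a sketch and may need adjusting for your setup):
ping -c 4 registry.ollama.ai
curl -v --max-time 30 https://registry.ollama.ai/v2/
openssl s_client -connect registry.ollama.ai:443 -servername registry.ollama.ai
The first command confirms basic reachability and DNS resolution, the second tries the same HTTPS endpoint the manifest pull uses, and the third shows where the TLS handshake stalls or fails.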
How did you install Docker? Can you share a link to the instructions you followed?
The output of the following commands can give us some idea and help us recognize an incorrectly installed Docker:
docker info
docker version
Review the output before sharing and remove any confidential data that appears (a public IP, for example).
Mine was a DNS issue. I ran the docker command that @rimelek posted and it returned "could not resolve host". I changed the nameserver listed in /etc/resolv.conf to my local DNS server’s IP, and now I’m able to pull models again without issues.
BTW, I’m running Ollama on Docker in WSL. Hope this helps.
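In case it helps, this is roughly what the check and the fix looked like (the 192.168.1.1 address below is just a placeholder; use the IP of your own local DNS server):
nslookup registry.ollama.ai
sudo sh -c 'echo "nameserver 192.168.1.1" > /etc/resolv.conf'
Keep in mind that WSL regenerates /etc/resolv.conf by default, so the change may not survive a restart unless you also set generateResolvConf = false in the [network] section of /etc/wsl.conf.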