```dockerfile
# Use an official Python runtime as a base image
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]
```
There is no extension on the Dockerfile, right?
I ran the commands in the exact sequence given in the tutorial, then went back to do it again. When I make changes, do I need to bump the tag and push every time? Or can I just make changes and only create a new tag when I want to? app.py is an exact copy of the tutorial code.
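For what it's worth, here is a sketch of the rebuild cycle. `username/get-started:part2` is a placeholder for your own Docker Hub repo and tag. Reusing the same tag works for iterating, because a redeploy pulls whatever the tag currently points to; a new tag is only needed when you want the previous image to stay addressable (e.g. for rollbacks). The guards just make the sketch a no-op on machines without docker or a Dockerfile in the current directory.

```shell
# Rebuild/push cycle sketch; "username/get-started:part2" is a placeholder.
# Guarded so this degrades gracefully where docker (or a Dockerfile) is absent.
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  # Rebuild and push under the SAME tag -- no need to bump it each time.
  docker build -t username/get-started:part2 .
  docker push username/get-started:part2
  # Redeploy; the swarm pulls whatever the tag points to now.
  docker stack deploy -c docker-compose.yml getstartedlab
else
  echo "docker CLI or Dockerfile not found; commands shown for reference only"
fi
```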
When I try docker swarm init after restarting I get:
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
That happens when you run docker swarm init more than once. If you need to restart the tutorial, you must leave the initialized swarm with:
$ docker swarm leave --force
And then you may run docker swarm init again. You should then get the following output:
$ docker swarm init
Swarm initialized: current node ({node_name}) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token {token} \
{ip}:{port}
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Still the same response. I only ever get the same container ID; it never load balances. @denisroy. Following the tutorial step by step. docker stack ps getstartedlab shows I have 5 instances, but the browser or curl will only ever show the same string.
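One quick way to see whether the routing mesh is actually rotating replicas — a sketch, assuming the service publishes port 4000 as in the tutorial's docker-compose.yml:

```shell
# Hit the published port several times; if load balancing works, the
# "Hostname:" line should cycle through different container IDs.
for i in 1 2 3 4 5; do
  curl -s --max-time 2 http://localhost:4000/ | grep Hostname \
    || echo "no response from localhost:4000"
done
```

If every request reports the same hostname, you are most likely hitting a single container directly rather than the swarm's ingress port.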
I'm also having problems. It was working at part 2, but part 3, which introduces Docker swarm, doesn't work. My docker-compose.yml and Dockerfile are the same as the tutorial's. I'm getting the usual connection refused at localhost:4000 and localhost:80.
I am using Ubuntu 16.04. I have Apache installed, so it was using port 80, but I brought the Apache service down and the swarm still doesn't work. The services are up, though, because commands like docker container ls, docker service ls, and docker inspect respond just like in the tutorial. So I think it's a port mapping issue.
I ran into the same issue. I haven't investigated much yet, but I was getting nothing out of localhost (hitting port 80).
So, if you want to progress in the tutorial, go into docker-compose.yml and change:
"80:80"
to
"4000:80"
Then hit localhost:4000
I'll update if I find something, but I believe my port 80 might be busy handling something else.
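If you suspect port 80 is taken, here's a quick check. This assumes Linux with iproute2's `ss`; on macOS, `lsof -i :80` does the same job:

```shell
# List listening TCP sockets and look for one bound to port 80.
if ss -ltn 2>/dev/null | grep -q ':80 '; then
  echo "something is already listening on port 80"
else
  echo "port 80 looks free"
fi
```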
I’ve had the exact same issue with the latest version of the tutorial at part 4, “Accessing your Swarm”.
I had a clean run on the tutorial (no extra images or containers, correct port configuration on docker-compose), I tried restarting Docker and no love. What do I do?
My files are quoted here, default from the tutorial.
App.py

```python
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"),
                       hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
```
docker-compose.yml

```yaml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: alacariere/get-started:part2
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
```
Dockerfile

```dockerfile
# Use an official Python runtime as a parent image
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]
```
Terminal response from creating VMs until swarm created
Machine info
I'm on a mid-2015 15-inch Mac running High Sierra. Help is appreciated. Also, please note that the blockquotes wreck all of the spacing and comments from the files, but as I am not able to attach them, you have to just trust that I literally copied and pasted from the tutorial.
Ran into the same issue for Part 4. I tried on both Win10 and Ubuntu 16.04. Why has nobody posted a solution in half a year? It is just an easy tutorial.
Ok, I found a solution. Rather than going to any of the IPs coming from docker-machine or docker swarm, I just ran ifconfig on my host machine and found a vboxnet[number] that I guessed corresponded to the virtualbox machine that docker-machine was running. curling to this IP address (versus localhost or any of the other ones) was successful.
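For anyone else hunting for the right address: docker-machine can report the VM's IP directly, which should match the vboxnet guess above. A sketch, where `myvm1` is the machine name used in the tutorial — substitute yours:

```shell
# Print the IP of the tutorial VM and try curling it; "myvm1" is the
# tutorial's machine name. Guarded so this is a no-op where docker-machine
# isn't installed.
if command -v docker-machine >/dev/null 2>&1; then
  ip=$(docker-machine ip myvm1)
  echo "VM address: $ip"
  curl -s --max-time 2 "http://$ip:4000/" || echo "no response from $ip:4000"
else
  echo "docker-machine not found; commands shown for reference only"
fi
```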
I don’t get any errors in the console, but the URL simply does not work (generic failed to connect errors). Used curl and different browsers. I’ve tried a variety of ports, including all of the default values, and nothing works.
Hi, I have the same issue.
The reason you get the same container ID is, in my humble opinion, that you are reaching the only Docker instance running, not the load balancer. If you stop that container you will get no page at all.
In my case the problem with the swarm was that the instances were starting and stopping, so I got no answer at all. Everything was working fine, but the answer to
docker service ps getstartedlab_web
was the following:
```
ID            NAME                 IMAGE                         NODE            DESIRED STATE  CURRENT STATE                ERROR                              PORTS
xikxn3kazcpk  getstartedlab_web.1  <username>/get-started:part2  <machine-name>  Shutdown       Rejected about a minute ago  "mkdir /var/lib/docker: read-o…"
pqenkm4dx74d  getstartedlab_web.2  <username>/get-started:part2  <machine-name>  Shutdown       Rejected about a minute ago  "mkdir /var/lib/docker: read-o…"
ky5a7pbxoofs  getstartedlab_web.3  <username>/get-started:part2  <machine-name>  Shutdown       Rejected about a minute ago  "mkdir /var/lib/docker: read-o…"
mxcx9gvfgwqr  getstartedlab_web.4  <username>/get-started:part2  <machine-name>  Shutdown       Rejected about a minute ago  "mkdir /var/lib/docker: read-o…"
thtdwy3i405h  getstartedlab_web.5  <username>/get-started:part2  <machine-name>  Shutdown       Rejected about a minute ago  "mkdir /var/lib/docker: read-o…"
```
The problem was that a directory could not be written.
I found this was my real problem: Link to stackoverflow solution. ----> I had installed Docker via snap on Ubuntu 18.04, and there was a problem with that.
I removed it and installed via apt, and it worked fine. This link could help to get things working.
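A quick way to check whether your Docker came from snap on Ubuntu, which is what triggered the read-only /var/lib/docker errors above:

```shell
# Show where the docker binary lives and whether snap manages it.
# A snap-confined Docker typically resolves to /snap/bin/docker.
if command -v docker >/dev/null 2>&1; then
  command -v docker
  snap list docker 2>/dev/null || echo "docker is not managed by snap (or snap is absent)"
else
  echo "docker CLI not found"
fi
```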
While troubleshooting this issue, I got a flashback that I had used --advertise-addr while creating the swarm on my Mac. I deleted the swarm and recreated it without the --advertise-addr option, and then I was able to access the service on the swarm using localhost.
I have yet to dig deeper into the technicalities of how this option becomes an issue.