After the above procedure, the icon is red and I didn’t find a way to return it to its normal status.
The VM and Docker itself are still OK: docker run -it busybox bin/sh works just fine!
The only way to reset the DD icon is to exit and start Docker again.
Expected behavior
IMO there should be a “Status” command that performs some checks, e.g.:
VM status
docker version
create / start / stop / remove a dummy container
If the tests pass, set the icon back to normal. (A rough sketch of such a check follows below.)
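Purely as an illustration, here is a minimal PowerShell sketch of what such a status check might do. The script is hypothetical (no such command exists in DD today); the VM name MobyLinuxVM is taken from the log below:

```powershell
# Hypothetical status check, not part of Docker for Windows.
# Assumes the Hyper-V module is available and the VM is named MobyLinuxVM.
$vm = Get-VM -Name MobyLinuxVM -ErrorAction SilentlyContinue
if (-not $vm -or $vm.State -ne 'Running') { Write-Error 'VM is not running'; exit 1 }

docker version
if ($LASTEXITCODE -ne 0) { Write-Error 'docker version failed'; exit 1 }

# Create, run and remove a dummy container in one step.
docker run --rm busybox true
if ($LASTEXITCODE -ne 0) { Write-Error 'dummy container failed'; exit 1 }

Write-Output 'All checks passed - the icon could be reset to normal'
```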
Information
Log:
[11:13:59.617][HyperV ][Info ] Hyper-V Windows feature is enabled
[11:13:59.620][PowerShell ][Info ] Run script with parameters: -Destroy True...
[11:14:01.607][HyperV ][Info ] Removing (potentially) existing mount authentication token
[11:14:02.033][Notifications ][Error ] Error: Failed to destroy: Failed to destroy: Fatal error: Failed to destroy VM "MobyLinuxVM" and switch "DockerNAT": Es wurde keine Integrationskomponente mit dem angegebenen Namen gefunden..
Message: Es wurde keine Integrationskomponente mit dem angegebenen Namen gefunden.
(Translation of the German message: “No integration component with the specified name was found.”)
[11:14:02.102][PowerShell ][Info ] Run script...
Hm, I can’t reproduce the problem. When I delete the VM in Hyper-V Manager and start Docker anew, Docker re-creates the VM and things are fine. Resetting also works fine.
Hm, I know it’s not a particularly good solution, but could you try uninstalling, making sure the VM is gone (in Hyper-V Manager), and then re-installing beta-6?
We’re working on making the auto-upgrade smoother, sorry about the breakage.
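If it helps, the same check can be done from an elevated PowerShell prompt instead of the Hyper-V Manager GUI; both are standard Hyper-V cmdlets:

```powershell
# Both commands should return nothing once the VM and switch are really gone.
Get-VM -Name MobyLinuxVM -ErrorAction SilentlyContinue
Get-VMSwitch -Name DockerNAT -ErrorAction SilentlyContinue
```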
Uninstall / re-install didn’t solve the problem.
After uninstalling I had a look in the registry. … It seems to be all clean.
Are there any local settings or lock files (e.g. the “share drive C:\” setting) that might brick the system?
Just to be sure: I do have 3 virtual switches.
external switch, with a connection to the internet
DockerNAT, which Docker Desktop installed
internal switch … manually created, used with Kitematic because it’s the first one the system finds
this internal switch has shared internet access via the external switch
There should be no “abnormal” DHCP settings; everything is Windows standard.
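For completeness, this is how the switches can be listed from PowerShell (standard Hyper-V cmdlet):

```powershell
# List all Hyper-V virtual switches and their types
Get-VMSwitch | Format-Table Name, SwitchType, NetAdapterInterfaceDescription
```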
Kitematic and Docker Toolbox won’t work with Docker for Windows, so I recommend removing Docker for Windows, Docker Toolbox, and all virtual switches and other leftovers in Hyper-V Manager.
Hi Mario, I noticed this in the logs you kindly provided:
[21:19:38.125][Proxy ][Info ] 2016/04/06 21:19:38 listen udp 10.0.75.1:53: bind: Der Zugriff auf einen Socket war aufgrund der Zugriffsrechte des Sockets unzulässig.
(Translation of the German message: “An attempt was made to access a socket in a way forbidden by its access permissions.”)
With Beta 6 we are running a DNS proxy on the host which binds to port 53 on the VMSwitch IP address. Running this proxy should, for example, enable roaming between different networks.
We try to open the firewall for this port; this sometimes fails, in which case the user has to allow access by acknowledging the pop-up window Windows shows.
The log line above suggests that binding to port 53 was blocked.
Do you have an idea why that might be the case?
[of course our code should handle this case more gracefully]
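One way to see what is already holding UDP port 53 (a guess at a useful diagnostic, using the standard NetTCPIP cmdlets available since Windows 8):

```powershell
# Find UDP endpoints bound to port 53 and the processes that own them
Get-NetUDPEndpoint -LocalPort 53 -ErrorAction SilentlyContinue | ForEach-Object {
    $proc = Get-Process -Id $_.OwningProcess -ErrorAction SilentlyContinue
    '{0}:{1} owned by {2} (PID {3})' -f $_.LocalAddress, $_.LocalPort, $proc.Name, $_.OwningProcess
}
```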
@rneugeba Thanks for the info. I think I know now what was going on. It’s kind of strange, but it makes sense now.
I’ll try to describe the steps involved so you may be able to reproduce the problem:
I started with the following setting:
external switch (physical card) … 10.x.x.x address space
internal switch … manually created for testing, but not used … Windows assigned a 169.x.x.x (APIPA) address
-> after enabling internet sharing with the external switch (see above), the internal switch gets a 192.x.x.x address space
-> everything works fine
-> all Kitematic VMs are shut down
Now Docker Desktop is started:
-> DD has a problem now: it doesn’t get an IP address
-> Hyper-V Manager: the network status shows no IP
-> the UDP bind fails. IMO Windows needs port 53 for the (shared) internal switch and blocks access
Right-click the Docker icon and exit Docker -----> the firewall rule dialog pops up with the default settings:
com.docker.proxy.exe wants to modify …
I used the default settings and allowed access
(the default is: enable for private networks, disable for public networks)
-> the firewall rules are set now, they are kind of hard for users to get rid of, and they seem to block us from then on
-> Removing the “internal switch” doesn’t solve the problem
-> Reset to factory settings doesn’t solve the problem
-> Removing the firewall rules gets us going again \o/
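In case it helps someone else: the rules can also be listed and removed from PowerShell. The “*docker*” filter is an assumption on my part; check what the pop-up actually created on your system before removing anything:

```powershell
# Inspect firewall rules that mention Docker (the display-name filter is a guess;
# verify it only matches the rules created by the pop-up)
Get-NetFirewallRule -DisplayName '*docker*' | Format-Table DisplayName, Enabled, Action

# Remove them once you are sure they are the right ones
Get-NetFirewallRule -DisplayName '*docker*' | Remove-NetFirewallRule
```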
I think there was an issue with the upgrade from earlier beta versions, and when you uninstall, certain artifacts (such as the VM, the switch and some reserved IP addresses) are left behind that prevent a clean reinstallation.
I managed to resolve the problem after a bit of head-scratching and trial and error. Here are the steps I eventually used:
Uninstall Docker for Windows
Hyper-V Manager: Delete the MobyLinuxVM VM and the DockerNAT virtual switch
(I originally reinstalled DfW at this point, but hit problems with NetIP addresses already existing, so…)
PowerShell: Get-NetIPAddress to list pre-assigned IP addresses
PowerShell: Remove-NetIPAddress 10.0.75.1 and any others with DockerNAT in the description
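Spelled out as PowerShell (10.0.75.1 is the DockerNAT address that also shows up in the log above; adjust if yours differs):

```powershell
# List leftover addresses; look for 10.0.75.1 or anything on a DockerNAT interface
Get-NetIPAddress | Format-Table IPAddress, InterfaceAlias, PrefixLength

# Remove the leftover DockerNAT address (-Confirm:$false skips the prompt)
Remove-NetIPAddress -IPAddress 10.0.75.1 -Confirm:$false
```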
You are probably right. … I did encounter the problems the first time I did the automatic update. … As I wrote, for me in the end the firewall rules were the problem.