Hello,
I’m new to Docker and am currently evaluating its use so that we can integrate it into our CI and run automated integration tests during builds.
I just set up a private registry, but I also realized that when working with images stored on that private registry, we always have to use the full image name containing the URL + port number (when pulling, but also when referencing a parent image in a Dockerfile FROM).
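To illustrate (the host, port, project, and image names here are only placeholders), a pull and a parent-image reference currently have to spell out the registry location in full:

docker pull my.server.mydomain:1234/my_project/my_image:1.0

and in the Dockerfile:

FROM my.server.mydomain:1234/my_project/my_image:1.0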
Is this really the only way to work, or is there some Docker configuration that lets us define the location of a named registry?
For me this is a show stopper; I can’t imagine building hundreds of Dockerfiles that all depend on the URL + port of my registry.
What if at some point it moves?
Also, using only Docker Hub is impossible for us: we are a company, and some of the content of our images simply can’t live outside the company.
I would imagine image naming working something like my_registry_name/my_project_name/my_image_name, with Docker then configured to map that registry name to the URL + port of my registry: my_registry_name -> my.server.mydomain:1234.
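Purely to sketch the idea (no such aliasing mechanism exists in Docker today, this is only what I am imagining), a Dockerfile would then reference the logical registry name, while the daemon configuration would hold the actual location:

FROM my_registry_name/my_project_name/my_image_name
# my_registry_name would resolve, via daemon configuration, to my.server.mydomain:1234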
I come from the Java world and would really appreciate something similar to the way Maven works.
What are the possible workarounds? Is Docker planning to change the way this works?
Hello,
The thing that I do is run my registry on the standard port 443. That way, I can tag my images like this:
reg.example.com/foobar
If I ever want to change any part of the name of my registry (port number, DNS name, whatever), I can retag all my images and repush them:
docker pull reg.example.com:5000/foobar
docker tag reg.example.com:5000/foobar registry.example.com/foobar
docker push registry.example.com/foobar
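If there are many repositories, the same three steps can be scripted. Here is a minimal sketch, assuming you already have the list of repository names (foobar and baz are just placeholders):

for repo in foobar baz; do
  docker pull reg.example.com:5000/$repo
  docker tag reg.example.com:5000/$repo registry.example.com/$repo
  docker push registry.example.com/$repo
done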
At the end of the day, a private registry is a web service that needs to be referred to somehow. Part of setting one up will be planning to use a standard port and a DNS name that can be as permanent as you want your image names to be.
/Jeff
Yes, and what about the hundreds of Dockerfiles that contain the URL of the old registry location? Do you seriously mean that the official way of doing this is to go and update those hundreds of files before I can continue to use them, just because some server moved?
Yes, the URL needs to be referenced somehow, but that could happen in the Docker daemon’s configuration rather than by embedding the URL in every image name.
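As far as I can tell, the closest thing that exists today is the registry-mirrors setting in /etc/docker/daemon.json, and that only applies to pulls from Docker Hub, not to an arbitrary named registry (the mirror URL below is just a placeholder):

{
  "registry-mirrors": ["https://my.server.mydomain:1234"]
}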
This is a common issue that comes up with all sorts of software repositories. Say I want to run a GitLab server for all of my organization’s Git repos. I’ll have potentially thousands of references to the DNS name of my GitLab server everywhere: developer machines, CI builds, and documentation.
Similarly, if I want to run an Artifactory server, a PyPI server, a RubyGems server, a local Ubuntu package mirror, or even a public website, I’ll run into the same problem.
The solution in all these cases is to get a DNS name and always point that DNS name at wherever the service is currently hosted. If I have to change the backend deployment, no big deal: just repoint DNS. It’s really not so different when it comes to a self-hosted Docker registry.
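For example, the name that everything references can be a stable alias whose target changes when the backend moves; a hypothetical zone entry might look like:

registry.example.com.   300   IN   CNAME   new-backend-host.example.com.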
I am curious: what sort of use case do you have that makes you worry you will need to change the name of your Docker registry? I know I’ve been part of large organizations where getting a DNS entry set up can be an arduous process.
Cheers!
Well, I searched and found quite a lot of people who agree with me; see https://github.com/docker/docker/issues/6805
If I take your example of Artifactory, you will indeed need to configure the URL of your Artifactory server in a lot of places, but that URL is not baked into the artifacts themselves, nor into the source code you store in Artifactory.
Moving the Artifactory server won’t require you to rebuild every artifact, because artifacts are not tied to the server’s URL (which is exactly what Docker currently requires). Nor will you need to change the source code because it contains the URL. You will only need to reconfigure the clients, because server location and artifact naming are kept separate.
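To make the comparison concrete: with Maven, an artifact is addressed only by its coordinates, and the repository URL lives in the client’s settings.xml, so the server location never appears in the artifact name (the coordinates below are placeholders):

mvn dependency:get -Dartifact=com.mycompany:my-lib:1.0.0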