Managed Volume Plugins for CIFS, GlusterFS and more

I just published my first two Docker managed plugins, for CIFS and GlusterFS, along with a generic one.

It’s my first multi-file Go project, so I am still learning the ropes and mostly using it as an excuse to learn the language. Comments on how I did the coding would be welcome, because I personally don’t like the way I did the polymorphism for the callback methods.
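For context, a managed volume plugin has to implement a fixed set of callbacks (create, mount, unmount, and so on). The sketch below is not from the actual repo — it is a simplified illustration of one common Go approach: embed a shared base type and have each filesystem override only the callbacks that differ.

```go
package main

import "fmt"

// Driver mirrors the general shape of Docker's volume-plugin callbacks
// (simplified; the real API passes request/response structs).
type Driver interface {
	Create(name string) error
	Mount(name string) (string, error)
	Unmount(name string) error
}

// baseDriver holds the logic shared by every filesystem; the
// filesystem-specific types embed it and override Mount.
type baseDriver struct {
	fsType string
}

func (d *baseDriver) Create(name string) error {
	fmt.Printf("create %s volume %q\n", d.fsType, name)
	return nil
}

func (d *baseDriver) Unmount(name string) error {
	fmt.Printf("unmount %s volume %q\n", d.fsType, name)
	return nil
}

// cifsDriver overrides only the mount step.
type cifsDriver struct{ baseDriver }

func (d *cifsDriver) Mount(name string) (string, error) {
	path := "/mnt/volumes/" + name
	fmt.Printf("mount -t cifs //server/%s %s\n", name, path)
	return path, nil
}

// glusterDriver does the same with a gluster-style mount source.
type glusterDriver struct{ baseDriver }

func (d *glusterDriver) Mount(name string) (string, error) {
	path := "/mnt/volumes/" + name
	fmt.Printf("mount -t glusterfs server:/%s %s\n", name, path)
	return path, nil
}

func main() {
	drivers := []Driver{
		&cifsDriver{baseDriver{fsType: "cifs"}},
		&glusterDriver{baseDriver{fsType: "glusterfs"}},
	}
	for _, d := range drivers {
		d.Create("data")
		d.Mount("data")
		d.Unmount("data")
	}
}
```

Embedding plus selective overriding keeps the shared bookkeeping in one place, at the cost of the slightly awkward promotion rules that make Go "polymorphism" feel different from classic inheritance.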

Next steps:

  • The way I wrote the plugin, I can probably tweak it a little to support anything mountable, including NFS. However, I am thinking of making it more general: aside from the Gluster and CIFS specifics, I would add a way for anyone to specify a package to install and the mount-point information as settings, letting the user decide what to do from there (e.g. use Ceph, NFS, sshfs, etc.) without me having to build filesystem-specific versions.
  • I am also trying a major rearchitecture around another kooky idea: instead of using the GlusterFS client, I would use the server. Conceptually the plugin would act as the Gluster server, but talk with the other plugins to form the cluster and use a device mounted on the node.


I have now completed the CIFS plugin, loosely based on the ContainerX approach, and updated the next steps above.

There’s a bug that appears to indicate this plugin may not be compatible with Docker 17.09.

There’s a bug I found and already fixed in how I mounted the /root partition for the cifs-volume-plugin: it allowed the plugin to write to the /root partition. The plugin code does not do any writes, but the fix hardens things a bit more. I’ll release it as soon as I can get systemd to work inside a plugin.

CentOS Managed Volume Plugin added. It supports NFS as a managed plugin.

Looks good, thank you for the contribution. Why do you want to add NFS as well? The local driver already has NFS and NFSv4 support, so imho there is no need… now I’m wondering if the local driver also supports CIFS :grinning:

The local driver’s NFS support seems to require IP addresses and does not work with host names. But then again, I don’t have any use for it at the moment. For CIFS, on the other hand, I am using the netshare plugin, but that’s still a legacy plugin.
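If the IP-only limitation is the blocker, one workaround — assuming you drive `docker volume create` from your own tooling — is to resolve the hostname yourself and pass the resulting IP in the driver’s `addr=` option. A rough sketch (the function name is mine, and `localhost` stands in for a real NFS host):

```go
package main

import (
	"fmt"
	"net"
)

// nfsAddrOption resolves a share's hostname and returns the addr=
// option string the local driver expects (an IP rather than a name).
func nfsAddrOption(host string) (string, error) {
	ips, err := net.LookupHost(host)
	if err != nil || len(ips) == 0 {
		return "", fmt.Errorf("cannot resolve %s: %v", host, err)
	}
	return "addr=" + ips[0], nil
}

func main() {
	opt, err := nfsAddrOption("localhost") // stand-in for the real NFS server
	if err != nil {
		panic(err)
	}
	// The result would then go into something like:
	//   docker volume create -d local -o type=nfs -o o=<opt>,rw -o device=:/exports/data myvol
	fmt.Println(opt)
}
```

The obvious downside is that the resolution happens once, at volume-creation time, so a server whose IP changes later still breaks the mount.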

At first I thought as well that a hostname should be preferred. But if name resolution fails for some reason, what would happen to the storage? In production we use the virtual IP of the MetroCluster and we have never had any issues…


That’s good for static environments, but if you want something that can move from development all the way to the production system, it gets difficult. What I want is something along the lines of a plugin that provides the storage itself while sharing it across the rest of the swarm. That’s the premise behind the rearchitecting of the GlusterFS driver I built: instead of a client it would just be a “server”. Of course, that will likely take me a while, since I am still learning Go. By the time I finish, k8s will have taken over Swarm, making the effort moot :slight_smile:

You are right, K8S is slowly taking over… What a pity, I love Swarm for the simplicity it offers. I hope it can survive for small setups.

How do you mean “for static environments”? We use the NFSv4 local driver through the develop > release > integration > testing > production stages. The develop and release stages belong to one NFS cluster, and integration through production to the other, the real production storage.

I’m also learning golang right now; maybe you are interested in this project: an HAProxy-based swarm-router on the CE edition. I would really appreciate any feedback :wink:

“Static” as in the IP addresses do not change from one environment to another. How did you ensure the IP addresses are the same throughout — without hard-coding them, at least? In my case I relied on the DHCP-server-provided IPs for my VMs, which I can vagrant up and down. I enjoy working with Swarm myself as well; k8s has a large number of followers, but I am not really a big fan. Should we drop Docker Swarm and move to Kubernetes?

I think for NFS to work you need the NFS client installed on the host, right? That’s actually what I wanted to avoid with the managed plugins: instead of having specific packages installed on the host outside the Docker ecosystem, I want them to be part of what Docker provides. I just added a task for myself to provide a standalone NFS plugin.