Docker Community Forums


Getting an AccessDeniedException when trying to write to Docker Volume

Hi everyone,

I am pretty new to Docker and I have an issue that really puzzles me.

I am trying to run a container with a persistent volume on Docker under Marathon. The container persists a single .txt file in order to save its progress. When I deploy the container to test, it works just fine: the file is created if it doesn't exist, and it is read and overwritten successfully. When I deploy it to production using the same configuration (same code, same config, same host), I get a java.nio.file.AccessDeniedException on the first iteration, when it tries to create the file.

What I have tried so far:

  • Running the container in privileged mode - no difference
  • Suspending the test application and re-deploying to production

Any help is much appreciated!

Dockerfile

WORKDIR /opt/klm
COPY target/auditlog.v1.batch.jar ./

RUN chown -R klm:klm /opt/klm

USER klm

CMD java -jar auditlog.v1.batch.jar

marathon.json

{
  "id": "${ID}",
  "cpus": 0.01,
  "mem": 512,
  "disk": 0,
  "instances": 1,
  "constraints": [
    [
      "hostname",
      "UNIQUE"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/srv/data",
        "hostPath": "/srv/data/service-auditbatch-${ENV}/srv/data",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "${IMAGE}",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 8080,
          "servicePort": 0
        }
      ]
    }
  },
  "env": {
    "SPRING_PROFILES_ACTIVE": "${ENV}"
  },
  "healthChecks": [
    {
      "gracePeriodSeconds": 30,
      "intervalSeconds": 30,
      "timeoutSeconds": 20,
      "maxConsecutiveFailures": 3,
      "portIndex": 0,
      "path": "/actuator/health",
      "protocol": "HTTP",
      "ignoreHttp1xx": false
    }
  ],
  "labels": {
    "env": "${ENV}"
  },
  "upgradeStrategy": {
    "minimumHealthCapacity": 0,
    "maximumOverCapacity": 0
  }
}
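One thing worth checking, given the config above: the RUN chown in the Dockerfile only covers /opt/klm, while the bind-mounted /srv/data keeps the ownership of the host directory (/srv/data/service-auditbatch-${ENV}/srv/data), which can differ between environments even on the same host. This is a sketch of that check, not a definitive diagnosis; a temporary directory stands in for the real host path:

```shell
# Sketch: a bind mount keeps the host directory's owner, so the UID the
# container runs as (the "klm" user) must be able to write there. On the
# real host you would compare the owner of
# /srv/data/service-auditbatch-${ENV}/srv/data with the UID printed by
# "docker run --rm $IMAGE id -u"; here a temp directory stands in.
dir=$(mktemp -d)
owner=$(stat -c '%u' "$dir")   # UID that owns the directory (GNU stat)
me=$(id -u)                    # UID of the would-be writer
if [ "$owner" -eq "$me" ]; then
  echo "owner matches: writes should succeed"
else
  echo "owner mismatch: expect AccessDeniedException; chown $me $dir"
fi
rm -rf "$dir"
```

If the UIDs differ on the production host, a chown of the host directory to the container user's UID (or opening group/other write permissions) would be the usual fix.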

First, please start with the example provided in the docs and make sure that docker-compose up works and that you can reach elasticsearch with curl (curl -u elastic:changeme localhost:9200).

After that, you can change the named volume declarations to something like:

volumes:
  - $PWD/esdir1:/usr/share/elasticsearch/data

volumes:
  - $PWD/esdir2:/usr/share/elasticsearch/data

Then create esdir1 and esdir2 in the directory where your docker-compose file resides.
The only additional thing you need to do is chgrp 1000 esdir1 esdir2 and chmod g+rwx esdir1 esdir2.
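Those steps can be sketched together with a quick verification (GNU stat assumed; chgrp usually needs root, so it is left commented here):

```shell
# Create the bind-mount directories next to docker-compose.yml and give
# the elasticsearch container's group (GID 1000 in the official image)
# read/write/execute access.
mkdir -p esdir1 esdir2
# sudo chgrp 1000 esdir1 esdir2   # group ownership; typically needs root
chmod g+rwx esdir1 esdir2
stat -c '%a' esdir1   # the group (middle) digit should now be 7
```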

Then docker-compose up should get you a working cluster with bind-mounted directories from your host (you can inspect the data under those new directories).
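For orientation, here is how those two bind mounts might sit in a two-node compose file. The service names, image tag, and everything except the volume lines are assumptions for illustration, not taken from the original example:

```yaml
# Hypothetical fragment; only the volume lines come from the thread above.
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.16
    volumes:
      - $PWD/esdir1:/usr/share/elasticsearch/data
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.16
    volumes:
      - $PWD/esdir2:/usr/share/elasticsearch/data
```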

With this successfully set up, you can try integrating it into your elastic+kibana+other image docker-compose example.