Resurrected files in Swarm?

I’m running Swarm on Ubuntu 22.04.2. I deployed a service using a Docker Compose file. The service has two bind mounts: a directory and a file.

I removed the service and deleted the locally created files via the bind mounts. When I redeploy the service, it still references the deleted files, and when I look in the local directory where the deleted files were, they are back.

My online search turned up an idea about inodes, but those seem to be different issues. Still, inodes sound like they could be involved.

Has anyone seen this issue? Could it be an inode issue or something else?

I’m completely dumbfounded at the moment.

Nope, never, unless the container’s entrypoint script actually creates the files while starting the container, or the application inside the container does it itself.
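A quick way to check what the image declares is docker image inspect; for example (replace <image> with the image your service uses):

    docker image inspect <image> \
      --format 'Entrypoint: {{json .Config.Entrypoint}}, Cmd: {{json .Config.Cmd}}'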

What inode issue are you referring to? A corruption of the journal that holds the mapping between directory entries and inodes? There should be an fsck.{filesystem} for your filesystem: run it and see if something is off.
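For example, on ext4 something like this would do a read-only check (the device name is just an example; find yours with findmnt, and run the check while the filesystem is not mounted, e.g. from a rescue system):

    # find the device backing the directory in question
    findmnt -no SOURCE /home
    # read-only check: -f forces it, -n answers "no" to every repair prompt
    sudo fsck.ext4 -fn /dev/sda1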

I do execute a command: command: "bash -c ...."

Since I have this command, will it “bring back” the old file? I changed some environment variables, which should make the contents of the file different from before. However, it’s the same file as before.

Can you explain what’s happening or point me to some information on this?

Uhm, you shared nothing to work with… Help me to help you: unless I understand what you do and how you do it, everything would be a wild guess on my side at best.

Sorry, hopefully this helps.

The file in question is created in /tmp/logs when update_run.sh is executed.

docker-compose.yml

version: '3.7'

services:

  kafka01:
    image: confluentinc/cp-kafka:7.3.0
    hostname: kafka01
    ports:
      - "19092:19092"
      - "19093:19093"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://kafka01:9092,PLAINTEXT_HOST://<ip_address_of_host>:19092'
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_NODE_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@kafka01:19093'
      KAFKA_LISTENERS: 'PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:19092,CONTROLLER://0.0.0.0:19093'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/logs'
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.target == 1]
    volumes:
      - /home/base/logs:/tmp/logs
      - /home/base/data/update_run.sh:/tmp/update_run.sh
    command: "bash -c '/tmp/update_run.sh && /etc/confluent/docker/run'"

Command to deploy: docker stack deploy my_stack -c docker-compose.yml

So if you remove the stack and clean /home/base/logs on all nodes that fulfill the constraint node.labels.target == 1, and then deploy the stack again, the file is recreated, even though neither /tmp/update_run.sh nor /etc/confluent/docker/run (before it starts Kafka) itself creates the files in the folder?
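In other words, something like this, with the stack name and paths taken from your post (the sleep is just to give the service’s tasks time to actually stop):

    docker stack rm my_stack
    # give the service tasks a moment to shut down
    sleep 10
    # on every node where node.labels.target == 1:
    rm -rf /home/base/logs/*
    # then redeploy
    docker stack deploy my_stack -c docker-compose.yml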

Are you aware that if the image has an entrypoint script declared as ENTRYPOINT in the Dockerfile used to create it, what you declared as command actually becomes arguments to that entrypoint script?

You might want to try this to make sure there is no entrypoint script interfering with your solution:

    ...
    entrypoint: ["bash","-c"]
    # list form: keeps the whole shell snippet as one argument to `bash -c`
    command: ["/tmp/update_run.sh && /etc/confluent/docker/run"]
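To verify what the service actually ends up running, you can inspect it after deployment (assuming the default <stack>_<service> naming, so my_stack_kafka01 here):

    docker service inspect my_stack_kafka01 \
      --format 'Entrypoint: {{json .Spec.TaskTemplate.ContainerSpec.Command}}, Args: {{json .Spec.TaskTemplate.ContainerSpec.Args}}'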

Didn’t work for me.

Also, I think I found the image metadata in /var/lib/docker/image/overlay2/imagedb. It didn’t have an entrypoint defined, if I’m reading it correctly: "Entrypoint":null.
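For what it’s worth, the same field can be read without digging through /var/lib/docker:

    docker image inspect confluentinc/cp-kafka:7.3.0 --format '{{json .Config.Entrypoint}}'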

Meaning what exactly?

I got the same results as before.

I think the issue has been resolved. It was an environment configuration issue.