How to redirect command output from docker container


(Ickata) #1

Hi folks,

What’s the best way to redirect a command’s STDOUT/STDERR to a file, other than running the command like this:

bash -c "node cluster.js >> /var/log/cluster/console.log 2>&1"

What I don’t like about the above is that it results in one additional process, so I end up with two processes instead of one, and my master cluster process is not the one with PID 1.

If I try

exec node cluster.js >> /var/log/cluster/console.log 2>&1

I get this error:

Error response from daemon: Cannot start container node: 
exec: "node cluster.js >> /var/log/cluster/console.log 2>&1": executable file not found in $PATH

I am starting my container via docker-compose:

version: '3'

services:
   node:
      image: custom
      build:
         context: .
         args:
            ENVIRONMENT: production
      restart: always
      volumes:
         - ./logs:/var/log/cluster
      command: bash -c "node cluster.js >> /var/log/cluster/console.log 2>&1"
      ports:
         - "443:443"
         - "80:80"

When I run docker-compose exec node ps -fax | grep -v grep | grep node, I get one extra process:

 1 ?        Ss     0:00 bash -c node cluster.js >> /srv/app/cluster/cluster.js
 5 ?        Sl     0:00 node cluster.js
15 ?        Sl     0:01  \_ /usr/local/bin/node /srv/app/cluster/cluster.js
20 ?        Sl     0:01  \_ /usr/local/bin/node /srv/app/cluster/cluster.js

As you can see, bash -c starts one process, which in turn forks the main node process. In a Docker container, the process started by the command always has PID 1, and that’s what I want the node process to be. Instead it ends up as PID 5, 6, etc.


(David Maze) #2

Do nothing, and use docker logs to review the logs. (This makes more sense in a cluster context like Kubernetes: in plain Docker it’s already hard to get at a file, in Kubernetes it’s almost impossible, but kubectl logs works great.)
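
For example, with the compose file above (service name node):

# follow the main process's stdout/stderr
docker logs -f <container>

# or, through compose, by service name
docker-compose logs -f node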

If you want the logs to go somewhere else, manage that outside the container. You could set up a log forwarder on the Docker daemon, or docker run containers with a shell redirect from a script on the host, managed by your init system.
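
As a sketch of the first option, compose lets you pick a per-service logging driver; the syslog endpoint shown here is a hypothetical address:

services:
   node:
      logging:
         driver: syslog
         options:
            syslog-address: "udp://192.0.2.42:514"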

You can build your image with an entrypoint script like the following:

#!/bin/sh

# redirect stdout and stderr to files
exec >/log/stdout.log
exec 2>/log/stderr.log

# now run the requested CMD without forking a subprocess
exec "$@"

Getting the log files out of the container is left as an exercise for the reader.

(An entrypoint script that does some setup, then exec "$@" so that the main container process is still PID 1, is a pretty typical pattern.)
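
For completeness, a minimal Dockerfile sketch of that pattern (base image and file names are illustrative); because both ENTRYPOINT and CMD use exec form, the script receives the CMD as "$@" and exec hands PID 1 straight to node:

FROM node:18
WORKDIR /srv/app
COPY . .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/srv/app/entrypoint.sh"]
CMD ["node", "cluster.js"]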


(Ickata) #3

Hi,

Thanks for the reply. I managed to solve the issue by creating a bash script that starts my node cluster with exec:

#!/bin/bash
# start-cluster.sh
exec node cluster.js >> /var/log/cluster/console.log 2>&1

And in docker-compose file:

  # docker-compose.yml
  command: bash -c "./start-cluster.sh"

Starting the cluster with exec replaces the shell with the node process, so it always has PID 1 and my logs are written to the file.
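
Note that the script has to be executable (chmod +x start-cluster.sh) for ./start-cluster.sh to run. To confirm from the host that node now holds PID 1, you can reuse the earlier check; node should appear at the top of the tree:

docker-compose exec node ps -fax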