Scalability usually doesn’t come for free… Every part of the processing chain must be able to handle it properly.
Let’s assume your application depends on session state: the 1st request ends up in replica1, the 2nd request ends up in replica2… how does the 2nd replica know about the session state held by replica1? Hint: Akos made suggestions on how to externalize session state.
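To make that concrete, here is a minimal sketch of one way to externalize session state, assuming a Redis instance that every replica can reach under the hostname "redis" (the hostname, key layout, TTL and the redis-py client are my assumptions, not a given of your setup):

```python
# Minimal sketch of externalized session state: every replica reads and
# writes the same Redis store, so any replica can serve any request.
# The hostname "redis", the key layout and the TTL are assumptions.
import json
import uuid

import redis  # third-party client: pip install redis

r = redis.Redis(host="redis", port=6379, decode_responses=True)
SESSION_TTL = 1800  # seconds

def load_session(session_id: str) -> dict:
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}

def save_session(session_id: str, data: dict) -> None:
    r.setex(f"session:{session_id}", SESSION_TTL, json.dumps(data))

# replica1 handles the 1st request and stores state...
sid = str(uuid.uuid4())
save_session(sid, {"user": "alice", "cart": ["item-1"]})
# ...replica2 handles the 2nd request and finds the same state.
print(load_session(sid))
```

Once the state lives in a shared store, it no longer matters which replica serves the 2nd request.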
Let’s ignore session state and take a look at the generated files: the 1st request ends up in replica1 and creates a file, the 2nd request ends up in replica2 and needs to access the file created by replica1. How is this done? Let’s assume you use a remote file share like NFS: what happens if several replicas try to write the same file, or read incomplete files? Is there some file locking in place?
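If the answer is "no", you have to add it yourself. A minimal sketch of advisory locking in Python, assuming POSIX record locks via fcntl.lockf() - whether these locks actually hold across machines depends on your NFS version (NFSv4 supports them) and mount options, so verify this for your share:

```python
# Minimal sketch of advisory locking: writers take an exclusive lock,
# readers a shared one, so nobody reads a half-written file. Assumes
# POSIX record locks (fcntl.lockf) are honored by the share.
import fcntl

def write_exclusively(path: str, payload: str) -> None:
    with open(path, "a+") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)  # blocks while another replica holds a lock
        try:
            f.seek(0)
            f.truncate()
            f.write(payload)
            f.flush()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)

def read_consistently(path: str) -> str:
    with open(path) as f:
        fcntl.lockf(f, fcntl.LOCK_SH)  # readers wait for writers to finish
        try:
            return f.read()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)
```

A common way to sidestep the read side entirely is to write to a temporary file and atomically rename it into place, so readers never see a half-written file.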
Docker does whatever the creator/maintainer of a Dockerfile defined to build the image. Everything needs to be fully automated: installation and configuration of dependencies and runtimes, integration and configuration of your own artifacts. Typically an entrypoint script needs to be created that leverages container environment variables to override values in configuration files - at least for environment-specific third-party endpoints like database connections or other services the application needs.
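As an illustration, here is a minimal entrypoint sketch in Python - the file paths and variable names are hypothetical placeholders, not something Docker prescribes:

```python
#!/usr/bin/env python3
# Minimal entrypoint sketch (hypothetical paths/variables): render a
# config template from container environment variables, then exec the
# real process passed as arguments (the Dockerfile CMD).
import os
import sys
from string import Template

TEMPLATE = "/app/config/app.properties.tmpl"  # hypothetical template file
TARGET = "/app/config/app.properties"

with open(TEMPLATE) as f:
    rendered = Template(f.read()).substitute(
        DB_URL=os.environ["DB_URL"],               # environment-specific endpoint
        DB_USER=os.environ.get("DB_USER", "app"),  # optional, with default
    )

with open(TARGET, "w") as f:
    f.write(rendered)

# Replace this process with the application so it receives signals
# (e.g. SIGTERM on "docker stop") directly.
os.execvp(sys.argv[1], sys.argv[1:])
```

In the Dockerfile this would be wired up with ENTRYPOINT ["/entrypoint.py"] and the actual server command as CMD; the final exec ensures the application, not the script, receives the container’s stop signal.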
Make sure you properly understand which part of “scaling / load distribution” nginx is actually responsible for, and which parts it expects you to have implemented in the behavior of your application.
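For orientation, nginx’s share of the work is usually just accepting connections and distributing requests across the replicas in an upstream block - a minimal sketch with placeholder host names:

```nginx
# nginx's responsibility: distribute incoming requests across the
# replicas (round-robin by default). Host names are placeholders.
upstream app_backend {
    # ip_hash;           # opt-in stickiness: same client IP -> same replica
    server replica1:8080;
    server replica2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}
```

Everything behind that - shared session state, shared files, locking - is behavior nginx expects your application to provide.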