Synology NAS Docker phpMyAdmin (HTTP:port OK, HTTP ?, HTTPS ?)

I will try to repeat / rephrase:

  • On my Synology NAS I wanted to move from
    (initial state = no access to the WordPress database in Docker)
    to
    (target = secured access to the WordPress database in Docker via phpMyAdmin)
  • with the preference that access via phpMyAdmin works the same way with and without Docker (YAML + reverse proxy does not achieve that, but it is probably the best solution for now)

FYI I just tested: keeping phpMyAdmin outside Docker and accessing the Docker database as a “remote database” even though it is local, with no containerised phpMyAdmin - and it works. What I do not like is the prior modification of config.inc.php and/or synology_server_choice.json … it would be good to have an automated process on the Synology side to update the list of databases (regular + dockered) - in that case I could skip the painful Docker configuration of the actual set (WP + database), the simple YAML add-on configuration, the reverse proxy, etc.

NEXT: at this moment I would like to know how to modify the Docker “wordpress \ wp-config.php”.
My first test shows that the “vi” command is not available in the container (an editor more comfortable than vi would be better).

It was not a suggestion. It was an example of the syntax. You need to choose the available port. If anything listens on all available IPs on the host, loopback IPs will not be usable with that port either.
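
For illustration, a minimal compose sketch of that syntax, assuming a MariaDB service and that host port 3307 is free (the service name, ports and environment here are placeholders, not a recommendation):

services:
  db:
    image: mariadb:latest
    restart: unless-stopped
    environment:
      - MARIADB_RANDOM_ROOT_PASSWORD=yes
    ports:
      # host-IP:host-port:container-port
      # published only on a loopback address; this still fails if something
      # on the host already listens on that host port on 0.0.0.0
      - "127.0.0.1:3307:3306"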

It seems you missed my request:

Please check again and use code blocks instead of inline code so you don’t need to break URLs in the examples.

Docker is basically the daemon. The daemon has no vi in it, that’s true, and containers often have no text editors either, because a container is not for editing files interactively but for running processes. If you want to edit something, you edit a mounted config file or a file in a mounted folder. You can install an editor in a container temporarily, but since the config file must be mounted anyway (unless you want to lose it), it is better to edit the file from where it is mounted on the host. Or you have to include it in the image.
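
As a minimal sketch of the bind-mount approach (the host path /volume1/docker/wordpress1 is only an assumed example of a Synology share):

services:
  wordpress1:
    image: wordpress:latest
    restart: unless-stopped
    volumes:
      # the whole WordPress directory, including wp-config.php, lives on the host,
      # so it can be edited there with any editor and the container sees the change
      - /volume1/docker/wordpress1:/var/www/html

With a mount like this you edit wp-config.php on the NAS (over SSH or in File Station) instead of inside the container.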

But this topic was about HTTP ports, so I don’t recommend discussing other questions in detail here, as that would make it even harder to follow for other users.

regarding the YAML config in the Docker ‘database’ section … to be used by the non-Docker phpMyAdmin at IP 127.1.0.0 in a simple way without a port (3306 is the implicit port for the MariaDB database)

  ports:
  - 127.1.0.0:3306:80

question 1: Docker obviously allows publishing on the IP 127.1.0.0, but not on a ‘name’, e.g. ‘data1.local’? A name is better than an IP - right?

question 2: why is it suggested to install phpMyAdmin into Docker? The only target for users is the apps (WordPress + database), not (WordPress + database + phpMyAdmin). I am just curious why I should increase the Docker image size and occupy more resources by adding phpMyAdmin into Docker when it is not needed? Did I miss something?

Very good advice (an example, if you like) would be a simple way to export all Docker databases, across all containers, to the non-Docker MariaDB space, so that these Docker databases become visible in the non-Docker phpMyAdmin (preferably below the non-Docker databases) … thanks for the help if you know how to do it - this would be the best solution for using phpMyAdmin to modify the Docker databases required by the Docker WordPress.

// “vi” cmd is not in docker
I used the following way to use vi and modify wp-config.php or other files (then stopped and restarted the container just as a test), and the result is that the modification was permanent in the container files - not temporary - so it is similar to database changes inside the container (modified by phpMyAdmin or WordPress):

 sudo docker ps                                   # list containers to get <container>
 sudo docker exec -it <container> bash            # open a shell inside the container
 apt-get update && apt-get install -y vim         # installing vim from the Debian repos inside the container works on the Synology NAS
 vi wp-config.php                                 # modify and write with ":wq"

BTW commands like ls -l, install and others work … vi does not … does the daemon basically do some selection, executing some commands but not others?

Docker is for isolation. If you want to connect from host to an app in a container, you need to publish a port from the container. If you want to do that with multiple containers, you need to use different ports.
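
As a quick sketch of that (image tags, service names and host ports are placeholders):

services:
  db1:
    image: mariadb:latest
    environment:
      - MARIADB_RANDOM_ROOT_PASSWORD=yes
    ports:
      - "3306:3306"   # first database published on host port 3306
  db2:
    image: mariadb:latest
    environment:
      - MARIADB_RANDOM_ROOT_PASSWORD=yes
    ports:
      - "3307:3306"   # second database must use a different host port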

If you have multiple databases in Docker containers and want to connect via an admin tool, the easiest way is to place the admin tool in a shared Docker network. Then the admin tool can publish a single port. Within the Docker network, the admin tool can reach the databases by service/container name, no worry about ports.

Example with Postgres and Adminer:

services:
  postgres1:
    image: postgres:17.4
    restart: unless-stopped
    networks:
      - postgres
    environment:
      - POSTGRES_PASSWORD
    volumes:
      - ./data:/var/lib/postgresql/data

  adminer:
    image: adminer
    restart: unless-stopped
    ports:
      - 8080:8080
    networks:
      - postgres
    environment:
      - ADMINER_DEFAULT_SERVER=postgres1

networks:
  postgres:
    name: postgres
    attachable: true

Containers are not made to install tools inside; they are not a VM. Best practice is to create your own image (doc) and place the modified files inside. Or use a bind mount to place the file from the host inside the container at runtime (doc).
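
A minimal compose sketch of the “own image” option (the ./wordpress-custom build context is an assumed example; its Dockerfile would start FROM wordpress:latest and copy the modified files into the image):

services:
  wordpress1:
    # build a custom image instead of pulling wordpress:latest directly
    build: ./wordpress-custom
    restart: unless-stopped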

question: would a second instance from this example also contain postgresX and Adminer … like below?

Can you please state what you want to achieve?

Very basic questions:

  • You want multiple database instances?
  • You want multiple admin tools?
  • You want the database externally reachable?
  • You want the admin tool externally reachable?
  • You want multiple database instances? NO, just one database for multiple WP projects - (I am not an expert on DBs, so) IF a non-Docker MariaDB creates multiple non-Docker DB ‘files’ for multiple WPs, then YES - for containerized / Docker DBs it is YES anyway due to isolation

  • You want multiple admin tools? NO, phpMyAdmin is sufficient

  • You want the database externally reachable? NO/YES, WP uses local access, phpMyAdmin management could have external access - but a reverse proxy technically also means NO for direct external access

  • You want the admin tool externally reachable? YES - HTTPS access via phpMyAdmin to the MariaDB database - the very first request was to have secured access to the local database - but a reverse proxy technically also means NO for direct external access

  • What I really want is a non-Docker phpMyAdmin which connects to the local databases, including the Docker databases when they exist in Docker - that is why I asked you about an example with 2 or more instances, to understand what the example does (on the Synology NAS I do not have PostgreSQL or Adminer) - so I assume that your suggestion is the following mapping

In my case, and after a short experience with Docker, decreasing performance and setup complications, I would like to use

  • an isolated WP for sure, and so if I follow what you wrote then let's try
  • an isolated DB (see case 2), and probably later a better-performing non-isolated DB (case 3), but what I really want to use is
  • a non-isolated PMA as the management tool for sure - I really think that wasting resources by putting the management tool into a container is the suggested way, but not so smart

This would be my simple approach: keep everything in containers for simpler upgrade and, worst case, downgrade options.

  • Create the proxy, wordpress, database and admin tool in Docker containers
  • Files are placed on the host, using bind mounts, for easier backup
  • The database will create a root user with a random password (check the logs)
  • Use the database admin tool to create a wordpress1… user and enable the option to create a database with the same name
  • Access wordpress1… and run the setup

Important: If you connect the proxy to the Internet, the admin tool and the connected databases can be accessed from the Internet. So make sure to use secure passwords.

# compose.yml
services:

  wordpress1:
    container_name: wordpress1
    image: wordpress:latest
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_NAME=wordpress1
      - WORDPRESS_DB_USER=wordpress1
      - WORDPRESS_DB_PASSWORD=wordpress1
      - VIRTUAL_HOST=wordpress1.example.com
      - LETSENCRYPT_HOST=wordpress1.example.com
    volumes:
      - /host/wordpress1:/var/www/html

  wordpress2:
    container_name: wordpress2
    image: wordpress:latest
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_NAME=wordpress2
      - WORDPRESS_DB_USER=wordpress2
      - WORDPRESS_DB_PASSWORD=wordpress2
      - VIRTUAL_HOST=wordpress2.example.com
      - LETSENCRYPT_HOST=wordpress2.example.com
    volumes:
      - /host/wordpress2:/var/www/html

  phpmyadmin:
    container_name: phpmyadmin
    image: phpmyadmin:latest
    restart: always
    environment:
      - PMA_HOST=db
      - VIRTUAL_HOST=admin.example.com
      - LETSENCRYPT_HOST=admin.example.com

  db:
    container_name: db
    image: mysql:8.0
    restart: always
    environment:
      #- MYSQL_DATABASE=superuser
      #- MYSQL_USER=superuser
      #- MYSQL_PASSWORD=superuser
      #- MYSQL_ROOT_PASSWORD=superuser
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
    volumes:
      - /host/db:/var/lib/mysql

  nginx-proxy:
    container_name: nginx-proxy
    image: nginxproxy/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /host/certs:/etc/nginx/certs
      - /host/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock

  nginx-proxy-acme:
    container_name: nginx-proxy-acme
    image: nginxproxy/acme-companion
    restart: always
    volumes_from:
      - nginx-proxy
    volumes:
      - /host/acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock

If you really want to connect from a Docker container to a host application, try using extra_hosts:

services:

  wordpress1:
    container_name: wordpress1
    image: wordpress:latest
    restart: always
    environment:
      - WORDPRESS_DB_HOST=host.docker.internal
      - WORDPRESS_DB_NAME=wordpress1
      - WORDPRESS_DB_USER=wordpress1
      - WORDPRESS_DB_PASSWORD=wordpress1
    volumes:
      - /host/wordpress1:/var/www/html
    extra_hosts:
      - "host.docker.internal:host-gateway"

Do you suggest this?

I have right now this topology BUT as non-Docker - I tried to use Docker mainly to isolate the WP behavior - so I would rather use case 3 (from my last post)

Now imagine I still use the normal setup without Docker with some isolated WPs, and when I open PMA I would like to see all databases in PMA (normal DBs and, very seldom, also Docker DBs) => case 3, so I have:

  • isolated the target WP and accepted lower performance
  • non-isolated DBs work at 100% performance (not decreased by ~50% by Docker)
  • the non-Docker PMA shows all databases, as all DBs are non-Docker
    … what does the YAML look like then?

This seems to work, as long as your database on the host listens on all interfaces (0.0.0.0) or includes the Docker gateway IP on the host. Tested on Debian Linux with Docker CE.

Have you tested this? Docker uses containers, not VMs, and there is no hardware emulation.

I will try - so far it does not work as a solution on my platform - IF I understand correctly, this allows access to the non-Docker DB from the Docker WP … right?

I would say that the isolation daemon layer always slows down everything inside when it needs anything from outside, but my observation is that inside-to-inside performance is also not so good. This was the turning point for why I do not want to use DBs and phpMyAdmin inside (it consumes resources and is not needed). In my case it is subjectively a ~50% slowdown for the all-in-one solution. After some tests I would say that DB and PHP execution was very ‘unsatisfying’, and I would even be happy with a 50% slowdown - but I am not going to measure it. I do not yet have a Docker solution I would consider using - maybe then I will try to measure performance, which is not my target. The target is to have multiple isolated, independent WPs and to have full control over them.

Try a Google search for the keywords “docker database php performance” - I only remember one tester who measured SQL read and then write performance with PHP - he measured a ~50% slowdown in database access.

BTW in this forum, after a quick search, I found this about Docker slowdown.

Yes. Access database on host from within container. Use extra_hosts.