Connection Issues Between FusionAuth and PostgreSQL in Kubernetes on Docker Desktop

I am running Kubernetes, which is enabled in the Docker Desktop application on my system. My application (FusionAuth) is running in pods, and I need to connect to a PostgreSQL database on my localhost. Both are on the same system, but I cannot establish a connection. I have also tried using the Docker bridge network.
For the database connection string, I attempted the following:
* Server=host.docker.internal;Database=yourdatabase;User Id=yourusername;Password=yourpassword;
* jdbc:postgresql://host.docker.internal:5432/yourdatabase?user=yourusername&password=yourpassword

Here below is my YAML file:

apiVersion: v1
kind: Namespace
metadata:
  name: fusionauth # Create a namespace for FusionAuth
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fusionauth # Deployment for FusionAuth application
  namespace: fusionauth
spec:
  replicas: 1 # Number of FusionAuth replicas
  selector:
    matchLabels:
      app: fusionauth # Label to identify the FusionAuth deployment
  template:
    metadata:
      labels:
        app: fusionauth # Labels for the pods
    spec:
      containers:
      - name: fusionauth
        image: fusionauth/fusionauth-app:latest # Latest FusionAuth image
        env:
        - name: DATABASE_URL
          value: jdbc:postgresql://host.docker.internal:5432/postgressql # Database connection string
        - name: DATABASE_USERNAME
          value: # PostgreSQL username
        - name: DATABASE_PASSWORD
          value: # PostgreSQL password
        - name: DATABASE_DRIVER_CLASS_NAME
          value: org.postgresql.Driver # PostgreSQL driver class name
        ports:
        - containerPort: 9011 # Exposing FusionAuth port
        resources:
          requests:
            memory: "512Mi" # Minimum memory request
            cpu: "500m" # Minimum CPU request
          limits:
            memory: "1Gi" # Maximum memory limit
            cpu: "1" # Maximum CPU limit
---
apiVersion: v1
kind: Service
metadata:
  name: fusionauth # Service for FusionAuth
  namespace: fusionauth
spec:
  ports:
  - port: 9011 # Port to access FusionAuth
    targetPort: 9011 # Target port on the FusionAuth pod
  selector:
    app: fusionauth # Select pods with this label
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fusionauth-ingress
  namespace: fusionauth
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fusionauth
            port:
              number: 9011
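For anyone reproducing this, a minimal way to apply the manifest above and check the pod status could look like the following sketch (the file name fusionauth.yaml is just an example):

```
kubectl apply -f fusionauth.yaml              # create the namespace, Deployment, Service and Ingress
kubectl get pods -n fusionauth                # check whether the FusionAuth pod reaches Running
kubectl -n fusionauth logs deploy/fusionauth  # show the FusionAuth startup log
```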

Please format your post according to the following guide: How to format your forum posts.
In short: please use the </> button to share code, terminal output, error messages, or anything else that can contain special characters which would be interpreted by the Markdown filter. Use the preview feature to make sure your text is formatted as you expect, and check your post after you have sent it so you can still fix it.

Example code block:

```
services:
  service1:
    image: image1
```

After fixing your post, please send a new comment so people are notified about the fixed content.


I am running Kubernetes, which is enabled in the Docker Desktop application on my system. My application (FusionAuth) is running in pods, and I need to connect to a PostgreSQL database on my localhost. Both are installed on the same system, but I cannot establish a connection. I have also tried using the Docker bridge network.


For the database connection string, I attempted the following:

* Server=host.docker.internal;Database=yourdatabase;User Id=yourusername;Password=yourpassword;

* jdbc:postgresql://host.docker.internal:5432/yourdatabase?user=yourusername&password=yourpassword

* 127.0.0.1 kubernetes.docker.internal

*  host.docker.internal

Below is my YAML file:

# Here is my pod Deployment file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fusionauth # Deployment for FusionAuth application
  namespace: fusionauth
spec:
  replicas: 1 # Number of FusionAuth replicas
  selector:
    matchLabels:
      app: fusionauth # Label to identify the FusionAuth deployment
  template:
    metadata:
      labels:
        app: fusionauth # Labels for the pods
    spec:
      containers:
      - name: fusionauth
        image: fusionauth/fusionauth-app:latest # Latest FusionAuth image
        env:
        - name: DATABASE_URL
          value: jdbc:postgresql://host.docker.internal:5432/postgressql # Database connection string
        - name: DATABASE_USERNAME
          value: postgresadmin # PostgreSQL username
        - name: DATABASE_PASSWORD
          value: PGSqlAdm1n@$2024 # PostgreSQL password
        - name: DATABASE_DRIVER_CLASS_NAME
          value: org.postgresql.Driver # PostgreSQL driver class name
        ports:
        - containerPort: 9011 # Exposing FusionAuth port
        resources:
          requests:
            memory: "512Mi" # Minimum memory request
            cpu: "500m" # Minimum CPU request
          limits:
            memory: "1Gi" # Maximum memory limit
            cpu: "1" # Maximum CPU limit

---
# Here the k8s Service
apiVersion: v1
kind: Service
metadata:
  name: fusionauth # Service for FusionAuth
  namespace: fusionauth
spec:
  ports:
  - port: 9011 # Port to access FusionAuth
    targetPort: 9011 # Target port on the FusionAuth pod
  selector:
    app: fusionauth

---
# Here the k8s Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fusionauth-ingress
  namespace: fusionauth
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fusionauth
            port:
              number: 9011
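One way to narrow this down is to test the connection to the host database from inside the cluster, independently of FusionAuth. A minimal sketch, assuming the namespace above; the postgres:16 image tag and the pod name pg-check are just examples:

```
# Run a throwaway pod in the fusionauth namespace and probe the host database port
kubectl run pg-check -n fusionauth --rm -it --restart=Never \
  --image=postgres:16 -- pg_isready -h host.docker.internal -p 5432
```

If the host name cannot be resolved or the port is unreachable, pg_isready reports it, which tells you whether the problem is cluster networking or FusionAuth configuration.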

And what is the error message?

Hi @rimelek, my application is not connecting to the localhost database.

# Here the error log

* Unable to connect to your database using either the superuser username and password or ordinary username and password you provided. Please verify your connection information. If it is correct, make sure the database is running before continuing.

* FusionAuth is in maintenance mode because your database is not ready, it is either not running or does not contain the FusionAuth database or tables.
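Side note: the first error above mentions the superuser username and password. In FusionAuth's sample Docker configuration the superuser credentials are passed separately from the ordinary ones, so the Deployment may also need entries along these lines (a sketch reusing the credentials from the posted manifest; cross-check the variable names against the FusionAuth installation docs):

```
# Extra entries for the fusionauth container's env list
- name: DATABASE_ROOT_USERNAME
  value: postgresadmin          # superuser that can create the FusionAuth schema
- name: DATABASE_ROOT_PASSWORD
  value: PGSqlAdm1n@$2024
```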

# pod logs 

Defaulted container "fusionauth" out of: fusionauth, wait-for-db (init), wait-for-search (init)
Error from server (BadRequest): container "fusionauth" in pod "fusionauth-7dcb947cb8-8qpv6" is waiting to start: PodInitializing
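Since the pod is stuck in PodInitializing, the logs of the init containers named in that message usually say what they are waiting for. Something like this, substituting your own pod name:

```
kubectl -n fusionauth describe pod fusionauth-7dcb947cb8-8qpv6
kubectl -n fusionauth logs fusionauth-7dcb947cb8-8qpv6 -c wait-for-db      # init container waiting for the database
kubectl -n fusionauth logs fusionauth-7dcb947cb8-8qpv6 -c wait-for-search  # init container waiting for the search engine
```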

Have you tried connecting to the database from the host?

Does it have the FusionAuth database and tables?

Hi @rimelek,

I am able to connect to the database from my localhost, and I also verified that there are no tables in the database.

I attempted the following:

* Server=host.docker.internal;Database=yourdatabase;User Id=yourusername;Password=yourpassword;
* jdbc:postgresql://host.docker.internal:5432/yourdatabase?user=yourusername&password=yourpassword
* localhost, 127.0.0.1, 127.0.0.1 kubernetes.docker.internal, 172.17.0.1
* in the database config files I allowed listening on * and added hosted 0.0.0.0/0 all md5 (see the check sketched after this list)
* in the Windows hosts file I added 127.0.0.1 kubernetes.docker.internal and 127.0.0.1 host.docker.internal, and also allowed all TCP ports in the firewall
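As a quick check for the listening change mentioned in the list above: listen_addresses only takes effect after PostgreSQL is restarted, and on a Windows host (assumed here from the hosts-file edit) you can confirm which address it is bound to with:

```
netstat -ano | findstr :5432
```

If it only shows 127.0.0.1:5432 rather than 0.0.0.0:5432, the listening change has not taken effect yet.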

kindly consider this a priority

Thanks in advance

What is “hosted”? Is it a valid value in the config file? I know about “host”, but not “hosted”.

Note: I edited your post to make it more readable. Thank you for using code blocks, but you don't have to put almost everything in them. You can share inline code as well, and whenever you notice something looks weird because the forum changed it, edit the post to fix it with a code block or inline code. Everything that is your own commentary can be normal text without a code block or inline code.

@rimelek

It’s a typing mistake; it should be host 0.0.0.0/0 all md5.
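For reference, pg_hba.conf entries are column-ordered as TYPE, DATABASE, USER, ADDRESS, METHOD, so a rule that accepts password authentication from any address would typically be written like this (0.0.0.0/0 is very broad and only sensible for local debugging):

```
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   0.0.0.0/0    md5
```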

docker run -d --name fusionauth --add-host=host.docker.internal:host-gateway -p 9011:9011 fusionauth/fusionauth-app:latest

When I run the container this way, it works fine. The FusionAuth application (container) connects to the localhost database, and I can see that the container's hosts file shows 192.168.65.254 host.docker.internal.

However, at the pod level the hosts file does not contain the 192.168.65.254 host.docker.internal entry, so the host name is not found.
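If the pod really lacks that entry, one option (a sketch, not tested here) is to add it yourself with hostAliases, using the gateway address observed in the working container; 192.168.65.254 is the value reported above and may differ on other setups:

```
# In the Deployment, under spec.template.spec (same level as "containers:")
hostAliases:
- ip: "192.168.65.254"            # gateway IP seen in the working Docker container; may differ
  hostnames:
  - "host.docker.internal"
```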

My YAML is posted above for your reference.

kindly consider this a priority
Thanks in advance

There is no priority here

I'm actually not sure whether host.docker.internal should work from a Kubernetes pod, but you can check (as you did) the gateway address from a container that you start, and then use that IP directly for the connection instead of the host name.

If you think it should work in a Kubernetes pod (I would also expect it to work), you can report it on GitHub
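A minimal sketch of that suggestion, reusing the gateway IP reported earlier in the thread and the database name from the posted Deployment, would be to point DATABASE_URL at the IP directly:

```
- name: DATABASE_URL
  value: jdbc:postgresql://192.168.65.254:5432/postgressql   # gateway IP instead of host.docker.internal
```

If that works while the host name does not, it confirms the problem is only name resolution inside the pod.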