Docker compose mongo exit code 139

Hi,

I’ve been running into a weird issue where my mongo container keeps dying every 60 seconds.
When I run it separately I can see it exits with code 139.

Error log:

mongodb  | {"t":{"$date":"2026-03-19T13:35:54.280+00:00"},"s":"I",  "c":"NETWORK",  "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":27},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":27},"outgoing":{"minWireVersion":6,"maxWireVersion":27},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":27},"incomingInternalClient":{"minWireVersion":27,"maxWireVersion":27},"outgoing":{"minWireVersion":27,"maxWireVersion":27},"isInternalClient":true}}}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.280+00:00"},"s":"I",  "c":"REPL",     "id":5853300, "ctx":"initandlisten","msg":"current featureCompatibilityVersion value","attr":{"featureCompatibilityVersion":"8.2","context":"startup"}}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.281+00:00"},"s":"I",  "c":"STORAGE",  "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.281+00:00"},"s":"I",  "c":"STORAGE",  "id":10682200,"ctx":"initandlisten","msg":"Dropping spill idents","attr":{"numIdents":0}}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.283+00:00"},"s":"I",  "c":"CONTROL",  "id":6608200, "ctx":"initandlisten","msg":"Initializing cluster server parameters from disk"}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.283+00:00"},"s":"I",  "c":"CONTROL",  "id":20536,   "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.283+00:00"},"s":"I",  "c":"FTDC",     "id":20625,   "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.284+00:00"},"s":"I",  "c":"REPL",     "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"}}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.284+00:00"},"s":"I",  "c":"STORAGE",  "id":22262,   "ctx":"initandlisten","msg":"Timestamp monitor starting"}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.284+00:00"},"s":"I",  "c":"STORAGE",  "id":7333401, "ctx":"initandlisten","msg":"Starting the DiskSpaceMonitor"}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.285+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.285+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0:27017"}}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.285+00:00"},"s":"I",  "c":"NETWORK",  "id":23016,   "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
mongodb  | {"t":{"$date":"2026-03-19T13:35:54.285+00:00"},"s":"I",  "c":"CONTROL",  "id":8423403, "ctx":"initandlisten","msg":"mongod startup complete","attr":{"Summary of time elapsed":{"Startup from clean shutdown?":false,"Statistics":{"setUpPeriodicRunnerMillis":0,"setUpOCSPMillis":0,"setUpTransportLayerMillis":0,"initSyncCrashRecoveryMillis":0,"createLockFileMillis":0,"getStorageEngineMetadataMillis":0,"validateMetadataMillis":0,"createStorageEngineMillis":626,"writePIDMillis":7,"initializeFCVForIndexMillis":1,"dropAbandonedIdentsMillis":0,"standaloneClusterParamsMillis":0,"userAndRolesGraphMillis":0,"waitForMajorityServiceMillis":0,"startUpReplCoordMillis":0,"recoverChangeStreamMillis":0,"logStartupOptionsMillis":0,"startUpTransportLayerMillis":0,"initAndListenTotalMillis":647}}}}
mongodb  | {"t":{"$date":"2026-03-19T13:35:55.062+00:00"},"s":"I",  "c":"FTDC",     "id":20631,   "ctx":"ftdc","msg":"Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost","attr":{"error":{"code":0,"codeName":"OK"}}}
mongodb  | {"t":{"$date":"2026-03-19T13:36:54.280+00:00"},"s":"I",  "c":"WTCHKPT",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":{"ts_sec":1773927414,"ts_usec":280185,"thread":"1:0x7f9a6591a6c0","session_name":"WT_SESSION.checkpoint","category":"WT_VERB_CHECKPOINT_PROGRESS","log_id":1000000,"category_id":7,"verbose_level":"INFO","verbose_level_id":0,"msg":"saving checkpoint snapshot min: 5, snapshot max: 5 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 226"}}}
mongodb exited with code 139 (restarting)


My docker-compose file:

services:
  mysql:
    image: mysql:8.0
    container_name: mysql_server
    command: --default-authentication-plugin=mysql_native_password --max_allowed_packet=128M
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - global-network

  mongodb:
    image: mongo:latest
    container_name: mongodb
    restart: always
    environment:
      MONGODB_DISABLE_TELEMETRY: true
    ports:
      - "27017:27017"
    volumes:
      - /home/sven/data/:/data/db
    networks:
      - global-network
    deploy:
      resources:
        limits:
          memory: 4G
        reservations:
          memory: 2G

  adminer:
    image: adminer
    container_name: adminer
    restart: always
    networks:
      - global-network
    environment:
      - ADMINER_DEFAULT_SERVER=mysql_server
      - UPLOAD_MAX_FILESIZE=128M
      - POST_MAX_SIZE=128M
      - MEMORY_LIMIT=512M

  redis:
    image: redis:alpine
    container_name: redis_server
    restart: always
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    networks:
      - global-network

  redis-commander:
    image: rediscommander/redis-commander:latest
    container_name: redis_commander
    restart: always
    environment:
      - REDIS_HOSTS=local:redis_server:6379
    networks:
      - global-network


volumes:
  db_data:
  redis_data:

networks:
  global-network:
    external: true

My operating system:

Operating System: Fedora Linux 43
KDE Plasma Version: 6.6.2
KDE Frameworks Version: 6.24.0
Kernel Version: 6.19.7-200.fc43.x86_64 (64-bit)
Processors: 8 × Intel® Core™ i7-6700K CPU @ 4.00GHz
Memory: 32 GiB of RAM (31.3 GiB usable)

If any additional information is required let me know.

I’m having the same problem with mongo:latest and am also running Fedora 43, but with podman. I haven’t found a solution yet; I’ve tried wiping out my mongo data and also going back to an older version, but the same thing happens.

Exit code 139 is 128 + 11, i.e. the process was killed by signal 11 (SIGSEGV), a segmentation fault — so it is a memory-access error rather than an out-of-memory kill.
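A quick way to decode container exit codes above 128 in any shell (they conventionally mean "killed by a signal"):

```shell
# Exit codes above 128 mean the process died from a signal:
# exit code = 128 + signal number.
code=139
sig=$((code - 128))
echo "signal number: $sig"   # prints: signal number: 11
kill -l "$sig"               # prints the signal name: SEGV (segmentation fault)
```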

@mrtsven Does it happen with an older version or just with latest?

Which was that older version?

Podman is not Docker, so if this is really the same issue, I’m surprised an older image version did not fix it. @mrtsven seems to use Docker, and the chance that Docker and Podman both have the same issue is low, unless the issue is in MongoDB or in the Linux kernel. Can you share the same environment details mrtsven did?

Here are my logs from when the container restarts:

Mar 19 14:56:15 tako mongodb[65584]: {"t":{"$date":"2026-03-19T21:56:15.010+00:00"},"s":"I",  "c":"FTDC",     "id":20631,   "ctx":"ftdc","msg":"Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost","attr":{"error":{"code":0,"codeName":"OK"}}}
Mar 19 14:56:44 tako podman[66176]: 2026-03-19 14:56:44.112739051 -0700 PDT m=+0.026715151 container died b1281521474eec024096df22119ec8e2f05777c91fa867c440995523c02781a1 (image=docker.io/library/mongo:latest, name=mongodb, PODMAN_SYSTEMD_UNIT=mongodb.service, io.containers.autoupdate=registry, org.opencontainers.image.ref.name=ubuntu, org.opencontainers.image.version=24.04)
Mar 19 14:56:44 tako podman[66176]: 2026-03-19 14:56:44.246124583 -0700 PDT m=+0.160100696 container remove b1281521474eec024096df22119ec8e2f05777c91fa867c440995523c02781a1 (image=docker.io/library/mongo:latest, name=mongodb, PODMAN_SYSTEMD_UNIT=mongodb.service, io.containers.autoupdate=registry, org.opencontainers.image.ref.name=ubuntu, org.opencontainers.image.version=24.04)
Mar 19 14:56:44 tako podman[66176]: 2026-03-19 14:56:44.258228282 -0700 PDT m=+0.172204412 volume remove 4469b90fec056c19ad2e52512c21dd06395d684ac30244bb4587ba837605be9a
Mar 19 14:56:44 tako systemd[1]: mongodb.service: Main process exited, code=exited, status=139/n/a
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ An ExecStart= process belonging to unit mongodb.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 139.
Mar 19 14:56:44 tako systemd[1]: mongodb.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ The unit mongodb.service has entered the 'failed' state with result 'exit-code'.
Mar 19 14:56:44 tako systemd[1]: mongodb.service: Consumed 1.056s CPU time, 205.6M memory peak.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ The unit mongodb.service completed and consumed the indicated resources.
Mar 19 14:56:44 tako systemd[1]: mongodb.service: Scheduled restart job, restart counter is at 5.

In my case it seems like Podman detects that the container has died and keeps restarting it. I am running the container as a systemd service using Quadlet.

The older version I tried unsuccessfully was mongo:8.2.5. I did have a 1 GB memory limit set on the container, but removing it had no effect. The logs show it peaked at only 205.6M anyway, so I don’t think memory is the issue here.

Here are my operating system details:

Fedora Linux 43
Kernel Version: 6.19.8-200.fc43.x86_64
Processor: 13th Gen Intel(R) Core(TM) i7-1360P
Memory: 64 GiB of RAM (51.8 GiB usable)

I have the same issue, also on Fedora 43. I tested progressively older versions until I hit a data-file version mismatch; none of them worked.

Hi @rimelek, first off, thanks for looking into this!

Could you specify: an older version of what? As in Linux, Docker, or the image?

I only tried switching the image tag from latest to noble, but that had no effect.

I’ll try some older versions of the docker image to see if that helps.

So I tried running mongo:8.0, but that produces the same exit code 139.

Attaching to mongodb
mongodb  | {"t":{"$date":"2026-03-20T11:07:37.999+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
mongodb  | {"t":{"$date":"2026-03-20T11:07:37.999+00:00"},"s":"I",  "c":"CONTROL",  "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
mongodb  | {"t":{"$date":"2026-03-20T11:07:37.999+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set at least one of the related parameters","attr":{"relatedParameters":["tcpFastOpenServer","tcpFastOpenClient","tcpFastOpenQueueSize"]}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.000+00:00"},"s":"I",  "c":"NETWORK",  "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":25},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":25},"outgoing":{"minWireVersion":6,"maxWireVersion":25},"isInternalClient":true}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.001+00:00"},"s":"I",  "c":"TENANT_M", "id":7091600, "ctx":"main","msg":"Starting TenantMigrationAccessBlockerRegistry"}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.001+00:00"},"s":"I",  "c":"CONTROL",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"39d72f5aee59"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.001+00:00"},"s":"W",  "c":"CONTROL",  "id":20720,   "ctx":"initandlisten","msg":"Memory available to mongo process is less than total system memory","attr":{"availableMemSizeMB":4096,"systemMemSizeMB":32038}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.001+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"8.0.20","gitVersion":"28927c60881a488fcbc5fd4d925b410f33258827","openSSLVersion":"OpenSSL 3.0.13 30 Jan 2024","modules":[],"allocator":"tcmalloc-google","environment":{"distmod":"ubuntu2404","distarch":"x86_64","target_arch":"x86_64"}}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.001+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"24.04"}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.001+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.003+00:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=1536M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],prefetch=(available=true,default=false),"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.422+00:00"},"s":"I",  "c":"WTRECOV",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":{"ts_sec":1774004858,"ts_usec":422653,"thread":"1:0x7fbae6544280","session_name":"txn-recover","category":"WT_VERB_RECOVERY_PROGRESS","log_id":1000000,"category_id":34,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"recovery log replay has successfully finished and ran for 0 milliseconds"}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.422+00:00"},"s":"I",  "c":"WTRECOV",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":{"ts_sec":1774004858,"ts_usec":422732,"thread":"1:0x7fbae6544280","session_name":"txn-recover","category":"WT_VERB_RECOVERY_PROGRESS","log_id":1000000,"category_id":34,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"Set global recovery timestamp: (0, 0)"}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.422+00:00"},"s":"I",  "c":"WTRECOV",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":{"ts_sec":1774004858,"ts_usec":422762,"thread":"1:0x7fbae6544280","session_name":"txn-recover","category":"WT_VERB_RECOVERY_PROGRESS","log_id":1000000,"category_id":34,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"Set global oldest timestamp: (0, 0)"}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.422+00:00"},"s":"I",  "c":"WTRECOV",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":{"ts_sec":1774004858,"ts_usec":422810,"thread":"1:0x7fbae6544280","session_name":"txn-recover","category":"WT_VERB_RECOVERY","log_id":1493201,"category_id":33,"verbose_level":"INFO","verbose_level_id":0,"msg":"recovery was completed successfully and took 0ms, including 0ms for the log replay, 0ms for the rollback to stable, and 0ms for the checkpoint."}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.422+00:00"},"s":"I",  "c":"WTRECOV",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":{"ts_sec":1774004858,"ts_usec":422848,"thread":"1:0x7fbae6544280","session_name":"txn-recover","category":"WT_VERB_RECOVERY_PROGRESS","log_id":1493201,"category_id":34,"verbose_level":"INFO","verbose_level_id":0,"msg":"recovery was completed successfully and took 0ms, including 0ms for the log replay, 0ms for the rollback to stable, and 0ms for the checkpoint."}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.436+00:00"},"s":"I",  "c":"STORAGE",  "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":433}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.436+00:00"},"s":"I",  "c":"RECOVERY", "id":23987,   "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.472+00:00"},"s":"I",  "c":"STORAGE",  "id":9529901, "ctx":"initandlisten","msg":"Initializing durable catalog","attr":{"numRecords":0}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.473+00:00"},"s":"I",  "c":"STORAGE",  "id":9529902, "ctx":"initandlisten","msg":"Retrieving all idents from storage engine"}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.473+00:00"},"s":"I",  "c":"STORAGE",  "id":9529903, "ctx":"initandlisten","msg":"Initializing all collections in durable catalog","attr":{"numEntries":0}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.478+00:00"},"s":"W",  "c":"CONTROL",  "id":22120,   "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.478+00:00"},"s":"W",  "c":"CONTROL",  "id":22184,   "ctx":"initandlisten","msg":"Soft rlimits for open file descriptors too low","attr":{"currentValue":1024,"recommendedMinimum":64000},"tags":["startupWarnings"]}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.478+00:00"},"s":"W",  "c":"CONTROL",  "id":9068900, "ctx":"initandlisten","msg":"For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile","attr":{"allocator":"tcmalloc-google","sysfsFile":"/sys/kernel/mm/transparent_hugepage/enabled","currentValue":"never","desiredValue":"always"},"tags":["startupWarnings"]}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.478+00:00"},"s":"W",  "c":"CONTROL",  "id":9068900, "ctx":"initandlisten","msg":"For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile","attr":{"allocator":"tcmalloc-google","sysfsFile":"/sys/kernel/mm/transparent_hugepage/defrag","currentValue":"madvise","desiredValue":"defer+madvise"},"tags":["startupWarnings"]}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.478+00:00"},"s":"W",  "c":"CONTROL",  "id":8640302, "ctx":"initandlisten","msg":"We suggest setting the contents of sysfsFile to 0.","attr":{"sysfsFile":"/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none","currentValue":511},"tags":["startupWarnings"]}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.478+00:00"},"s":"W",  "c":"CONTROL",  "id":8386700, "ctx":"initandlisten","msg":"We suggest setting swappiness to 0 or 1, as swapping can cause performance problems.","attr":{"sysfsFile":"/proc/sys/vm/swappiness","currentValue":10},"tags":["startupWarnings"]}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.479+00:00"},"s":"I",  "c":"STORAGE",  "id":20320,   "ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"admin.system.version","uuidDisposition":"provided","uuid":{"uuid":{"$uuid":"4e57fc00-6ca0-49ca-848d-591a082e4c3b"}},"options":{"uuid":{"$uuid":"4e57fc00-6ca0-49ca-848d-591a082e4c3b"}}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.493+00:00"},"s":"I",  "c":"INDEX",    "id":20345,   "ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"collectionUUID":{"uuid":{"$uuid":"4e57fc00-6ca0-49ca-848d-591a082e4c3b"}},"namespace":"admin.system.version","index":"_id_","ident":"index-1-5612326249008747962","collectionIdent":"collection-0-5612326249008747962","commitTimestamp":null}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.493+00:00"},"s":"I",  "c":"REPL",     "id":20459,   "ctx":"initandlisten","msg":"Setting featureCompatibilityVersion","attr":{"newVersion":"8.0"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.493+00:00"},"s":"I",  "c":"REPL",     "id":5853300, "ctx":"initandlisten","msg":"current featureCompatibilityVersion value","attr":{"featureCompatibilityVersion":"8.0","context":"setFCV"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.493+00:00"},"s":"I",  "c":"NETWORK",  "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":25},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":25},"outgoing":{"minWireVersion":6,"maxWireVersion":25},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":25},"incomingInternalClient":{"minWireVersion":25,"maxWireVersion":25},"outgoing":{"minWireVersion":25,"maxWireVersion":25},"isInternalClient":true}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.494+00:00"},"s":"I",  "c":"NETWORK",  "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":25},"incomingInternalClient":{"minWireVersion":25,"maxWireVersion":25},"outgoing":{"minWireVersion":25,"maxWireVersion":25},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":25},"incomingInternalClient":{"minWireVersion":25,"maxWireVersion":25},"outgoing":{"minWireVersion":25,"maxWireVersion":25},"isInternalClient":true}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.494+00:00"},"s":"I",  "c":"REPL",     "id":5853300, "ctx":"initandlisten","msg":"current featureCompatibilityVersion value","attr":{"featureCompatibilityVersion":"8.0","context":"startup"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.494+00:00"},"s":"I",  "c":"STORAGE",  "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.494+00:00"},"s":"I",  "c":"CONTROL",  "id":6608200, "ctx":"initandlisten","msg":"Initializing cluster server parameters from disk"}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.494+00:00"},"s":"I",  "c":"CONTROL",  "id":20536,   "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.494+00:00"},"s":"I",  "c":"FTDC",     "id":20625,   "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.495+00:00"},"s":"I",  "c":"STORAGE",  "id":20320,   "ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.startup_log","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"6aa2d913-b896-43c0-9bef-5041a387fdfe"}},"options":{"capped":true,"size":10485760}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.514+00:00"},"s":"I",  "c":"INDEX",    "id":20345,   "ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"collectionUUID":{"uuid":{"$uuid":"6aa2d913-b896-43c0-9bef-5041a387fdfe"}},"namespace":"local.startup_log","index":"_id_","ident":"index-3-5612326249008747962","collectionIdent":"collection-2-5612326249008747962","commitTimestamp":null}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.514+00:00"},"s":"I",  "c":"REPL",     "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.514+00:00"},"s":"I",  "c":"STORAGE",  "id":22262,   "ctx":"initandlisten","msg":"Timestamp monitor starting"}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.514+00:00"},"s":"I",  "c":"STORAGE",  "id":7333401, "ctx":"initandlisten","msg":"Starting the DiskSpaceMonitor"}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.515+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.515+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0:27017"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.515+00:00"},"s":"I",  "c":"NETWORK",  "id":23016,   "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.515+00:00"},"s":"I",  "c":"CONTROL",  "id":8423403, "ctx":"initandlisten","msg":"mongod startup complete","attr":{"Summary of time elapsed":{"Startup from clean shutdown?":true,"Statistics":{"Set up periodic runner":"0 ms","Set up online certificate status protocol manager":"0 ms","Transport layer setup":"0 ms","Run initial syncer crash recovery":"0 ms","Create storage engine lock file in the data directory":"0 ms","Get metadata describing storage engine":"0 ms","Create storage engine":"471 ms","Write current PID to file":"0 ms","Write a new metadata for storage engine":"0 ms","Initialize FCV before rebuilding indexes":"1 ms","Drop abandoned idents and get back indexes that need to be rebuilt or builds that need to be restarted":"0 ms","Rebuild indexes for collections":"0 ms","Load cluster parameters from disk for a standalone":"0 ms","Build user and roles graph":"0 ms","Set up the background thread pool responsible for waiting for opTimes to be majority committed":"0 ms","Start up the replication coordinator":"0 ms","Ensure the change stream collections on startup contain consistent data":"0 ms","Write startup options to the audit log":"0 ms","Start transport layer":"1 ms","_initAndListen total elapsed time":"514 ms"}}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.516+00:00"},"s":"I",  "c":"CONTROL",  "id":20712,   "ctx":"LogicalSessionCacheReap","msg":"Sessions collection is not set up; waiting until next sessions reap interval","attr":{"error":"NamespaceNotFound: config.system.sessions does not exist"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.516+00:00"},"s":"I",  "c":"STORAGE",  "id":20320,   "ctx":"LogicalSessionCacheRefresh","msg":"createCollection","attr":{"namespace":"config.system.sessions","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"a7667f81-3dae-4793-92a8-578c7bb31a0b"}},"options":{}}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.540+00:00"},"s":"I",  "c":"INDEX",    "id":20345,   "ctx":"LogicalSessionCacheRefresh","msg":"Index build: done building","attr":{"buildUUID":null,"collectionUUID":{"uuid":{"$uuid":"a7667f81-3dae-4793-92a8-578c7bb31a0b"}},"namespace":"config.system.sessions","index":"_id_","ident":"index-5-5612326249008747962","collectionIdent":"collection-4-5612326249008747962","commitTimestamp":null}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:38.540+00:00"},"s":"I",  "c":"INDEX",    "id":20345,   "ctx":"LogicalSessionCacheRefresh","msg":"Index build: done building","attr":{"buildUUID":null,"collectionUUID":{"uuid":{"$uuid":"a7667f81-3dae-4793-92a8-578c7bb31a0b"}},"namespace":"config.system.sessions","index":"lsidTTLIndex","ident":"index-6-5612326249008747962","collectionIdent":"collection-4-5612326249008747962","commitTimestamp":null}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:39.007+00:00"},"s":"W",  "c":"CONTROL",  "id":636300,  "ctx":"ftdc","msg":"Use of deprecated server parameter name","attr":{"deprecatedName":"internalQueryCacheSize","canonicalName":"internalQueryCacheMaxEntriesPerCollection"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:39.007+00:00"},"s":"W",  "c":"CONTROL",  "id":636300,  "ctx":"ftdc","msg":"Use of deprecated server parameter name","attr":{"deprecatedName":"oplogSamplingLogIntervalSeconds","canonicalName":"collectionSamplingLogIntervalSeconds"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:39.007+00:00"},"s":"W",  "c":"NETWORK",  "id":23803,   "ctx":"ftdc","msg":"Use of deprecated server parameter 'sslMode', please use 'tlsMode' instead."}
mongodb  | {"t":{"$date":"2026-03-20T11:07:39.007+00:00"},"s":"W",  "c":"CONTROL",  "id":636300,  "ctx":"ftdc","msg":"Use of deprecated server parameter name","attr":{"deprecatedName":"wiredTigerConcurrentReadTransactions","canonicalName":"storageEngineConcurrentReadTransactions"}}
mongodb  | {"t":{"$date":"2026-03-20T11:07:39.007+00:00"},"s":"W",  "c":"CONTROL",  "id":636300,  "ctx":"ftdc","msg":"Use of deprecated server parameter name","attr":{"deprecatedName":"wiredTigerConcurrentWriteTransactions","canonicalName":"storageEngineConcurrentWriteTransactions"}}
mongodb exited with code 139

I can confirm: Fedora 43 Workstation, mongo:latest (8) produces 139 a few seconds after startup.

I switched back to 6 for now.

I came across this issue as well, and with the help of my colleagues we found out that MongoDB 8 doesn’t support kernel version 6.19.
See the Release Notes for MongoDB 8.0 (MongoDB Docs) for more details.

To fix it, you would either have to use another MongoDB version or downgrade to an older kernel.
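If you go the image route, a minimal compose sketch would be something like the following (mongo:6 is the series reported working earlier in this thread; treat the exact tag as an example):

```yaml
# Hypothetical fragment: pin an image series that predates the issue
# instead of tracking :latest.
services:
  mongodb:
    image: mongo:6
```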

To downgrade you can do the following:

Get your available kernel versions:
sudo grubby --info=ALL | grep -E "index=|kernel="

Change your default kernel by index (pick the index of an older kernel, e.g. a 6.18 one):
sudo grubby --set-default-index=1

Verify
sudo grubby --default-index

Reboot
sudo systemctl reboot
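After the reboot, a quick sanity check that the machine actually came up on the kernel you selected:

```shell
# Print the running kernel release; it should now show the older
# version you picked with grubby (e.g. a 6.18.x-...fc43 build
# instead of 6.19.x).
uname -r
```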


Instead of downgrading the kernel, a colleague of mine jumped in with an alternative.

Setting the environment variable GLIBC_TUNABLES=glibc.pthread.rseq=1 fixed it for me:

services:
  mongodb:
    image: mongo:latest
    container_name: mongodb
    environment:
      - GLIBC_TUNABLES=glibc.pthread.rseq=1  # Or remove the variable if it was set to 0 elsewhere
    ports:
      - "27017:27017"
    restart: always

As to exactly why this fixes it, I’m not sure. I’m sure people far smarter than me can explain it.
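For anyone hitting this under Podman with a Quadlet unit (as described earlier in the thread), the same workaround should presumably translate to an `Environment=` line in the `.container` file; a hypothetical, untested sketch:

```ini
# mongodb.container (Quadlet) -- hypothetical fragment
[Container]
Image=docker.io/library/mongo:latest
Environment=GLIBC_TUNABLES=glibc.pthread.rseq=1
```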

@kjetilsigvartsen is right, this is currently an issue due to the incompatibility of MongoDB 8.0 and kernel 6.19.

I’m guessing that downgrading your kernel to 6.18 will also work.
