MongoDB

Deploy MongoDB on Kubernetes using the official mongo Docker image. Supports a standalone single instance, a replica set for high availability, and a sharded cluster for horizontal scalability.

Replica set keyFile must not change — rotating it breaks internal member authentication

Replica set members authenticate each other using a shared keyFile (auth.replicaSetKey). If the key is not persisted via auth.existingKeySecret, it is auto-generated on first deployment. Rotating or regenerating this key after initialization will cause all replica set members to reject each other, resulting in a split-brain or total replica set failure. Always persist it via auth.existingKeySecret before the first replica set deployment.
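
For example, a compatible key secret can be created up front. A minimal sketch, using the secret and key names documented below; openssl rand -base64 756 is the standard MongoDB keyFile generation command:

openssl rand -base64 756 > replica.key
kubectl create secret generic mongodb-replica-key \
  --from-file=mongodb-replica-set-key=replica.key

Then set auth.existingKeySecret: mongodb-replica-key in values.yaml before the first install.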

Auth credentials are set on first initialization only — they cannot be changed via values on upgrade

auth.rootPassword and user passwords defined in auth.users are written during the first container initialization (via /docker-entrypoint-initdb.d/). Changing these values in a subsequent helm upgrade has no effect on the running container. To rotate credentials, connect to MongoDB directly with mongosh and use db.updateUser() or db.changeUserPassword().
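
For example, to rotate an application user's password in place; a sketch, where the pod name my-mongo-0 and database name are illustrative:

kubectl exec -it my-mongo-0 -- mongosh -u root -p "$ROOT_PASSWORD" --authenticationDatabase admin
# inside the shell: switch to the database the user was created in, then rotate
use myapp
db.changeUserPassword('appuser', 'new-password')

After rotating, update the corresponding secret or values entry so the recorded credentials stay accurate.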

Key Features

  • Three architectures — standalone, replica set, sharded cluster
  • Replica set HA — rs0 with configurable member count and optional arbiter
  • Sharded cluster — mongos routers, config servers, and data shards
  • Internal keyFile auth — replica set member authentication via shared secret
  • Init scripts — .js and .sh files via /docker-entrypoint-initdb.d/
  • WiredTiger tuning — custom mongod.conf via config block
  • Prometheus exporter — percona/mongodb_exporter sidecar with ServiceMonitor
  • mongodump backup — scheduled S3 backup CronJob

Installation

HTTPS repository:

helm repo add helmforge https://repo.helmforge.dev
helm repo update
helm install my-mongo helmforge/mongodb -f values.yaml

OCI registry:

helm install my-mongo oci://ghcr.io/helmforgedev/helm/mongodb -f values.yaml

Deployment Examples

# values.yaml — MongoDB standalone (single instance)
architecture: standalone

auth:
  enabled: true
  rootUser: root
  existingSecret: mongodb-root-credentials # keys: mongodb-root-username, mongodb-root-password
  users:
    - username: appuser
      password: 'app-password'
      database: myapp
      roles:
        - { role: readWrite, db: myapp }

persistence:
  enabled: true
  size: 20Gi

config:
  storage:
    wiredTiger:
      engineConfig:
        cacheSizeGB: 1 # set to ~50% of available memory minus OS overhead

metrics:
  enabled: true
  serviceMonitor:
    enabled: true

# values.yaml — MongoDB replica set (3 members, HA)
architecture: replicaset

auth:
  enabled: true
  existingSecret: mongodb-root-credentials
  existingKeySecret: mongodb-replica-key # key: mongodb-replica-set-key (must not change)

replicaSet:
  name: rs0
  members: 3

persistence:
  enabled: true
  size: 50Gi

config:
  storage:
    wiredTiger:
      engineConfig:
        cacheSizeGB: 2

metrics:
  enabled: true
  serviceMonitor:
    enabled: true

# Connection string for replica set:
# mongodb://appuser:[email protected]:27017/myapp?replicaSet=rs0

# values.yaml — Replica set with arbiter (2 data members + 1 arbiter)
# Use when you want odd-number election votes without a 3rd data-bearing member
architecture: replicaset

auth:
  enabled: true
  existingSecret: mongodb-root-credentials
  existingKeySecret: mongodb-replica-key

replicaSet:
  name: rs0
  members: 2 # 2 data-bearing members; arbiter provides the 3rd vote

arbiter:
  enabled: true # lightweight pod: votes in elections but stores no data

persistence:
  enabled: true
  size: 50Gi

When to use an arbiter

An arbiter participates in elections but holds no data. Use it when you have an even number of data-bearing members to ensure a majority for elections. With 3 data members, an arbiter is not needed.

# values.yaml — MongoDB sharded cluster (2 shards, 3 members each)
architecture: sharded

auth:
  enabled: true
  existingSecret: mongodb-root-credentials
  existingKeySecret: mongodb-replica-key # shared across all replica sets in the cluster

sharded:
  mongos:
    replicaCount: 2 # query routing layer
    port: 27017
  configServer:
    replicaCount: 3 # cluster metadata (must be odd)
    persistence:
      size: 10Gi
  shards:
    count: 2 # number of shards
    membersPerShard: 3 # members per shard replica set
    persistence:
      size: 100Gi

config:
  storage:
    wiredTiger:
      engineConfig:
        cacheSizeGB: 4

Connect to a sharded cluster via mongos, not shard members directly

Always connect to the mongos service, not to individual shard replica set members. Shards should only be accessed directly for administrative operations.
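
A minimal sketch of a mongos connection string, assuming the chart names the mongos service after the release (the exact service name may differ):

# mongodb://appuser:[email protected]:27017/myapp

Note that the replicaSet query parameter is omitted: mongos presents the whole cluster behind a single endpoint.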

Configuration Reference

Image

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| image.repository | string | docker.io/library/mongo | MongoDB image. |
| image.tag | string | "8.2.6" | Image tag. |

Authentication

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| auth.enabled | boolean | true | Enable the MongoDB --auth flag. |
| auth.rootUser | string | root | Root username (MONGO_INITDB_ROOT_USERNAME). |
| auth.rootPassword | string | "" | Root password. Auto-generated if empty. Set only on first deployment. |
| auth.existingSecret | string | "" | Existing secret. Keys: mongodb-root-username, mongodb-root-password. |
| auth.replicaSetKey | string | "" | KeyFile for internal replica set auth. Auto-generated if empty. |
| auth.existingKeySecret | string | "" | Existing secret with keyFile. Key: mongodb-replica-set-key. Persist before first replica set deployment. |
| auth.users | array | [] | Additional users to create on first startup. |

Architecture

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| architecture | string | standalone | standalone, replicaset, or sharded. |
| replicaSet.name | string | rs0 | Replica set name. |
| replicaSet.members | integer | 3 | Number of data-bearing replica set members. |
| arbiter.enabled | boolean | false | Add an arbiter pod (votes in elections, stores no data). |

Sharded Cluster

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| sharded.mongos.replicaCount | integer | 2 | Number of mongos router pods. |
| sharded.configServer.replicaCount | integer | 3 | Config server members (must be odd). |
| sharded.configServer.persistence.size | string | 8Gi | Config server PVC size. |
| sharded.shards.count | integer | 2 | Number of shards. |
| sharded.shards.membersPerShard | integer | 3 | Replica set members per shard. |
| sharded.shards.persistence.size | string | 16Gi | Data PVC size per shard member. |

Persistence and Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| persistence.enabled | boolean | true | Enable PVC for data (StatefulSet volumeClaimTemplate). |
| persistence.size | string | 8Gi | PVC size per member. |
| config | object | {} | Custom mongod.conf content (WiredTiger cache, profiling). |
| port | integer | 27017 | MongoDB listen port. |
| initdbScripts | object | {} | .js/.sh init scripts run via /docker-entrypoint-initdb.d/. |
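
For example, initdbScripts can seed collections or indexes on first startup; a sketch, with an illustrative script body:

initdbScripts:
  create-indexes.js: |
    // runs once, during first initialization only
    db = db.getSiblingDB('myapp');
    db.items.createIndex({ createdAt: 1 });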

Set WiredTiger cache to ~50% of available memory

MongoDB’s WiredTiger engine defaults its cache to the larger of 256 MB or 50% of (total memory minus 1 GB). Always set resources.limits.memory and tune config.storage.wiredTiger.engineConfig.cacheSizeGB explicitly to avoid OOM kills. For example, a pod with a 4 GiB memory limit should use cacheSizeGB: 1.5, leaving headroom for OS and index overhead.
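
A matching values sketch for the 4 GiB example, assuming the chart exposes a standard top-level resources block:

resources:
  limits:
    memory: 4Gi

config:
  storage:
    wiredTiger:
      engineConfig:
        cacheSizeGB: 1.5 # well under the 4Gi limit, leaving headroom for connections and the OS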

Metrics

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| metrics.enabled | boolean | false | Deploy the percona/mongodb_exporter sidecar. |
| metrics.port | integer | 9216 | Exporter metrics port. |
| metrics.serviceMonitor.enabled | boolean | false | Create a Prometheus ServiceMonitor resource. |

Backup

Backups use mongodump. All databases are dumped and archived.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| backup.enabled | boolean | false | Enable the scheduled mongodump S3 backup. |
| backup.schedule | string | "0 3 * * *" | Cron schedule. |
| backup.s3.endpoint | string | "" | S3-compatible endpoint URL. |
| backup.s3.bucket | string | "" | Target bucket name. |
| backup.s3.existingSecret | string | "" | Existing secret with S3 credentials. |
| backup.database.mongoDumpArgs | string | "" | Extra mongodump arguments. |
| extraManifests | array | [] | Extra Kubernetes manifests. |
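
A minimal backup configuration; a sketch in which the endpoint, bucket, and secret names are illustrative:

backup:
  enabled: true
  schedule: "0 3 * * *" # daily at 03:00
  s3:
    endpoint: https://s3.example.com
    bucket: mongodb-backups
    existingSecret: s3-credentials
  database:
    mongoDumpArgs: "--gzip" # compress the mongodump archive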

Upgrade Notes

Switching from standalone to replicaset requires a full dump and restore

The chart cannot convert a standalone instance with existing data into a replica set in place. You must:

  1. Dump the data with mongodump.
  2. Deploy the chart in replicaset mode.
  3. Restore data with mongorestore.
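
A sketch of the dump-and-restore cycle; host names, credentials, and file paths are illustrative:

# step 1: dump everything from the standalone instance
mongodump --uri="mongodb://root:PASSWORD@my-mongo:27017" --archive=standalone.archive
# step 3: restore into the freshly deployed replica set
mongorestore --uri="mongodb://root:PASSWORD@my-mongo-headless:27017/?replicaSet=rs0" --archive=standalone.archive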

Changing replicaSet.members on an existing replica set triggers a reconfiguration. Scaling down removes members from the set; before doing so, verify the remaining members are fully in sync so that no writes exist only on the members being removed.

More Information