
MongoDB(R) packaged by Bitnami

MongoDB(R) is an open source NoSQL database. Easy to use, it stores data in JSON-like documents. It offers automated scalability and high performance, and is ideal for developing cloud native applications.

Overview of MongoDB®

Disclaimer: The respective trademarks mentioned in the offering are owned by the respective companies. We do not provide a commercial license for any of these products. This listing has an open-source license. MongoDB(R) is run and maintained by MongoDB, which is a completely separate project from Bitnami.

TL;DR

helm install my-release oci://registry-1.docker.io/bitnamicharts/mongodb

Looking to use MongoDB® in production? Try VMware Tanzu Application Catalog, the enterprise edition of Bitnami Application Catalog.

Introduction

This chart bootstraps a MongoDB(®) deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The command deploys MongoDB(®) on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the my-release deployment:

helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Architecture

This chart allows installing MongoDB(®) using two different architecture setups: standalone or replicaset. Use the architecture parameter to choose between them:

architecture="standalone"
architecture="replicaset"
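
For example, to deploy a replica set, pass the parameter at install time (substituting the registry placeholders as described above):

helm install my-release --set architecture=replicaset oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb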

Standalone architecture

The standalone architecture installs a Deployment (or a StatefulSet, if useStatefulSet=true) with one MongoDB® server (it cannot be scaled):

     ----------------
    |    MongoDB®    |
    |      svc       |
     ----------------
            |
            v
     ----------------
    |    MongoDB®    |
    |     Server     |
    |      Pod       |
     ----------------

Replicaset architecture

The chart also supports the replicaset architecture with and without a MongoDB(®) Arbiter:

When the MongoDB(®) Arbiter is enabled, the chart installs two StatefulSets: A StatefulSet with N MongoDB(®) servers (organised with one primary and N-1 secondary nodes), and a StatefulSet with one MongoDB(®) arbiter node (it cannot be scaled).

     ----------------   ----------------   ----------------      --------------
    |   MongoDB® 0   | |   MongoDB® 1   | |   MongoDB® N   |    |   Arbiter    |
    |  external svc  | |  external svc  | |  external svc  |    |     svc      |
     ----------------   ----------------   ----------------      --------------
            |                  |                  |                    |
            v                  v                  v                    v
     ----------------   ----------------   ----------------      --------------
    |   MongoDB® 0   | |   MongoDB® 1   | |   MongoDB® N   |    |   MongoDB®   |
    |     Server     | |     Server     | |     Server     |    |   Arbiter    |
    |      Pod       | |      Pod       | |      Pod       |    |     Pod      |
     ----------------   ----------------   ----------------      --------------
          primary           secondary         secondary

The PSA (Primary-Secondary-Arbiter) model is useful when the third Availability Zone cannot hold a full MongoDB(®) instance. The MongoDB(®) Arbiter, acting as decision maker, is lightweight and can run alongside other workloads.

NOTE: An update takes your MongoDB(®) replicaset offline if the Arbiter is enabled and the number of MongoDB(®) replicas is two. Helm applies updates to the StatefulSets for the MongoDB(®) instances and the Arbiter at the same time, so you lose two out of three quorum votes.
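
One way to avoid losing quorum during updates, assuming your cluster can hold three full data-bearing nodes, is to run three replicas and disable the Arbiter (a minimal sketch):

helm install my-release \
    --set architecture=replicaset,replicaCount=3,arbiter.enabled=false \
    oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb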

Without the Arbiter, the chart deploys a single StatefulSet with N MongoDB(®) servers (organised with one primary and N-1 secondary nodes).

     ----------------   ----------------   ----------------
    |   MongoDB® 0   | |   MongoDB® 1   | |   MongoDB® N   |
    |  external svc  | |  external svc  | |  external svc  |
     ----------------   ----------------   ----------------
            |                  |                  |
            v                  v                  v
     ----------------   ----------------   ----------------
    |   MongoDB® 0   | |   MongoDB® 1   | |   MongoDB® N   |
    |     Server     | |     Server     | |     Server     |
    |      Pod       | |      Pod       | |      Pod       |
     ----------------   ----------------   ----------------
          primary           secondary         secondary

There are no services load balancing requests between the MongoDB(®) nodes; instead, each node has an associated service so that it can be accessed individually.

NOTE: Although the first replica is initially assigned the primary role, any of the secondary nodes can become the primary if the current primary is down, or during upgrades. Do not make any assumptions about which replica holds the primary role. Instead, configure your MongoDB(®) client with the list of MongoDB(®) hostnames so it can dynamically choose the node to send requests to.
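
For example, a client connection string listing every member might look like the following sketch, which assumes the release name my-release, the default namespace, the default headless service name my-release-mongodb-headless and the default replica set name rs0; adjust these to your deployment:

mongodb://my-user:my-password@my-release-mongodb-0.my-release-mongodb-headless.default.svc.cluster.local:27017,my-release-mongodb-1.my-release-mongodb-headless.default.svc.cluster.local:27017/my-database?replicaSet=rs0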

Parameters

Global parameters

Name | Description | Value
global.imageRegistry | Global Docker image registry | ""
global.imagePullSecrets | Global Docker registry secret names as an array | []
global.storageClass | Global StorageClass for Persistent Volume(s) | ""
global.namespaceOverride | Override the namespace for resources deployed by the chart, but can itself be overridden by the local namespaceOverride | ""

Common parameters

Name | Description | Value
nameOverride | String to partially override mongodb.fullname template (will maintain the release name) | ""
fullnameOverride | String to fully override mongodb.fullname template | ""
namespaceOverride | String to fully override common.names.namespace | ""
kubeVersion | Force target Kubernetes version (using Helm capabilities if not set) | ""
clusterDomain | Default Kubernetes cluster domain | cluster.local
extraDeploy | Array of extra objects to deploy with the release | []
commonLabels | Add labels to all the deployed resources (sub-charts are not considered). Evaluated as a template | {}
commonAnnotations | Common annotations to add to all MongoDB(®) resources (sub-charts are not considered). Evaluated as a template | {}
topologyKey | Override common lib default topology key. If empty, "kubernetes.io/hostname" is used | ""
serviceBindings.enabled | Create secret for service binding (Experimental) | false
enableServiceLinks | Whether information about services should be injected into the pod's environment variables | true
diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false
diagnosticMode.command | Command to override all containers in the deployment | ["sleep"]
diagnosticMode.args | Args to override all containers in the deployment | ["infinity"]
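
For example, to troubleshoot a failing deployment you can start the pods in diagnostic mode, which disables probes and keeps the containers sleeping so you can exec into them (a minimal sketch using the registry placeholders described above):

helm install my-release --set diagnosticMode.enabled=true oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb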

MongoDB(®) parameters

Name | Description | Value
image.registry | MongoDB(®) image registry | REGISTRY_NAME
image.repository | MongoDB(®) image repository | REPOSITORY_NAME/mongodb
image.digest | MongoDB(®) image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | ""
image.pullPolicy | MongoDB(®) image pull policy | IfNotPresent
image.pullSecrets | Specify docker-registry secret names as an array | []
image.debug | Set to true if you would like to see extra information on logs | false
schedulerName | Name of the scheduler (other than default) to dispatch pods | ""
architecture | MongoDB(®) architecture (standalone or replicaset) | standalone
useStatefulSet | Set to true to use a StatefulSet instead of a Deployment (only when architecture=standalone) | false
auth.enabled | Enable authentication | true
auth.rootUser | MongoDB(®) root user | root
auth.rootPassword | MongoDB(®) root password | ""
auth.usernames | List of custom users to be created during the initialization | []
auth.passwords | List of passwords for the custom users set at auth.usernames | []
auth.databases | List of custom databases to be created during the initialization | []
auth.username | DEPRECATED: use auth.usernames instead | ""
auth.password | DEPRECATED: use auth.passwords instead | ""
auth.database | DEPRECATED: use auth.databases instead | ""
auth.replicaSetKey | Key used for authentication in the replicaset (only when architecture=replicaset) | ""
auth.existingSecret | Existing secret with MongoDB(®) credentials (keys: mongodb-passwords, mongodb-root-password, mongodb-metrics-password, mongodb-replica-set-key) | ""
tls.enabled | Enable MongoDB(®) TLS support between nodes in the cluster as well as between mongo clients and nodes | false
tls.mTLS.enabled | If TLS support is enabled, require clients to provide certificates | true
tls.autoGenerated | Generate a custom CA and self-signed certificates | true
tls.existingSecret | Existing secret with TLS certificates (keys: mongodb-ca-cert, mongodb-ca-key) | ""
tls.caCert | Custom CA certificate (base64 encoded) | ""
tls.caKey | CA certificate private key (base64 encoded) | ""
tls.pemChainIncluded | Flag to denote that the Certificate Authority (CA) certificates are bundled with the endpoint cert | false
tls.standalone.existingSecret | Existing secret with TLS certificates (tls.key, tls.crt, ca.crt) or (tls.key, tls.crt) with tls.pemChainIncluded set as enabled | ""
tls.replicaset.existingSecrets | Array of existing secrets with TLS certificates (tls.key, tls.crt, ca.crt) or (tls.key, tls.crt) with tls.pemChainIncluded set as enabled | []
tls.hidden.existingSecrets | Array of existing secrets with TLS certificates (tls.key, tls.crt, ca.crt) or (tls.key, tls.crt) with tls.pemChainIncluded set as enabled | []
tls.arbiter.existingSecret | Existing secret with TLS certificates (tls.key, tls.crt, ca.crt) or (tls.key, tls.crt) with tls.pemChainIncluded set as enabled | ""
tls.image.registry | Init container TLS certs setup image registry | REGISTRY_NAME
tls.image.repository | Init container TLS certs setup image repository | REPOSITORY_NAME/nginx
tls.image.digest | Init container TLS certs setup image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | ""
tls.image.pullPolicy | Init container TLS certs setup image pull policy | IfNotPresent
tls.image.pullSecrets | Init container TLS certs specify docker-registry secret names as an array | []
tls.extraDnsNames | Add extra DNS names to the CA; can solve x509 auth issues for pod clients | []
tls.mode | Allows to set the TLS mode which should be used when TLS is enabled (options: allowTLS, preferTLS, requireTLS) | requireTLS
tls.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if tls.resources is set (tls.resources is recommended for production) | none
tls.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
tls.securityContext | Init container generate-tls-cert Security context | {}
automountServiceAccountToken | Mount Service Account token in pod | false
hostAliases | Add deployment host aliases | []
replicaSetName | Name of the replica set (only when architecture=replicaset) | rs0
replicaSetHostnames | Enable DNS hostnames in the replicaset config (only when architecture=replicaset) | true
enableIPv6 | Switch to enable/disable IPv6 on MongoDB(®) | false
directoryPerDB | Switch to enable/disable DirectoryPerDB on MongoDB(®) | false
systemLogVerbosity | MongoDB(®) system log verbosity level | 0
disableSystemLog | Switch to enable/disable MongoDB(®) system log | false
disableJavascript | Switch to enable/disable MongoDB(®) server-side JavaScript execution | false
enableJournal | Switch to enable/disable MongoDB(®) Journaling | true
configuration | MongoDB(®) configuration file to be used for Primary and Secondary nodes | ""

replicaSetConfigurationSettings: settings applied during runtime (not via configuration file)

Name | Description | Value
replicaSetConfigurationSettings.enabled | Switch to enable/disable configuring MongoDB(®) runtime rs.conf settings | false
replicaSetConfigurationSettings.configuration | Runtime rs.conf settings | {}
existingConfigmap | Name of existing ConfigMap with MongoDB(®) configuration for Primary and Secondary nodes | ""
initdbScripts | Dictionary of initdb scripts | {}
initdbScriptsConfigMap | Existing ConfigMap with custom initdb scripts | ""
command | Override default container command (useful when using custom images) | []
args | Override default container args (useful when using custom images) | []
extraFlags | MongoDB(®) additional command line flags | []
extraEnvVars | Extra environment variables to add to MongoDB(®) pods | []
extraEnvVarsCM | Name of existing ConfigMap containing extra env vars | ""
extraEnvVarsSecret | Name of existing Secret containing extra env vars (in case of sensitive data) | ""

MongoDB(®) statefulset parameters

Name | Description | Value
annotations | Additional annotations to be added to the MongoDB(®) statefulset. Evaluated as a template | {}
labels | Additional labels to be added to the MongoDB(®) statefulset. Evaluated as a template | {}
replicaCount | Number of MongoDB(®) nodes | 2
updateStrategy.type | Strategy to use to replace existing MongoDB(®) pods. When architecture=standalone and useStatefulSet=false, | RollingUpdate
podManagementPolicy | Pod management policy for MongoDB(®) | OrderedReady
podAffinityPreset | MongoDB(®) Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | ""
podAntiAffinityPreset | MongoDB(®) Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | soft
nodeAffinityPreset.type | MongoDB(®) Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | ""
nodeAffinityPreset.key | MongoDB(®) Node label key to match. Ignored if affinity is set | ""
nodeAffinityPreset.values | MongoDB(®) Node label values to match. Ignored if affinity is set | []
affinity | MongoDB(®) Affinity for pod assignment | {}
nodeSelector | MongoDB(®) Node labels for pod assignment | {}
tolerations | MongoDB(®) Tolerations for pod assignment | []
topologySpreadConstraints | MongoDB(®) Spread Constraints for Pods | []
lifecycleHooks | LifecycleHook for the MongoDB(®) container(s) to automate configuration before or after startup | {}
terminationGracePeriodSeconds | MongoDB(®) Termination Grace Period | ""
podLabels | MongoDB(®) pod labels | {}
podAnnotations | MongoDB(®) Pod annotations | {}
priorityClassName | Name of the existing priority class to be used by MongoDB(®) pod(s) | ""
runtimeClassName | Name of the runtime class to be used by MongoDB(®) pod(s) | ""
podSecurityContext.enabled | Enable MongoDB(®) pod(s)' Security Context | true
podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always
podSecurityContext.supplementalGroups | Set filesystem extra groups | []
podSecurityContext.fsGroup | Group ID for the volumes of the MongoDB(®) pod(s) | 1001
podSecurityContext.sysctls | sysctl settings of the MongoDB(®) pod(s) | []
containerSecurityContext.enabled | Enabled containers' Security Context | true
containerSecurityContext.seLinuxOptions | Set SELinux options in container | nil
containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001
containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 0
containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true
containerSecurityContext.privileged | Set container's Security Context privileged | false
containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | false
containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false
containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"]
containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault
resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production) | none
resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
containerPorts.mongodb | MongoDB(®) container port | 27017
livenessProbe.enabled | Enable livenessProbe | true
livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 30
livenessProbe.periodSeconds | Period seconds for livenessProbe | 20
livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 10
livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6
livenessProbe.successThreshold | Success threshold for livenessProbe | 1
readinessProbe.enabled | Enable readinessProbe | true
readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 5
readinessProbe.periodSeconds | Period seconds for readinessProbe | 10
readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 5
readinessProbe.failureThreshold | Failure threshold for readinessProbe | 6
readinessProbe.successThreshold | Success threshold for readinessProbe | 1
startupProbe.enabled | Enable startupProbe | false
startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 5
startupProbe.periodSeconds | Period seconds for startupProbe | 20
startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 10
startupProbe.failureThreshold | Failure threshold for startupProbe | 30
startupProbe.successThreshold | Success threshold for startupProbe | 1
customLivenessProbe | Override default liveness probe for MongoDB(®) containers | {}
customReadinessProbe | Override default readiness probe for MongoDB(®) containers | {}
customStartupProbe | Override default startup probe for MongoDB(®) containers | {}
initContainers | Add additional init containers for the MongoDB(®) pod(s) | []
sidecars | Add additional sidecar containers for the MongoDB(®) pod(s) | []
extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the MongoDB(®) container(s) | []
extraVolumes | Optionally specify extra list of additional volumes to the MongoDB(®) statefulset | []
pdb.create | Enable/disable a Pod Disruption Budget creation for MongoDB(®) pod(s) | false
pdb.minAvailable | Minimum number/percentage of MongoDB(®) pods that must still be available after the eviction | 1
pdb.maxUnavailable | Maximum number/percentage of MongoDB(®) pods that may be made unavailable after the eviction | ""

Traffic exposure parameters

Name | Description | Value
service.nameOverride | MongoDB(®) service name | ""
service.type | Kubernetes Service type (only for standalone architecture) | ClusterIP
service.portName | MongoDB(®) service port name (only for standalone architecture) | mongodb
service.ports.mongodb | MongoDB(®) service port | 27017
service.nodePorts.mongodb | Port to bind to for NodePort and LoadBalancer service types (only for standalone architecture) | ""
service.clusterIP | MongoDB(®) service cluster IP (only for standalone architecture) | ""
service.externalIPs | Specify the externalIP value for the ClusterIP service type (only for standalone architecture) | []
service.loadBalancerIP | loadBalancerIP for MongoDB(®) Service (only for standalone architecture) | ""
service.loadBalancerClass | loadBalancerClass for MongoDB(®) Service (only for standalone architecture) | ""
service.loadBalancerSourceRanges | Address(es) that are allowed when service is LoadBalancer (only for standalone architecture) | []
service.allocateLoadBalancerNodePorts | Whether to allocate node ports when service type is LoadBalancer | true
service.extraPorts | Extra ports to expose (normally used with the sidecar value) | []
service.annotations | Provide any additional annotations that may be required | {}
service.externalTrafficPolicy | Service external traffic policy (only for standalone architecture) | Local
service.sessionAffinity | Control where client requests go, to the same pod or round-robin | None
service.sessionAffinityConfig | Additional settings for the sessionAffinity | {}
service.headless.annotations | Annotations for the headless service | {}
externalAccess.enabled | Enable Kubernetes external cluster access to MongoDB(®) nodes (only for replicaset architecture) | false
externalAccess.autoDiscovery.enabled | Enable using an init container to auto-detect external IPs by querying the K8s API | false
externalAccess.autoDiscovery.image.registry | Init container auto-discovery image registry | REGISTRY_NAME
externalAccess.autoDiscovery.image.repository | Init container auto-discovery image repository | REPOSITORY_NAME/kubectl
externalAccess.autoDiscovery.image.digest | Init container auto-discovery image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | ""
externalAccess.autoDiscovery.image.pullPolicy | Init container auto-discovery image pull policy | IfNotPresent
externalAccess.autoDiscovery.image.pullSecrets | Init container auto-discovery image pull secrets | []
externalAccess.autoDiscovery.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if externalAccess.autoDiscovery.resources is set (externalAccess.autoDiscovery.resources is recommended for production) | none
externalAccess.autoDiscovery.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
externalAccess.externalMaster.enabled | Use external master for bootstrapping | false
externalAccess.externalMaster.host | External master host to bootstrap from | ""
externalAccess.externalMaster.port | Port for MongoDB(®) service external master host | 27017
externalAccess.service.type | Kubernetes Service type for external access. Allowed values: NodePort, LoadBalancer or ClusterIP | LoadBalancer
externalAccess.service.portName | MongoDB(®) port name used for external access when service type is LoadBalancer | mongodb
externalAccess.service.ports.mongodb | MongoDB(®) port used for external access when service type is LoadBalancer | 27017
externalAccess.service.loadBalancerIPs | Array of load balancer IPs for MongoDB(®) nodes | []
externalAccess.service.loadBalancerClass | loadBalancerClass when service type is LoadBalancer | ""
externalAccess.service.loadBalancerSourceRanges | Address(es) that are allowed when service is LoadBalancer | []
externalAccess.service.allocateLoadBalancerNodePorts | Whether to allocate node ports when service type is LoadBalancer | true
externalAccess.service.externalTrafficPolicy | MongoDB(®) service external traffic policy | Local
externalAccess.service.nodePorts | Array of node ports used to configure MongoDB(®) advertised hostname when service type is NodePort | []
externalAccess.service.domain | Domain or external IP used to configure MongoDB(®) advertised hostname when service type is NodePort | ""
externalAccess.service.extraPorts | Extra ports to expose (normally used with the sidecar value) | []
externalAccess.service.annotations | Service annotations for external access | {}
externalAccess.service.sessionAffinity | Control where client requests go, to the same pod or round-robin | None
externalAccess.service.sessionAffinityConfig | Additional settings for the sessionAffinity | {}
externalAccess.hidden.enabled | Enable Kubernetes external cluster access to MongoDB(®) hidden nodes | false
externalAccess.hidden.service.type | Kubernetes Service type for external access. Allowed values: NodePort or LoadBalancer | LoadBalancer
externalAccess.hidden.service.portName | MongoDB(®) port name used for external access when service type is LoadBalancer | mongodb
externalAccess.hidden.service.ports.mongodb | MongoDB(®) port used for external access when service type is LoadBalancer | 27017
externalAccess.hidden.service.loadBalancerIPs | Array of load balancer IPs for MongoDB(®) nodes | []
externalAccess.hidden.service.loadBalancerClass | loadBalancerClass when service type is LoadBalancer | ""
externalAccess.hidden.service.loadBalancerSourceRanges | Address(es) that are allowed when service is LoadBalancer | []
externalAccess.hidden.service.allocateLoadBalancerNodePorts | Whether to allocate node ports when service type is LoadBalancer | true
externalAccess.hidden.service.externalTrafficPolicy | MongoDB(®) service external traffic policy | Local
externalAccess.hidden.service.nodePorts | Array of node ports used to configure MongoDB(®) advertised hostname when service type is NodePort. Length must be the same as replicaCount | []
externalAccess.hidden.service.domain | Domain or external IP used to configure MongoDB(®) advertised hostname when service type is NodePort | ""
externalAccess.hidden.service.extraPorts | Extra ports to expose (normally used with the sidecar value) | []
externalAccess.hidden.service.annotations | Service annotations for external access | {}
externalAccess.hidden.service.sessionAffinity | Control where client requests go, to the same pod or round-robin | None
externalAccess.hidden.service.sessionAffinityConfig | Additional settings for the sessionAffinity | {}

Network policy parameters

Name | Description | Value
networkPolicy.enabled | Specifies whether a NetworkPolicy should be created | true
networkPolicy.allowExternal | Don't require server label for connections | true
networkPolicy.allowExternalEgress | Allow the pod to access any range of port and all destinations | true
networkPolicy.extraIngress | Add extra ingress rules to the NetworkPolicy | []
networkPolicy.extraEgress | Add extra egress rules to the NetworkPolicy | []
networkPolicy.ingressNSMatchLabels | Labels to match to allow traffic from other namespaces | {}
networkPolicy.ingressNSPodMatchLabels | Pod labels to match to allow traffic from other namespaces | {}
Persistence parameters

Name | Description | Value
persistence.enabled | Enable MongoDB(®) data persistence using PVC | true
persistence.name | Name of the PVC and mounted volume | datadir
persistence.medium | Provide a medium for emptyDir volumes | ""
persistence.existingClaim | Provide an existing PersistentVolumeClaim (only when architecture=standalone) | ""
persistence.resourcePolicy | Set it to "keep" to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted | ""
persistence.storageClass | PVC Storage Class for MongoDB(®) data volume | ""
persistence.accessModes | PV Access Mode | ["ReadWriteOnce"]
persistence.size | PVC Storage Request for MongoDB(®) data volume | 8Gi
persistence.annotations | PVC annotations | {}
persistence.mountPath | Path to mount the volume at | /bitnami/mongodb
persistence.subPath | Subdirectory of the volume to mount at | ""
persistence.volumeClaimTemplates.selector | A label query over volumes to consider for binding (e.g. when using local volumes) | {}
persistence.volumeClaimTemplates.requests | Custom PVC requests attributes | {}
persistence.volumeClaimTemplates.dataSource | Add dataSource to the VolumeClaimTemplate | {}
persistentVolumeClaimRetentionPolicy.enabled | Enable Persistent volume retention policy for MongoDB(®) Statefulset | false
persistentVolumeClaimRetentionPolicy.whenScaled | Volume retention behavior when the replica count of the StatefulSet is reduced | Retain
persistentVolumeClaimRetentionPolicy.whenDeleted | Volume retention behavior that applies when the StatefulSet is deleted | Retain

Backup parameters

Name | Description | Value
backup.enabled | Enable regular logical dumps (backups) of the database | false
backup.cronjob.schedule | Set the cronjob parameter schedule | @daily
backup.cronjob.concurrencyPolicy | Set the cronjob parameter concurrencyPolicy | Allow
backup.cronjob.failedJobsHistoryLimit | Set the cronjob parameter failedJobsHistoryLimit | 1
backup.cronjob.successfulJobsHistoryLimit | Set the cronjob parameter successfulJobsHistoryLimit | 3
backup.cronjob.startingDeadlineSeconds | Set the cronjob parameter startingDeadlineSeconds | ""
backup.cronjob.ttlSecondsAfterFinished | Set the cronjob parameter ttlSecondsAfterFinished | ""
backup.cronjob.restartPolicy | Set the cronjob parameter restartPolicy | OnFailure
backup.cronjob.containerSecurityContext.enabled | Enabled containers' Security Context | true
backup.cronjob.containerSecurityContext.seLinuxOptions | Set SELinux options in container | nil
backup.cronjob.containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001
backup.cronjob.containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 0
backup.cronjob.containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true
backup.cronjob.containerSecurityContext.privileged | Set container's Security Context privileged | false
backup.cronjob.containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | false
backup.cronjob.containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false
backup.cronjob.containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"]
backup.cronjob.containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault
backup.cronjob.command | Set backup container's command to run | []
backup.cronjob.labels | Set the cronjob labels | {}
backup.cronjob.annotations | Set the cronjob annotations | {}
backup.cronjob.storage.existingClaim | Provide an existing PersistentVolumeClaim (only when architecture=standalone) | ""
backup.cronjob.storage.resourcePolicy | Set it to "keep" to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted | ""
backup.cronjob.storage.storageClass | PVC Storage Class for the backup data volume | ""
backup.cronjob.storage.accessModes | PV Access Mode | ["ReadWriteOnce"]
backup.cronjob.storage.size | PVC Storage Request for the backup data volume | 8Gi
backup.cronjob.storage.annotations | PVC annotations | {}
backup.cronjob.storage.mountPath | Path to mount the volume at | /backup/mongodb
backup.cronjob.storage.subPath | Subdirectory of the volume to mount at | ""
backup.cronjob.storage.volumeClaimTemplates.selector | A label query over volumes to consider for binding (e.g. when using local volumes) | {}

RBAC parameters

Name | Description | Value
serviceAccount.create | Enable creation of ServiceAccount for MongoDB(®) pods | true
serviceAccount.name | Name of the created serviceAccount | ""
serviceAccount.annotations | Additional Service Account annotations | {}
serviceAccount.automountServiceAccountToken | Allows auto mount of ServiceAccountToken on the serviceAccount created | false
rbac.create | Whether to create & use RBAC resources or not | false
rbac.rules | Custom rules to create following the role specification | []
podSecurityPolicy.create | Whether to create a PodSecurityPolicy. WARNING: PodSecurityPolicy is deprecated in Kubernetes v1.21 or later, unavailable in v1.25 or later | false
podSecurityPolicy.allowPrivilegeEscalation | Enable privilege escalation | false
podSecurityPolicy.privileged | Allow privileged | false
podSecurityPolicy.spec | Specify the full spec to use for Pod Security Policy | {}

Volume Permissions parameters

Name | Description | Value
volumePermissions.enabled | Enable init container that changes the owner and group of the persistent volume(s) mountpoint to runAsUser:fsGroup | false
volumePermissions.image.registry | Init container volume-permissions image registry | REGISTRY_NAME
volumePermissions.image.repository | Init container volume-permissions image repository | REPOSITORY_NAME/os-shell
volumePermissions.image.digest | Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | ""
volumePermissions.image.pullPolicy | Init container volume-permissions image pull policy | IfNotPresent
volumePermissions.image.pullSecrets | Specify docker-registry secret names as an array | []
volumePermissions.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production) | none
volumePermissions.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
volumePermissions.securityContext.seLinuxOptions | Set SELinux options in container | nil
volumePermissions.securityContext.runAsUser | User ID for the volumePermissions container | 0

Arbiter parameters

Name | Description | Value
arbiter.enabled | Enable deploying the arbiter | true
arbiter.automountServiceAccountToken | Mount Service Account token in pod | false
arbiter.hostAliases | Add deployment host aliases | []
arbiter.configuration | Arbiter configuration file to be used | ""
arbiter.existingConfigmap | Name of existing ConfigMap with Arbiter configuration | ""
arbiter.command | Override default container command (useful when using custom images) | []
arbiter.args | Override default container args (useful when using custom images) | []
arbiter.extraFlags | Arbiter additional command line flags | []
arbiter.extraEnvVars | Extra environment variables to add to Arbiter pods | []
arbiter.extraEnvVarsCM | Name of existing ConfigMap containing extra env vars | ""
arbiter.extraEnvVarsSecret | Name of existing Secret containing extra env vars (in case of sensitive data) | ""
arbiter.annotations | Additional annotations to be added to the Arbiter statefulset | {}
arbiter.labels | Additional labels to be added to the Arbiter statefulset | {}
arbiter.topologySpreadConstraints | MongoDB(®) Spread Constraints for arbiter Pods | []
arbiter.lifecycleHooks | LifecycleHook for the Arbiter container to automate configuration before or after startup | {}
arbiter.terminationGracePeriodSeconds | Arbiter Termination Grace Period | ""
arbiter.updateStrategy.type | Strategy that will be employed to update Pods in the StatefulSet | RollingUpdate
arbiter.podManagementPolicy | Pod management policy for MongoDB(®) | OrderedReady
arbiter.schedulerName | Name of the scheduler (other than default) to dispatch pods | ""
arbiter.podAffinityPreset | Arbiter Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | ""
arbiter.podAntiAffinityPreset | Arbiter Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | soft
arbiter.nodeAffinityPreset.type | Arbiter Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | ""
arbiter.nodeAffinityPreset.key | Arbiter Node label key to match. Ignored if affinity is set | ""
arbiter.nodeAffinityPreset.values | Arbiter Node label values to match. Ignored if affinity is set | []
arbiter.affinity | Arbiter Affinity for pod assignment | {}
arbiter.nodeSelector | Arbiter Node labels for pod assignment | {}
arbiter.tolerations | Arbiter Tolerations for pod assignment | []
arbiter.podLabels | Arbiter pod labels | {}
arbiter.podAnnotations | Arbiter Pod annotations | {}
arbiter.priorityClassName | Name of the existing priority class to be used by Arbiter pod(s) | ""
arbiter.runtimeClassName | Name of the runtime class to be used by Arbiter pod(s) | ""
arbiter.podSecurityContext.enabled | Enable Arbiter pod(s)' Security Context | true
arbiter.podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always
arbiter.podSecurityContext.supplementalGroups | Set filesystem extra groups | []
arbiter.podSecurityContext.fsGroup | Group ID for the volumes of the Arbiter pod(s) | 1001
arbiter.podSecurityContext.sysctls | sysctl settings of the Arbiter pod(s) | []
arbiter.containerSecurityContext.enabled | Enabled containers' Security Context | true
arbiter.containerSecurityContext.seLinuxOptions | Set SELinux options in container | nil
arbiter.containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001
arbiter.containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 0
arbiter.containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true
arbiter.containerSecurityContext.privileged | Set container's Security Context privileged | false
arbiter.containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | false
arbiter.containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false
arbiter.containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"]
arbiter.containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault
arbiter.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if arbiter.resources is set (arbiter.resources is recommended for production) | none
arbiter.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
arbiter.containerPorts.mongodb | MongoDB(®) arbiter container port | 27017
arbiter.livenessProbe.enabled | Enable livenessProbe | true
arbiter.livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 30
arbiter.livenessProbe.periodSeconds | Period seconds for livenessProbe | 20
arbiter.livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 10
arbiter.livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6
arbiter.livenessProbe.successThreshold | Success threshold for livenessProbe | 1
arbiter.readinessProbe.enabled | Enable readinessProbe | true
arbiter.readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 5
arbiter.readinessProbe.periodSeconds | Period seconds for readinessProbe | 20
arbiter.readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 10
arbiter.readinessProbe.failureThreshold | Failure threshold for readinessProbe | 6
arbiter.readinessProbe.successThreshold | Success threshold for readinessProbe | 1
arbiter.startupProbe.enabled | Enable startupProbe | false
arbiter.startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 5
arbiter.startupProbe.periodSeconds | Period seconds for startupProbe | 10
arbiter.startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 5
arbiter.startupProbe.failureThreshold | Failure threshold for startupProbe | 30
arbiter.startupProbe.successThreshold | Success threshold for startupProbe | 1
arbiter.customLivenessProbe | Override default liveness probe for Arbiter containers | {}
arbiter.customReadinessProbe | Override default readiness probe for Arbiter containers | {}
arbiter.customStartupProbe | Override default startup probe for Arbiter containers | {}
arbiter.initContainers | Add additional init containers for the Arbiter pod(s) | []
arbiter.sidecars | Add additional sidecar containers for the Arbiter pod(s) | []
arbiter.extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the Arbiter container(s) | []
arbiter.extraVolumes | Optionally specify extra list of additional volumes to the Arbiter statefulset | []
arbiter.pdb.create | Enable/disable a Pod Disruption Budget creation for Arbiter pod(s) | false
arbiter.pdb.minAvailable | Minimum number/percentage of Arbiter pods that should remain scheduled | 1
arbiter.pdb.maxUnavailable | Maximum number/percentage of Arbiter pods that may be made unavailable | ""
arbiter.service.nameOverride | The arbiter service name | ""
arbiter.service.ports.mongodb | MongoDB(®) service port | 27017
arbiter.service.extraPorts | Extra ports to expose (normally used with the sidecar value) | []
arbiter.service.annotations | Provide any additional annotations that may be required | {}
arbiter.service.headless.annotations | Annotations for the headless service | {}

Hidden Node parameters

Name | Description | Value
hidden.enabled | Enable deploying the hidden nodes | false
hidden.automountServiceAccountToken | Mount Service Account token in pod | false
hidden.hostAliases | Add deployment host aliases | []
hidden.configuration | Hidden node configuration file to be used | ""
hidden.existingConfigmap | Name of existing ConfigMap with Hidden node configuration | ""
hidden.command | Override default container command (useful when using custom images) | []
hidden.args | Override default container args (useful when using custom images) | []
hidden.extraFlags | Hidden node additional command line flags | []
hidden.extraEnvVars | Extra environment variables to add to Hidden node pods | []
hidden.extraEnvVarsCM | Name of existing ConfigMap containing extra env vars | ""
hidden.extraEnvVarsSecret | Name of existing Secret containing extra env vars (in case of sensitive data) | ""
hidden.annotations | Additional annotations to be added to the hidden node statefulset | {}
hidden.labels | Additional labels to be added to the hidden node statefulset | {}
hidden.topologySpreadConstraints | MongoDB(®) Spread Constraints for hidden Pods | []
hidden.lifecycleHooks | LifecycleHook for the Hidden container to automate configuration before or after startup | {}
hidden.replicaCount | Number of hidden nodes (only when architecture=replicaset) | 1
hidden.terminationGracePeriodSeconds | Hidden Termination Grace Period | ""
hidden.updateStrategy.type | Strategy that will be employed to update Pods in the StatefulSet | RollingUpdate
hidden.podManagementPolicy | Pod management policy for hidden node | OrderedReady
hidden.schedulerName | Name of the scheduler (other than default) to dispatch pods | ""
hidden.podAffinityPreset | Hidden node Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | ""
hidden.podAntiAffinityPreset | Hidden node Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | soft
hidden.nodeAffinityPreset.type | Hidden Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | ""
hidden.nodeAffinityPreset.key | Hidden Node label key to match. Ignored if affinity is set | ""
hidden.nodeAffinityPreset.values | Hidden Node label values to match. Ignored if affinity is set | []
hidden.affinity | Hidden node Affinity for pod assignment | {}
hidden.nodeSelector | Hidden node Node labels for pod assignment | {}
hidden.tolerations | Hidden node Tolerations for pod assignment | []
hidden.podLabels | Hidden node pod labels | {}
hidden.podAnnotations | Hidden node Pod annotations | {}
hidden.priorityClassName | Name of the existing priority class to be used by hidden node pod(s) | ""
hidden.runtimeClassName | Name of the runtime class to be used by hidden node pod(s) | ""
hidden.podSecurityContext.enabled | Enable Hidden pod(s)' Security Context | true
hidden.podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always
hidden.podSecurityContext.supplementalGroups | Set filesystem extra groups | []
hidden.podSecurityContext.fsGroup | Group ID for the volumes of the Hidden pod(s) | 1001
hidden.podSecurityContext.sysctls | sysctl settings of the Hidden pod(s) | []
hidden.containerSecurityContext.enabled | Enabled containers' Security Context | true
hidden.containerSecurityContext.seLinuxOptions | Set SELinux options in container | nil
hidden.containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001
hidden.containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 0
hidden.containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true
hidden.containerSecurityContext.privileged | Set container's Security Context privileged | false
hidden.containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | false
hidden.containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false
hidden.containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"]
hidden.containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault
hidden.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if hidden.resources is set (hidden.resources is recommended for production) | none
hidden.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
hidden.containerPorts.mongodb | MongoDB(®) hidden container port | 27017
hidden.livenessProbe.enabled | Enable livenessProbe | true
hidden.livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 30
hidden.livenessProbe.periodSeconds | Period seconds for livenessProbe | 20
hidden.livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 10
hidden.livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6
hidden.livenessProbe.successThreshold | Success threshold for livenessProbe | 1
hidden.readinessProbe.enabled | Enable readinessProbe | true
hidden.readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 5
hidden.readinessProbe.periodSeconds | Period seconds for readinessProbe | 20
hidden.readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 10
hidden.readinessProbe.failureThreshold | Failure threshold for readinessProbe | 6
hidden.readinessProbe.successThreshold | Success threshold for readinessProbe | 1
hidden.startupProbe.enabled | Enable startupProbe | false
hidden.startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 5
hidden.startupProbe.periodSeconds | Period seconds for startupProbe | 10
hidden.startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 5
hidden.startupProbe.failureThreshold | Failure threshold for startupProbe | 30
hidden.startupProbe.successThreshold | Success threshold for startupProbe | 1
hidden.customLivenessProbe | Override default liveness probe for hidden node containers | {}
hidden.customReadinessProbe | Override default readiness probe for hidden node containers | {}
hidden.customStartupProbe | Override default startup probe for hidden node containers | {}
hidden.initContainers | Add init containers to the MongoDB(®) Hidden pods | []
hidden.sidecars | Add additional sidecar containers for the hidden node pod(s) | []
hidden.extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the hidden node container(s) | []
hidden.extraVolumes | Optionally specify extra list of additional volumes to the hidden node statefulset | []
hidden.pdb.create | Enable/disable a Pod Disruption Budget creation for hidden node pod(s) | false
hidden.pdb.minAvailable | Minimum number/percentage of hidden node pods that should remain scheduled | 1
hidden.pdb.maxUnavailable | Maximum number/percentage of hidden node pods that may be made unavailable | ""
hidden.persistence.enabled | Enable hidden node data persistence using PVC | true
hidden.persistence.medium | Provide a medium for emptyDir volumes | ""
hidden.persistence.storageClass | PVC Storage Class for hidden node data volume | ""
hidden.persistence.accessModes | PV Access Mode | ["ReadWriteOnce"]
hidden.persistence.size | PVC Storage Request for hidden node data volume | 8Gi
hidden.persistence.annotations | PVC annotations | {}
hidden.persistence.mountPath | The path the volume will be mounted at, useful when using different MongoDB(®) images | /bitnami/mongodb
hidden.persistence.subPath | The subdirectory of the volume to mount to, useful in dev environments | ""
hidden.persistence.volumeClaimTemplates.selector | A label query over volumes to consider for binding (e.g. when using local volumes) | {}
hidden.persistence.volumeClaimTemplates.requests | Custom PVC requests attributes | {}
hidden.persistence.volumeClaimTemplates.dataSource | Set volumeClaimTemplate dataSource | {}
hidden.service.portName | MongoDB(®) service port name | mongodb
hidden.service.ports.mongodb | MongoDB(®) service port | 27017
hidden.service.extraPorts | Extra ports to expose (normally used with the sidecar value) | []
hidden.service.annotations | Provide any additional annotations that may be required | {}
hidden.service.headless.annotations | Annotations for the headless service | {}

Metrics parameters

Name | Description | Value
metrics.enabled | Enable using a sidecar Prometheus exporter | false
metrics.image.registry | MongoDB(®) Prometheus exporter image registry | REGISTRY_NAME
metrics.image.repository | MongoDB(®) Prometheus exporter image repository | REPOSITORY_NAME/mongodb-exporter
metrics.image.digest | MongoDB(®) Prometheus exporter image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | ""
metrics.image.pullPolicy | MongoDB(®) Prometheus exporter image pull policy | IfNotPresent
metrics.image.pullSecrets | Specify docker-registry secret names as an array | []
metrics.username | String with username for the metrics exporter | ""
metrics.password | String with password for the metrics exporter | ""
metrics.compatibleMode | Enables old style mongodb-exporter metrics | true
metrics.collector.all | Enable all collectors. Same as enabling all individual metrics | false
metrics.collector.diagnosticdata | Enable collecting metrics from getDiagnosticData | true
metrics.collector.replicasetstatus | Enable collecting metrics from replSetGetStatus | true
metrics.collector.dbstats | Enable collecting metrics from dbStats | false
metrics.collector.topmetrics | Enable collecting metrics from the top admin command | false
metrics.collector.indexstats | Enable collecting metrics from $indexStats | false
metrics.collector.collstats | Enable collecting metrics from $collStats | false
metrics.collector.collstatsColls | List of <databases>.<collections> to get $collStats | []
metrics.collector.indexstatsColls | List of <databases>.<collections> to get $indexStats | []
metrics.collector.collstatsLimit | Disable collstats, dbstats, topmetrics and indexstats collectors if there are more than <n> collections (0 = no limit) | 0
metrics.extraFlags | String with extra flags to the metrics exporter | ""
metrics.command | Override default container command (useful when using custom images) | []
metrics.args | Override default container args (useful when using custom images) | []
metrics.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if metrics.resources is set (metrics.resources is recommended for production) | none
metrics.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
metrics.containerPort | Port of the Prometheus metrics container | 9216
metrics.service.annotations | Annotations for Prometheus Exporter pods. Evaluated as a template | {}
metrics.service.type | Type of the Prometheus metrics service | ClusterIP
metrics.service.ports.metrics | Port of the Prometheus metrics service | 9216
metrics.service.extraPorts | Extra ports to expose (normally used with the sidecar value) | []
metrics.livenessProbe.enabled | Enable livenessProbe | true
metrics.livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 15
metrics.livenessProbe.periodSeconds | Period seconds for livenessProbe | 5
metrics.livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 10
metrics.livenessProbe.failureThreshold | Failure threshold for livenessProbe | 3
metrics.livenessProbe.successThreshold | Success threshold for livenessProbe | 1
metrics.readinessProbe.enabled | Enable readinessProbe | true
metrics.readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 5
metrics.readinessProbe.periodSeconds | Period seconds for readinessProbe | 5
metrics.readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 10
metrics.readinessProbe.failureThreshold | Failure threshold for readinessProbe | 3
metrics.readinessProbe.successThreshold | Success threshold for readinessProbe | 1
metrics.startupProbe.enabled | Enable startupProbe | false
metrics.startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 5
metrics.startupProbe.periodSeconds | Period seconds for startupProbe | 10
metrics.startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 5
metrics.startupProbe.failureThreshold | Failure threshold for startupProbe | 30
metrics.startupProbe.successThreshold | Success threshold for startupProbe | 1
metrics.customLivenessProbe | Override default liveness probe for the metrics container | {}
metrics.customReadinessProbe | Override default readiness probe for the metrics container | {}
metrics.customStartupProbe | Override default startup probe for the metrics container | {}
metrics.extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the metrics container(s) | []
metrics.serviceMonitor.enabled | Create ServiceMonitor Resource for scraping metrics using Prometheus Operator | false
metrics.serviceMonitor.namespace | Namespace which Prometheus is running in | ""
metrics.serviceMonitor.interval | Interval at which metrics should be scraped | 30s
metrics.serviceMonitor.scrapeTimeout | Specify the timeout after which the scrape is ended | ""
metrics.serviceMonitor.relabelings | RelabelConfigs to apply to samples before scraping | []
metrics.serviceMonitor.metricRelabelings | MetricRelabelConfigs to apply to samples before ingestion | []
metrics.serviceMonitor.labels | Used to pass labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with | {}
metrics.serviceMonitor.selector | Prometheus instance selector labels | {}
metrics.serviceMonitor.honorLabels | Specify honorLabels parameter to add the scrape endpoint | false
metrics.serviceMonitor.jobLabel | The name of the label on the target service to use as the job name in Prometheus | ""
metrics.prometheusRule.enabled | Set this to true to create prometheusRules for Prometheus operator | false
metrics.prometheusRule.additionalLabels | Additional labels that can be used so prometheusRules will be discovered by Prometheus | {}
metrics.prometheusRule.namespace | Namespace where prometheusRules resource should be created | ""
metrics.prometheusRule.rules | Rules to be created, check values for an example | []

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install my-release \
    --set auth.rootPassword=secretpassword,auth.usernames[0]=my-user,auth.passwords[0]=my-password,auth.databases[0]=my-database \
    oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The above command sets the MongoDB(®) root account password to secretpassword. Additionally, it creates a standard database user named my-user, with the password my-password, who has access to a database named my-database.

NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts. Tip: You can use the default values.yaml

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the resources value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.

To make this process easier, the chart contains the resourcesPreset values, which automatically set the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset is discouraged in production workloads as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
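
For example, an explicit resources block in your values file could look as follows (a sketch; adapt the figures to your workload):

resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi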

Rolling vs Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
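
For example, you can pin the exact image you validated using the image.digest parameter (a sketch with a placeholder digest; use the real digest of your image):

image:
  digest: sha256:aa...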

Customize a new MongoDB instance

The Bitnami MongoDB(®) image supports the use of custom scripts to initialize a fresh instance. In order to execute the scripts, two options are available:

  • Specify them using the initdbScripts parameter as dict.
  • Define an external Kubernetes ConfigMap with all the initialization scripts by setting the initdbScriptsConfigMap parameter. Note that this will override the previous option.

The allowed script extensions are .sh and .js.
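
For example, the following values snippet defines one shell and one JavaScript initialization script inline (a minimal sketch; the script names are arbitrary):

initdbScripts:
  my_init_script.sh: |
    #!/bin/bash
    echo "Initializing a fresh MongoDB(R) instance"
  my_init_script.js: |
    // Executed against the database on first initialization
    db.createCollection("example");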

Replicaset: Access MongoDB(®) nodes from outside the cluster

In order to access MongoDB(®) nodes from outside the cluster when using a replicaset architecture, a specific service per MongoDB(®) pod will be created. There are two ways of configuring external access:

  • Using LoadBalancer services
  • Using NodePort services.

Use LoadBalancer services

Two alternatives are available to use LoadBalancer services:

  • Use random load balancer IP addresses using an initContainer that waits for the IP addresses to be ready and discovers them automatically. An example deployment configuration is shown below:

    architecture=replicaset
    replicaCount=2
    externalAccess.enabled=true
    externalAccess.service.type=LoadBalancer
    externalAccess.service.port=27017
    externalAccess.autoDiscovery.enabled=true
    serviceAccount.create=true
    rbac.create=true
    

    NOTE: This option requires creating RBAC rules on clusters where RBAC policies are enabled.

  • Manually specify the load balancer IP addresses. An example deployment configuration is shown below, with the placeholder EXTERNAL-IP-ADDRESS-X used in place of the load balancer IP addresses:

    architecture=replicaset
    replicaCount=2
    externalAccess.enabled=true
    externalAccess.service.type=LoadBalancer
    externalAccess.service.port=27017
    externalAccess.service.loadBalancerIPs[0]='EXTERNAL-IP-ADDRESS-1'
    externalAccess.service.loadBalancerIPs[1]='EXTERNAL-IP-ADDRESS-2'
    

    NOTE: This option requires knowing the load balancer IP addresses, so that each MongoDB® node's advertised hostname is configured with it.

Use NodePort services

Manually specify the node ports to use. An example deployment configuration is shown below, with the placeholder NODE-PORT-X used in place of the node ports:

architecture=replicaset
replicaCount=2
externalAccess.enabled=true
externalAccess.service.type=NodePort
externalAccess.service.nodePorts[0]='NODE-PORT-1'
externalAccess.service.nodePorts[1]='NODE-PORT-2'

NOTE: This option requires knowing the node ports that will be exposed, so each MongoDB® node's advertised hostname is configured with it.

The pod will try to get the external IP address of the node using the command curl -s https://ipinfo.io/ip unless the externalAccess.service.domain parameter is set.
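For example, to skip the IP lookup and advertise a fixed domain instead (the domain below is a placeholder):

externalAccess.service.domain=mydomain.example.com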

Bootstrapping with an External Cluster

This chart is equipped with the ability to bring online a set of Pods that connect to an existing MongoDB(®) deployment that lies outside of Kubernetes. This effectively creates a hybrid MongoDB(®) Deployment where both Pods in Kubernetes and Instances such as Virtual Machines can partake in a single MongoDB(®) Deployment. This is helpful in situations where one may be migrating MongoDB(®) from Virtual Machines into Kubernetes, for example. To take advantage of this, use the following as an example configuration:

externalAccess:
  externalMaster:
    enabled: true
    host: external-mongodb-0.internal

:warning: To bootstrap MongoDB(®) with an external master that lies outside of Kubernetes, be sure to set up external access using any of the suggested methods in this chart to have connectivity between the MongoDB(®) members. :warning:

Add extra environment variables

To add extra environment variables (useful for advanced operations like custom init scripts), use the extraEnvVars property.

extraEnvVars:
  - name: LOG_LEVEL
    value: error

Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the extraEnvVarsCM or the extraEnvVarsSecret properties.
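For example, assuming a ConfigMap named mongodb-extra-env already exists in the release namespace and contains the desired environment variables (the name is hypothetical):

extraEnvVarsCM: mongodb-extra-env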

Use Sidecars and Init Containers

If additional containers are needed in the same pod (such as additional metrics or logging exporters), they can be defined using the sidecars config parameter.

sidecars:
- name: your-image-name
  image: your-image
  imagePullPolicy: Always
  ports:
  - name: portname
    containerPort: 1234

If these sidecars export extra ports, extra port definitions can be added using the service.extraPorts parameter (where available), as shown in the example below:

service:
  extraPorts:
  - name: extraPort
    port: 11311
    targetPort: 11311

NOTE: This Helm chart already includes sidecar containers for the Prometheus exporters (where applicable). These can be activated by setting metrics.enabled=true (for example, by adding --set metrics.enabled=true at deployment time). The sidecars parameter should therefore only be used for any extra sidecar containers.

If additional init containers are needed in the same pod, they can be defined using the initContainers parameter. Here is an example:

initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Learn more about sidecar containers and init containers.

Persistence

The Bitnami MongoDB(®) image stores the MongoDB(®) data and configurations at the /bitnami/mongodb path of the container.

The chart mounts a Persistent Volume at this location. The volume is created using dynamic volume provisioning.

If you encounter errors when working with persistent volumes, refer to our troubleshooting guide for persistent volumes.

Backup and restore MongoDB(R) deployments

Two different approaches are available to back up and restore Bitnami MongoDB® Helm chart deployments on Kubernetes:

  • Back up the data from the source deployment and restore it in a new deployment using MongoDB® built-in backup/restore tools.
  • Back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool.

Method 1: Backup and restore data using MongoDB® built-in tools

This method involves the following steps:

  • Use the mongodump tool to create a snapshot of the data in the source cluster.
  • Create a new MongoDB® Cluster deployment and forward the MongoDB® Cluster service port for the new deployment.
  • Restore the data using the mongorestore tool to import the backup to the new cluster.

NOTE: Under this approach, it is important to create the new deployment on the destination cluster using the same credentials as the original deployment on the source cluster.
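A minimal sketch of these steps is shown below, assuming the chart's default release naming and root credentials (adapt the service name, credentials and paths to your deployment):

# Forward the service port of the source deployment
kubectl port-forward svc/my-release-mongodb 27017:27017 &

# Create a snapshot of the data with mongodump
mongodump --host 127.0.0.1 --port 27017 --username root \
  --password $MONGODB_ROOT_PASSWORD --authenticationDatabase admin --out ./backup

# After deploying the new cluster and forwarding its service port,
# import the backup with mongorestore
mongorestore --host 127.0.0.1 --port 27017 --username root \
  --password $MONGODB_ROOT_PASSWORD --authenticationDatabase admin ./backup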

Method 2: Back up and restore persistent data volumes

This method involves copying the persistent data volumes for the MongoDB® nodes and reusing them in a new deployment with Velero, an open source Kubernetes backup/restore tool. It is only suitable when the new deployment can be created with the same chart, deployment name, credentials and other parameters as the original, so that it can reuse the restored volumes.

This method involves the following steps:

  • Install Velero on the source and destination clusters.
  • Use Velero to back up the PersistentVolumes (PVs) used by the deployment on the source cluster.
  • Use Velero to restore the backed-up PVs on the destination cluster.
  • Create a new deployment on the destination cluster with the same chart, deployment name, credentials and other parameters as the original. This new deployment will use the restored PVs and hence the original data.

Refer to our detailed tutorial on backing up and restoring MongoDB® chart deployments on Kubernetes, which covers both these approaches, for more information.

Use custom Prometheus rules

Custom Prometheus rules can be defined for the Prometheus Operator by using the prometheusRule parameter. A basic configuration example is shown below:

    metrics:
      enabled: true
      prometheusRule:
        enabled: true
        rules:
        - name: rule1
          rules:
          - alert: HighRequestLatency
            expr: job:request_latency_seconds:mean5m{job="myjob"} > 0.5
            for: 10m
            labels:
              severity: page
            annotations:
              summary: High request latency

Enable SSL/TLS

This chart supports enabling SSL/TLS between nodes in the cluster, as well as between MongoDB(®) clients and nodes, by setting the MONGODB_EXTRA_FLAGS and MONGODB_CLIENT_EXTRA_FLAGS container environment variables, together with the correct MONGODB_ADVERTISED_HOSTNAME. To enable full TLS encryption, set the tls.enabled parameter to true.
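For example, a minimal sketch of a deployment with full TLS encryption enabled (substitute the registry and repository placeholders as usual):

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb \
  --set architecture=replicaset \
  --set tls.enabled=true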

Generate the self-signed certificates via pre-install Helm hooks

The secrets-ca.yaml file utilizes the Helm "pre-install" hook to ensure that the certificates will only be generated on chart install.

The genCA() function creates a new self-signed x509 certificate authority. The genSignedCert() function creates an object containing the certificate and key, which are base64-encoded and embedded in the YAML manifest. genSignedCert() is passed the CN, an empty IP list (the nil value), the validity period and the CA created previously.

A Kubernetes Secret is used to hold the signed certificate created above, and the initContainer sets up the rest. Using Helm's hook annotations ensures that the certificates will only be generated on chart install. This will prevent overriding the certificates if the chart is upgraded.
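The sketch below illustrates this pattern using Helm's built-in template functions; it is a simplified illustration, not the chart's actual secrets-ca.yaml (the names and validity period are examples only):

{{- $ca := genCA "mongodb-ca" 365 }}
{{- $cert := genSignedCert "my-release-mongodb" nil nil 365 $ca }}
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-ca
  annotations:
    helm.sh/hook: pre-install
type: Opaque
data:
  mongodb-ca-cert: {{ b64enc $ca.Cert }}
  mongodb-ca-key: {{ b64enc $ca.Key }}
  mongodb-cert: {{ b64enc $cert.Cert }}
  mongodb-key: {{ b64enc $cert.Key }}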

Use your own CA

To use your own CA, set tls.caCert and tls.caKey with appropriate base64 encoded data. The secrets-ca.yaml file will utilize this data to create the Secret.

NOTE: Currently, only RSA private keys are supported.
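For example, assuming local files ca.crt and ca.key containing the PEM-encoded CA certificate and RSA key (the file names are placeholders; on macOS, use base64 without the -w0 flag):

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb \
  --set tls.enabled=true \
  --set-string tls.caCert="$(base64 -w0 ca.crt)" \
  --set-string tls.caKey="$(base64 -w0 ca.key)"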

Access the cluster

To access the cluster, enable the init container that generates the MongoDB(®) server/client PEM key needed to access the cluster. Be sure to replace the $my_hostname placeholder with your actual hostname, and include in the alternative hostnames section any hostnames that should be allowed to access the MongoDB(®) replicaset. Additionally, if external access is enabled, the load balancer IP addresses are added to the alternative names list.

NOTE: You will be generating self-signed certificates for the MongoDB(®) deployment. The init container generates a new MongoDB(®) private key which will be used to create a Certificate Authority (CA) and the public certificate for the CA. The Certificate Signing Request will be created as well and signed using the private key of the CA previously created. Finally, the PEM bundle will be created using the private key and public certificate. This process will be repeated for each node in the cluster.

Start the cluster

After the certificates have been generated and made available to the containers at the correct mount points, the MongoDB(®) server will be started with TLS enabled. The options for the TLS mode will be one of disabled, allowTLS, preferTLS, or requireTLS. This value can be changed via the MONGODB_EXTRA_FLAGS field using the tlsMode parameter. The client should now be able to connect to the TLS-enabled cluster with the provided certificates.
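For example, a sketch of how the TLS mode could be changed to requireTLS via the extraEnvVars property (note that overriding MONGODB_EXTRA_FLAGS directly may replace flags the chart sets on its own):

extraEnvVars:
  - name: MONGODB_EXTRA_FLAGS
    value: "--tlsMode=requireTLS"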

Set Pod affinity

This chart allows you to set your custom affinity using the XXX.affinity parameter(s). Find more information about Pod affinity in the Kubernetes documentation.

As an alternative, you can use the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the bitnami/common chart. To do so, set the XXX.podAffinityPreset, XXX.podAntiAffinityPreset, or XXX.nodeAffinityPreset parameters.
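For example, to require that replicaset members are scheduled on different nodes, a values snippet like the following could be used (assuming the default pod anti-affinity parameter naming):

podAntiAffinityPreset: hard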

Troubleshooting

Find more information about how to deal with common errors related to Bitnami's Helm charts in this troubleshooting guide.

Upgrading

If authentication is enabled, it is necessary to set auth.rootPassword (and also auth.replicaSetKey when using the replicaset architecture) when upgrading so that the readiness/liveness probes work properly. When you install this chart for the first time, some notes will be displayed providing the credentials you must use under the 'Credentials' section. Note down the password and run the command below to upgrade your chart:

helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb --set auth.rootPassword=[PASSWORD] (--set auth.replicaSetKey=[REPLICASETKEY])

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Note: You need to substitute the placeholders [PASSWORD] and [REPLICASETKEY] with the values obtained in the installation notes.

To 12.0.0

This major release renames several values in this chart and adds missing features, in order to be in line with the rest of the assets in the Bitnami charts repository.

Affected values:

  • strategyType is replaced by updateStrategy
  • service.port is renamed to service.ports.mongodb
  • service.nodePort is renamed to service.nodePorts.mongodb
  • externalAccess.service.port is renamed to externalAccess.service.ports.mongodb
  • rbac.role.rules is renamed to rbac.rules
  • externalAccess.hidden.service.port is renamed to externalAccess.hidden.service.ports.mongodb
  • hidden.strategyType is replaced by hidden.updateStrategy
  • metrics.serviceMonitor.relabellings is renamed to metrics.serviceMonitor.relabelings (typo fixed)
  • metrics.serviceMonitor.additionalLabels is renamed to metrics.serviceMonitor.labels

Additionally, this version updates the MongoDB image dependency to its newest major version, 5.0.

To 11.0.0

In this version, the mongodb-exporter bundled as part of this Helm chart was updated to a new version which, even though it is not a major change, can contain breaking changes (from 0.11.X to 0.30.X). Please review the release notes of the upstream project at https://github.com/percona/mongodb_exporter/releases

To 10.0.0

On November 13, 2020, Helm v2 support formally ended. This major version incorporates the changes required to adopt the features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

To 9.0.0

MongoDB(®) container images were updated to 4.4.x, which can affect compatibility with older versions of MongoDB(®). Refer to the following guides to upgrade your applications:

To 8.0.0

  • The architecture used to configure MongoDB(®) as a replicaset was completely refactored. Now, both primary and secondary nodes are part of the same StatefulSet.
  • Chart labels were adapted to follow the Helm charts best practices.
  • This version introduces bitnami/common, a library chart, as a dependency. More documentation about this new utility can be found here. Please make sure that you have updated the chart dependencies before executing any upgrade.
  • Several parameters were renamed or removed in favor of new ones in this major version. These are the most important ones:
    • replicas is renamed to replicaCount.
    • Authentication parameters are reorganized under the auth.* parameter:
      • usePassword is renamed to auth.enabled.
      • mongodbRootPassword, mongodbUsername, mongodbPassword, mongodbDatabase, and replicaSet.key are now auth.rootPassword, auth.username, auth.password, auth.database, and auth.replicaSetKey respectively.
    • securityContext.* is deprecated in favor of podSecurityContext and containerSecurityContext.
    • Parameters prefixed with mongodb are renamed by removing the prefix. E.g. mongodbEnableIPv6 is renamed to enableIPv6.
    • Parameters affecting Arbiter nodes are reorganized under the arbiter.* parameter.

Consequences:

  • Backwards compatibility is not guaranteed. To upgrade to 8.0.0, install a new release of the MongoDB(®) chart and migrate your data by creating a backup of the database and restoring it on the new release.

To 7.0.0

From this version, the way of setting the ingress rules has changed. Instead of using ingress.paths and ingress.hosts as separate objects, you should now define the rules as objects inside the ingress.hosts value, for example:

ingress:
  hosts:
    - name: mongodb.local
      path: /

To 6.0.0

From this version, mongodbEnableIPv6 is set to false by default in order to work properly in most Kubernetes clusters. If you want to use IPv6 support, set this variable to true by adding --set mongodbEnableIPv6=true to your Helm command. You can find more information in the bitnami/mongodb image README.

To 5.0.0

When enabling replicaset configuration, backwards compatibility is not guaranteed unless you modify the labels used on the chart's StatefulSets. Use the workaround below to upgrade from versions prior to 5.0.0. The following example assumes that the release name is my-release:

kubectl delete statefulset my-release-mongodb-arbiter my-release-mongodb-primary my-release-mongodb-secondary --cascade=false

Add extra deployment options

To add extra deployments (useful for advanced features like sidecars), use the extraDeploy property.

The example below shows how to use extraDeploy with a MongoDB replica set pod labeler sidecar, which identifies the primary pod and dynamically labels it as the primary node:

extraDeploy:
  - apiVersion: v1
    kind: Service
    metadata:
      name: mongodb-primary
      namespace: default
      labels:
        app.kubernetes.io/component: mongodb
        app.kubernetes.io/instance: mongodb
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: mongodb
    spec:
      type: NodePort
      externalTrafficPolicy: Cluster
      ports:
        - name: mongodb-primary
          port: 30001
          nodePort: 30001
          protocol: TCP
          targetPort: mongodb
      selector:
        app.kubernetes.io/component: mongodb
        app.kubernetes.io/instance: mongodb
        app.kubernetes.io/name: mongodb
        primary: "true"

License

Copyright © 2024 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.