InfluxDB Helm chart

InfluxDB is an open source time series database with no external dependencies. It's useful for recording metrics and events and for performing analytics.

The InfluxDB Helm chart uses the Helm package manager to bootstrap an InfluxDB StatefulSet and service on a Kubernetes cluster.

Note: If you're using InfluxDB Enterprise, see the InfluxDB Enterprise Helm chart section below.

Prerequisites

  • Helm v2 or later
  • Kubernetes 1.4+
  • (Optional) PersistentVolume (PV) provisioner support in the underlying infrastructure

Install the chart

  1. Add the InfluxData Helm repository:

    helm repo add influxdata https://helm.influxdata.com/
    
  2. Run the following command, providing a name for your release:

    helm upgrade --install my-release influxdata/influxdb
    

    Tip: --install can be shortened to -i.

    This command deploys InfluxDB on the Kubernetes cluster using the default configuration. To find parameters you can configure during installation, see Configure the chart.

    Tip: To view all Helm chart releases, run helm list.

Uninstall the chart

To uninstall the my-release deployment, use the following command:

helm uninstall my-release

This command removes all the Kubernetes components associated with the chart and deletes the release.
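Note that helm uninstall does not delete PersistentVolumeClaims created through the StatefulSet's volumeClaimTemplates. If you also want to remove the data, something like the following works (a sketch; the label selector assumes chart version 4.0.0 or later, which uses the Kubernetes recommended labels):

```shell
# WARNING: this permanently deletes the InfluxDB data volumes.
# The selector below assumes the recommended labels used by chart >= 4.0.0;
# adjust it to match the labels on your PVCs.
kubectl delete pvc -l app.kubernetes.io/instance=my-release
```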

Configure the chart

The following table lists configurable parameters, their descriptions, and their default values stored in values.yaml.

| Parameter | Description | Default |
|-----------|-------------|---------|
| image.repository | Image repository URL | influxdb |
| image.tag | Image tag | 1.8.0-alpine |
| image.pullPolicy | Image pull policy | IfNotPresent |
| image.pullSecrets | Secrets holding the repository credentials used to pull the image | nil |
| serviceAccount.create | Create a service account | true |
| serviceAccount.name | Service account name | "" |
| serviceAccount.annotations | Service account annotations | {} |
| livenessProbe | Health check for pod | {} |
| readinessProbe | Health check for pod | {} |
| startupProbe | Health check for pod | {} |
| service.type | Kubernetes service type | ClusterIP |
| service.loadBalancerIP | User-specified IP address for service type LoadBalancer to use as external IP (if supported) | nil |
| service.externalIPs | User-specified list of externalIPs to add to the service | nil |
| service.externalTrafficPolicy | User-specified external traffic policy | nil |
| persistence.enabled | Boolean to enable or disable persistence | true |
| persistence.existingClaim | An existing PersistentVolumeClaim; ignored if enterprise.enabled=true | nil |
| persistence.storageClass | If set to "-", storageClassName: "" disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set, so the default provisioner is chosen (gp2 on AWS, standard on GKE, AWS & OpenStack) | nil |
| persistence.annotations | Annotations for volumeClaimTemplates | nil |
| persistence.accessMode | Access mode for the volume | ReadWriteOnce |
| persistence.size | Storage size | 8Gi |
| podAnnotations | Annotations for pod | {} |
| podLabels | Labels for pod | {} |
| ingress.enabled | Boolean flag to enable or disable ingress | false |
| ingress.tls | Boolean to enable or disable TLS for ingress. If enabled, provide a secret in ingress.secretName containing the TLS private key and certificate. | false |
| ingress.secretName | Kubernetes secret containing the TLS private key and certificate. Required only if ingress.tls is enabled. | nil |
| ingress.hostname | Hostname for the ingress | influxdb.foobar.com |
| ingress.annotations | Ingress annotations | nil |
| schedulerName | Use an alternate scheduler, e.g. "stork" | nil |
| nodeSelector | Node labels for pod assignment | {} |
| affinity | Affinity for pod assignment | {} |
| tolerations | Tolerations for pod assignment | [] |
| securityContext | securityContext for the pod | {} |
| env | Environment variables for the influxdb container | {} |
| volumes | Volume stanza(s) to be used in the main container | nil |
| mountPoints | volumeMount stanza(s) to be used in the main container | nil |
| extraContainers | Additional containers to be added to the pod | {} |
| config.reporting_disabled | Details | false |
| config.rpc | RPC address for backup and storage | {} |
| config.meta | Details | {} |
| config.data | Details | {} |
| config.coordinator | Details | {} |
| config.retention | Details | {} |
| config.shard_precreation | Details | {} |
| config.monitor | Details | {} |
| config.http | Details | {} |
| config.logging | Details | {} |
| config.subscriber | Details | {} |
| config.graphite | Details | {} |
| config.collectd | Details | {} |
| config.opentsdb | Details | {} |
| config.udp | Details | {} |
| config.continuous_queries | Details | {} |
| config.tls | Details | {} |
| initScripts.enabled | Boolean flag to enable or disable initscripts. If the container finds any files with the extensions .sh or .iql inside the /docker-entrypoint-initdb.d folder, it executes them. The execution order is determined by the shell, usually alphabetical. | false |
| initScripts.scripts | Init scripts | {} |
| backup.enabled | Enable backups; if true, one of the storage providers must be configured | false |
| backup.gcs | Google Cloud Storage config | nil |
| backup.azure | Azure Blob Storage config | nil |
| backup.s3 | Amazon S3 (or compatible) config | nil |
| backup.schedule | Schedule to run jobs, in cron format | 0 0 * * * |
| backup.startingDeadlineSeconds | Deadline in seconds for starting the job if it misses its scheduled time for any reason | nil |
| backup.annotations | Annotations for the backup cronjob | {} |
| backup.podAnnotations | Annotations for backup cronjob pods | {} |
| backup.persistence.enabled | Boolean to enable or disable persistence | false |
| backup.persistence.storageClass | If set to "-", storageClassName: "" disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set, so the default provisioner is chosen (gp2 on AWS, standard on GKE, AWS & OpenStack) | nil |
| backup.persistence.annotations | Annotations for volumeClaimTemplates | nil |
| backup.persistence.accessMode | Access mode for the volume | ReadWriteOnce |
| backup.persistence.size | Storage size | 8Gi |
| backup.resources | Resource requests and limits for backup pods | ephemeral-storage: 8Gi |

To configure the chart, do either of the following:

  • Specify each parameter using the --set key=value[,key=value] argument to helm upgrade --install. For example:

    helm upgrade --install my-release \
      --set persistence.enabled=true,persistence.size=200Gi \
        influxdata/influxdb
    

This command enables persistence and changes the size of the requested data volume to 200Gi.

  • Provide a YAML file that specifies the parameter values while installing the chart. For example, use the following command:

    helm upgrade --install my-release -f values.yaml influxdata/influxdb
    

    Tip: Use the default values.yaml.
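As a starting point, a minimal values.yaml overriding a few of the parameters above might look like this (the hostname and sizes are illustrative):

```yaml
image:
  tag: 1.8.0-alpine
persistence:
  enabled: true
  size: 200Gi
ingress:
  enabled: true
  hostname: influxdb.example.com
```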

For information about running InfluxDB in Docker, see the full image documentation.

InfluxDB Enterprise Helm chart

InfluxDB Enterprise includes features designed for production workloads, including high availability and horizontal scaling. InfluxDB Enterprise requires an InfluxDB Enterprise license.

Configure the InfluxDB Enterprise chart

To enable InfluxDB Enterprise, set the following keys and values in a values file provided to Helm.

| Key | Description | Recommended value |
|-----|-------------|-------------------|
| livenessProbe.initialDelaySeconds | Used to allow enough time to join meta nodes to a cluster | 3600 |
| image.tag | Set to a data image. See https://hub.docker.com/_/influxdb for details | data |
| service.ClusterIP | Use a headless service for StatefulSets | "None" |
| env.name[_HOSTNAME] | Used to provide a unique name.service DNS name for InfluxDB. See values.yaml for an example | valueFrom.fieldRef.fieldPath: metadata.name |
| enterprise.enabled | Create StatefulSets for use with influx-data and influx-meta images | true |
| enterprise.licensekey | License for InfluxDB Enterprise | |
| enterprise.clusterSize | Replicas for the influx StatefulSet | Dependent on license |
| enterprise.meta.image.tag | Set to a meta image. See https://hub.docker.com/_/influxdb for details | meta |
| enterprise.meta.clusterSize | Replicas for the influxdb-meta StatefulSet | 3 |
| enterprise.meta.resources | Resource requests and limits for influxdb-meta pods | See values.yaml |
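Putting these keys together, an enterprise values file might be sketched as follows (the license key is a placeholder, and clusterSize depends on your license):

```yaml
image:
  tag: data
livenessProbe:
  initialDelaySeconds: 3600   # allow time for meta nodes to join
enterprise:
  enabled: true
  licensekey: "<license-key>"  # placeholder, use your own license
  clusterSize: 2               # dependent on license
  meta:
    image:
      tag: meta
    clusterSize: 3
```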

Join pods to InfluxDB Enterprise cluster

Meta and data pods must be joined using the influxd-ctl command found on meta pods. We recommend running influxd-ctl on one and only one meta pod, and joining meta pods together before data pods. From that meta pod, run influxd-ctl add-meta once for each meta pod.

In the following examples, we use the pod names influxdb-meta-0 and influxdb-0 and the service name influxdb.

For example, using the default settings, your script should look something like this:

kubectl exec influxdb-meta-0 -- influxd-ctl add-meta influxdb-meta-0.influxdb-meta:8091

From the same meta pod, for each data pod, run influxd-ctl. With default settings, your script should look something like this:

kubectl exec influxdb-meta-0 -- influxd-ctl add-data influxdb-0.influxdb:8088

When using influxd-ctl, use the appropriate DNS name for your pods, following the naming scheme of pod.service.
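With the default StatefulSet and service names, the join steps can be scripted as a pair of loops. This is a sketch for a three-meta, two-data cluster; adjust the loop ranges to your enterprise.meta.clusterSize and enterprise.clusterSize:

```shell
# Run all joins from a single meta pod (influxdb-meta-0 here).
# Pod DNS names follow the pod.service scheme described above.
for i in 0 1 2; do
  kubectl exec influxdb-meta-0 -- influxd-ctl add-meta "influxdb-meta-${i}.influxdb-meta:8091"
done
for i in 0 1; do
  kubectl exec influxdb-meta-0 -- influxd-ctl add-data "influxdb-${i}.influxdb:8088"
done
```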

Persistence

The InfluxDB image stores data in the /var/lib/influxdb directory in the container.

If persistence is enabled, a Persistent Volume associated with the StatefulSet is provisioned. The volume is created using dynamic volume provisioning. In case of a disruption (for example, a node drain), Kubernetes ensures that the same volume is reattached to the Pod, preventing any data loss. However, when persistence is not enabled, InfluxDB data is stored in an emptyDir volume, so the data is lost if the Pod restarts.

Start with authentication

In values.yaml, change .Values.config.http.auth-enabled to true.

Note: To enforce authentication, InfluxDB requires an admin user to be set up. For details, see Set up authentication.

To handle this set up during startup, enable a job in values.yaml by setting .Values.setDefaultUser.enabled to true.

Make sure to uncomment or configure the job settings after enabling it. If a password is not set, a random password will be generated.

Alternatively, if .Values.setDefaultUser.user.existingSecret is set, the user and password are obtained from an existing Secret; the expected keys are influxdb-user and influxdb-password. Use this variable if you need to check the values.yaml into a repository without exposing your secrets.
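For example, a values fragment wiring authentication to a pre-created Secret might look like this (influxdb-auth is a hypothetical Secret name; the Secret must contain the influxdb-user and influxdb-password keys):

```yaml
config:
  http:
    auth-enabled: true
setDefaultUser:
  enabled: true
  user:
    existingSecret: influxdb-auth  # hypothetical name; holds influxdb-user and influxdb-password
```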

Back up and restore

Before proceeding, please read Backing up and restoring in InfluxDB OSS. While the chart offers backups by means of the backup-cronjob, restores do not fall under the chart's scope today, but they can be achieved with one-off Kubernetes jobs.

Backups

When enabled, the backup-cronjob runs on the configured schedule. You can create a job from the backup cronjob on demand as follows:

kubectl create job --from=cronjobs/influxdb-backup influx-backup-$(date +%Y%m%d%H%M%S)

Backup Storage

The backup process consists of an init-container that writes the backup to a local volume (an emptyDir by default), which is shared with the runtime container that uploads the backup to the configured object store.

In order to avoid filling the node's disk space, it is recommended to set a sufficient ephemeral-storage request or enable persistence, which allocates a PVC.

Furthermore, if no object store provider is available, one can simply use the PVC as the final storage destination when persistence is enabled.
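As an illustration, the following values fragment enables daily backups kept on a PVC, with no object store configured (the schedule and size are examples):

```yaml
backup:
  enabled: true
  schedule: "0 0 * * *"   # daily at midnight
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    size: 8Gi
```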

Restores

It is up to the end user to configure their own one-off restore jobs. Below is an example, which assumes that the backups are stored in GCS and that all databases in the backup already exist and should be restored. Use it as a reference only; configure the init-container, the command of the influxdb-restore container, and both containers' resources to suit your needs.

apiVersion: batch/v1
kind: Job
metadata:
  generateName: influxdb-restore-
  namespace: monitoring
spec:
  template:
    spec:
      volumes:
        - name: backup
          emptyDir: {}
      serviceAccountName: influxdb
      initContainers:
        - name: init-gsutil-cp
          image: google/cloud-sdk:alpine
          command:
            - /bin/sh
          args:
            - "-c"
            - |
              gsutil -m cp -r gs://<PATH TO BACKUP FOLDER>/* /backup
          volumeMounts:
            - name: backup
              mountPath: /backup
          resources:
            requests:
              cpu: 1
              memory: 4Gi
            limits:
              cpu: 2
              memory: 8Gi
      containers:
        - name: influxdb-restore
          image: influxdb:1.7-alpine
          volumeMounts:
            - name: backup
              mountPath: /backup
          command:
            - /bin/sh
          args:
            - "-c"
            - |
              #!/bin/sh
              INFLUXDB_HOST=influxdb.monitoring.svc
              for db in $(influx -host $INFLUXDB_HOST -execute 'SHOW DATABASES' | tail -n +5); do
                influxd restore -host $INFLUXDB_HOST:8088 -portable -db "$db" -newdb "$db"_bak /backup
              done
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
      restartPolicy: OnFailure

At this point, the data from the new <db name>_bak databases must be side loaded into the original databases. Please see the InfluxDB documentation for more restore examples.
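One way to side load the data is a SELECT ... INTO query per database, following the approach in the InfluxDB restore documentation (mydb is a placeholder database name; run this against your InfluxDB service):

```shell
# Copy all measurements from the restored mydb_bak back into mydb.
# GROUP BY * preserves tags as tags rather than converting them to fields.
influx -host influxdb.monitoring.svc -execute \
  'SELECT * INTO "mydb".autogen.:MEASUREMENT FROM "mydb_bak".autogen./.*/ GROUP BY *'
```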

Mounting Extra Volumes

Extra volumes can be mounted by providing the volumes and mountPoints keys, consistent with the behavior of other charts provided by InfluxData.

volumes:
- name: ssl-cert-volume
  secret:
    secretName: secret-name
mountPoints:
- name: ssl-cert-volume
  mountPath: /etc/ssl/certs/selfsigned/
  readOnly: true

Upgrading

From < 1.0.0 to >= 1.0.0

Values .Values.config.bind_address and .Values.exposeRpc no longer exist. They have been replaced with .Values.config.rpc.bind_address and .Values.config.rpc.enabled respectively. Please adjust your values file accordingly.
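For example, a pre-1.0.0 values fragment and its post-1.0.0 equivalent (the bind address shown is illustrative):

```yaml
# Before 1.0.0:
# config:
#   bind_address: ":8088"
# exposeRpc: true

# From 1.0.0 on:
config:
  rpc:
    bind_address: ":8088"
    enabled: true
```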

From < 1.5.0 to >= 2.0.0

The Kubernetes API change to support 1.16 may not be backwards compatible and may require the chart to be uninstalled in order to upgrade. See this issue for some background.

From < 3.0.0 to >= 3.0.0

Since version 3.0.0, this chart uses a StatefulSet instead of a Deployment. As part of this update, the existing persistent volume (and all data) is deleted and a new one is created. Make sure to back up and restore the data manually.

From < 4.0.0 to >= 4.0.0

Labels are changed in accordance with the Kubernetes recommended labels. This change also removes the ability to configure the clusterIP value, to avoid Error: UPGRADE FAILED: failed to replace object: Service "my-influxdb" is invalid: spec.clusterIP: Invalid value: "": field is immutable errors. For more information on this error and why it's important to avoid it, please see this GitHub issue.

Due to the significance of the changes, we recommend uninstalling and reinstalling the chart. Although the PVC shouldn't be deleted during this process, we highly recommend backing up your data beforehand.

Check out our Slack channel for support and information.