
Prometheus

Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.

TL;DR

$ helm install stable/prometheus

Introduction

This chart bootstraps a Prometheus deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.3+ with Beta APIs enabled

Installing the Chart

To install the chart with the release name my-release:

$ helm install --name my-release stable/prometheus

The command deploys Prometheus on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Prometheus 2.x

Prometheus version 2.x has made changes to the Alertmanager integration, storage, and recording rules. Check out the Prometheus 2.0 migration guide for details.

Users of this chart will need to update their alerting rules to the new format before they can upgrade.
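
For illustration, a rule in the 2.x YAML format, supplied via the serverFiles.alerting_rules.yml value (listed in the configuration table below), might look like the following. The group name, alert name, and expression are placeholders, not part of the chart:

serverFiles:
  alerting_rules.yml:
    groups:
      - name: example            # illustrative group name
        rules:
          - alert: InstanceDown  # illustrative alert
            expr: up == 0
            for: 5m
            labels:
              severity: page
            annotations:
              summary: Instance down for 5 minutes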

Upgrading from previous chart versions

Version 9.0 adds a new option to enable or disable the Prometheus Server. This supports the use case of running a Prometheus server in one k8s cluster and scraping exporters in another cluster while using the same chart for each deployment. To install the server, server.enabled must be set to true.
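
For example, the cluster that only runs exporters might be deployed with values like the following, a minimal sketch in which everything else is left at its defaults. The cluster that runs the server keeps server.enabled at its default of true:

# values for the exporter-only cluster
server:
  enabled: false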

As of version 5.0, this chart uses Prometheus 2.x. This version of prometheus introduces a new data format and is not compatible with prometheus 1.x. It is recommended to install this as a new release, as updating existing releases will not work. See the prometheus docs for instructions on retaining your old data.

Example migration

Assume you have an existing release of the prometheus chart named prometheus-old. To update to prometheus 2.x while keeping your old data, do the following (example commands are shown after the two steps):

  1. Update the prometheus-old release. Disable scraping on every component besides the prometheus server, similar to the configuration below:

        alertmanager:
          enabled: false
        alertmanagerFiles:
          alertmanager.yml: ""
        kubeStateMetrics:
          enabled: false
        nodeExporter:
          enabled: false
        pushgateway:
          enabled: false
        server:
          extraArgs:
            storage.local.retention: 720h
        serverFiles:
          alerts: ""
          prometheus.yml: ""
          rules: ""
  2. Deploy a new release of the chart with version 5.0+ using prometheus 2.x. In the values.yaml set the scrape config as usual, and also add the prometheus-old instance as a remote-read target.

        prometheus.yml:
          ...
          remote_read:
          - url: http://prometheus-old/api/v1/read
          ...

    Old data will be available when you query the new prometheus instance.
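
Assuming the Helm 2 CLI used elsewhere in this README, the two steps above might be applied as shown below; the release name prometheus-new and both values file names are placeholders:

$ helm upgrade prometheus-old stable/prometheus -f prometheus-old-values.yaml
$ helm install --name prometheus-new stable/prometheus -f prometheus-new-values.yaml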

Scraping Pod Metrics via Annotations

This chart ships a default configuration that causes prometheus to scrape a variety of kubernetes resource types, provided they have the correct annotations. This section describes how to configure pods to be scraped. To see how other resource types can be scraped, run helm template to get the kubernetes resource definitions, then compare the prometheus configuration in the ConfigMap against the prometheus documentation for relabel_config and kubernetes_sd_config.

In order to get prometheus to scrape pods, you must add annotations to the pods as below:

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
spec:
...

You should adjust prometheus.io/path based on the URL that your pod serves metrics from. prometheus.io/port should be set to the port that your pod serves metrics from. Note that the values for prometheus.io/scrape and prometheus.io/port must be enclosed in double quotes.
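
The annotations must appear on the pod template, not only on the owning controller. A minimal sketch of a Deployment whose pods would be scraped, with an illustrative app name, image, and port:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0      # illustrative image
          ports:
            - containerPort: 8080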

Configuration

The following table lists the configurable parameters of the Prometheus chart and their default values.

Parameter | Description | Default
alertmanager.enabled | If true, create alertmanager | true
alertmanager.name | alertmanager container name | alertmanager
alertmanager.image.repository | alertmanager container image repository | prom/alertmanager
alertmanager.image.tag | alertmanager container image tag | v0.20.0
alertmanager.image.pullPolicy | alertmanager container image pull policy | IfNotPresent
alertmanager.prefixURL | The prefix slug at which the server can be accessed | ``
alertmanager.baseURL | The external url at which the server can be accessed | "http://localhost:9093"
alertmanager.extraArgs | Additional alertmanager container arguments | {}
alertmanager.extraSecretMounts | Additional alertmanager Secret mounts | []
alertmanager.configMapOverrideName | Prometheus alertmanager ConfigMap override where full-name is {{.Release.Name}}-{{.Values.alertmanager.configMapOverrideName}}; setting this value prevents the default alertmanager ConfigMap from being generated | ""
alertmanager.configFromSecret | The name of a secret in the same kubernetes namespace which contains the Alertmanager config; setting this value prevents the default alertmanager ConfigMap from being generated | ""
alertmanager.configFileName | The configuration file name to be loaded to alertmanager. Must match the key within configuration loaded from ConfigMap/Secret. | alertmanager.yml
alertmanager.ingress.enabled | If true, alertmanager Ingress will be created | false
alertmanager.ingress.annotations | alertmanager Ingress annotations | {}
alertmanager.ingress.extraLabels | alertmanager Ingress additional labels | {}
alertmanager.ingress.hosts | alertmanager Ingress hostnames | []
alertmanager.ingress.extraPaths | Ingress extra paths to prepend to every alertmanager host configuration. Useful when configuring custom actions with AWS ALB Ingress Controller | []
alertmanager.ingress.tls | alertmanager Ingress TLS configuration (YAML) | []
alertmanager.nodeSelector | node labels for alertmanager pod assignment | {}
alertmanager.tolerations | node taints to tolerate (requires Kubernetes >=1.6) | []
alertmanager.affinity | pod affinity | {}
alertmanager.podDisruptionBudget.enabled | If true, create a PodDisruptionBudget | false
alertmanager.podDisruptionBudget.maxUnavailable | Maximum unavailable instances in PDB | 1
alertmanager.schedulerName | alertmanager alternate scheduler name | nil
alertmanager.persistentVolume.enabled | If true, alertmanager will create a Persistent Volume Claim | true
alertmanager.persistentVolume.accessModes | alertmanager data Persistent Volume access modes | [ReadWriteOnce]
alertmanager.persistentVolume.annotations | Annotations for alertmanager Persistent Volume Claim | {}
alertmanager.persistentVolume.existingClaim | alertmanager data Persistent Volume existing claim name | ""
alertmanager.persistentVolume.mountPath | alertmanager data Persistent Volume mount root path | /data
alertmanager.persistentVolume.size | alertmanager data Persistent Volume size | 2Gi
alertmanager.persistentVolume.storageClass | alertmanager data Persistent Volume Storage Class | unset
alertmanager.persistentVolume.volumeBindingMode | alertmanager data Persistent Volume Binding Mode | unset
alertmanager.persistentVolume.subPath | Subdirectory of alertmanager data Persistent Volume to mount | ""
alertmanager.podAnnotations | annotations to be added to alertmanager pods | {}
alertmanager.podLabels | labels to be added to Prometheus AlertManager pods | {}
alertmanager.podSecurityPolicy.annotations | Specify pod annotations in the pod security policy | {}
alertmanager.replicaCount | desired number of alertmanager pods | 1
alertmanager.statefulSet.enabled | If true, use a statefulset instead of a deployment for pod management | false
alertmanager.statefulSet.podManagementPolicy | podManagementPolicy of alertmanager pods | OrderedReady
alertmanager.statefulSet.headless.annotations | annotations for alertmanager headless service | {}
alertmanager.statefulSet.headless.labels | labels for alertmanager headless service | {}
alertmanager.statefulSet.headless.enableMeshPeer | If true, enable the mesh peer endpoint for the headless service | false
alertmanager.statefulSet.headless.servicePort | alertmanager headless service port | 80
alertmanager.priorityClassName | alertmanager priorityClassName | nil
alertmanager.resources | alertmanager pod resource requests & limits | {}
alertmanager.securityContext | Custom security context for Alert Manager containers | {}
alertmanager.service.annotations | annotations for alertmanager service | {}
alertmanager.service.clusterIP | internal alertmanager cluster service IP | ""
alertmanager.service.externalIPs | alertmanager service external IP addresses | []
alertmanager.service.loadBalancerIP | IP address to assign to load balancer (if supported) | ""
alertmanager.service.loadBalancerSourceRanges | list of IP CIDRs allowed access to load balancer (if supported) | []
alertmanager.service.servicePort | alertmanager service port | 80
alertmanager.service.sessionAffinity | Session Affinity for alertmanager service, can be None or ClientIP | None
alertmanager.service.type | type of alertmanager service to create | ClusterIP
alertmanager.strategy | Deployment strategy | { "type": "RollingUpdate" }
alertmanagerFiles.alertmanager.yml | Prometheus alertmanager configuration | example configuration
configmapReload.prometheus.enabled | If false, the configmap-reload container for Prometheus will not be deployed | true
configmapReload.prometheus.name | configmap-reload container name | configmap-reload
configmapReload.prometheus.image.repository | configmap-reload container image repository | jimmidyson/configmap-reload
configmapReload.prometheus.image.tag | configmap-reload container image tag | v0.3.0
configmapReload.prometheus.image.pullPolicy | configmap-reload container image pull policy | IfNotPresent
configmapReload.prometheus.extraArgs | Additional configmap-reload container arguments | {}
configmapReload.prometheus.extraVolumeDirs | Additional configmap-reload volume directories | {}
configmapReload.prometheus.extraConfigmapMounts | Additional configmap-reload configMap mounts | []
configmapReload.prometheus.resources | configmap-reload pod resource requests & limits | {}
configmapReload.alertmanager.enabled | If false, the configmap-reload container for AlertManager will not be deployed | true
configmapReload.alertmanager.name | configmap-reload container name | configmap-reload
configmapReload.alertmanager.image.repository | configmap-reload container image repository | jimmidyson/configmap-reload
configmapReload.alertmanager.image.tag | configmap-reload container image tag | v0.3.0
configmapReload.alertmanager.image.pullPolicy | configmap-reload container image pull policy | IfNotPresent
configmapReload.alertmanager.extraArgs | Additional configmap-reload container arguments | {}
configmapReload.alertmanager.extraVolumeDirs | Additional configmap-reload volume directories | {}
configmapReload.alertmanager.extraConfigmapMounts | Additional configmap-reload configMap mounts | []
configmapReload.alertmanager.resources | configmap-reload pod resource requests & limits | {}
initChownData.enabled | If false, don't reset data ownership at startup | true
initChownData.name | init-chown-data container name | init-chown-data
initChownData.image.repository | init-chown-data container image repository | busybox
initChownData.image.tag | init-chown-data container image tag | latest
initChownData.image.pullPolicy | init-chown-data container image pull policy | IfNotPresent
initChownData.resources | init-chown-data pod resource requests & limits | {}
kubeStateMetrics.enabled | If true, create kube-state-metrics sub-chart; see the kube-state-metrics chart for configuration options | true
kube-state-metrics | kube-state-metrics configuration options | Same as sub-chart's
nodeExporter.enabled | If true, create node-exporter | true
nodeExporter.name | node-exporter container name | node-exporter
nodeExporter.image.repository | node-exporter container image repository | prom/node-exporter
nodeExporter.image.tag | node-exporter container image tag | v0.18.1
nodeExporter.image.pullPolicy | node-exporter container image pull policy | IfNotPresent
nodeExporter.extraArgs | Additional node-exporter container arguments | {}
nodeExporter.extraInitContainers | Init containers to launch alongside the node-exporter | []
nodeExporter.extraHostPathMounts | Additional node-exporter hostPath mounts | []
nodeExporter.extraConfigmapMounts | Additional node-exporter configMap mounts | []
nodeExporter.hostNetwork | If true, node-exporter pods share the host network namespace | true
nodeExporter.hostPID | If true, node-exporter pods share the host PID namespace | true
nodeExporter.nodeSelector | node labels for node-exporter pod assignment | {}
nodeExporter.podAnnotations | annotations to be added to node-exporter pods | {}
nodeExporter.pod.labels | labels to be added to node-exporter pods | {}
nodeExporter.podDisruptionBudget.enabled | If true, create a PodDisruptionBudget | false
nodeExporter.podDisruptionBudget.maxUnavailable | Maximum unavailable instances in PDB | 1
nodeExporter.podSecurityPolicy.annotations | Specify pod annotations in the pod security policy | {}
nodeExporter.podSecurityPolicy.enabled | Specify if a Pod Security Policy for node-exporter must be created | false
nodeExporter.tolerations | node taints to tolerate (requires Kubernetes >=1.6) | []
nodeExporter.priorityClassName | node-exporter priorityClassName | nil
nodeExporter.resources | node-exporter resource requests and limits (YAML) | {}
nodeExporter.securityContext | securityContext for containers in pod | {}
nodeExporter.service.annotations | annotations for node-exporter service | {prometheus.io/scrape: "true"}
nodeExporter.service.clusterIP | internal node-exporter cluster service IP | None
nodeExporter.service.externalIPs | node-exporter service external IP addresses | []
nodeExporter.service.hostPort | node-exporter service host port | 9100
nodeExporter.service.loadBalancerIP | IP address to assign to load balancer (if supported) | ""
nodeExporter.service.loadBalancerSourceRanges | list of IP CIDRs allowed access to load balancer (if supported) | []
nodeExporter.service.servicePort | node-exporter service port | 9100
nodeExporter.service.type | type of node-exporter service to create | ClusterIP
podSecurityPolicy.enabled | If true, create & use pod security policies resources | false
pushgateway.enabled | If true, create pushgateway | true
pushgateway.name | pushgateway container name | pushgateway
pushgateway.image.repository | pushgateway container image repository | prom/pushgateway
pushgateway.image.tag | pushgateway container image tag | v1.0.1
pushgateway.image.pullPolicy | pushgateway container image pull policy | IfNotPresent
pushgateway.extraArgs | Additional pushgateway container arguments | {}
pushgateway.extraInitContainers | Init containers to launch alongside the pushgateway | []
pushgateway.ingress.enabled | If true, pushgateway Ingress will be created | false
pushgateway.ingress.annotations | pushgateway Ingress annotations | {}
pushgateway.ingress.hosts | pushgateway Ingress hostnames | []
pushgateway.ingress.extraPaths | Ingress extra paths to prepend to every pushgateway host configuration. Useful when configuring custom actions with AWS ALB Ingress Controller | []
pushgateway.ingress.tls | pushgateway Ingress TLS configuration (YAML) | []
pushgateway.nodeSelector | node labels for pushgateway pod assignment | {}
pushgateway.podAnnotations | annotations to be added to pushgateway pods | {}
pushgateway.podSecurityPolicy.annotations | Specify pod annotations in the pod security policy | {}
pushgateway.tolerations | node taints to tolerate (requires Kubernetes >=1.6) | []
pushgateway.replicaCount | desired number of pushgateway pods | 1
pushgateway.podDisruptionBudget.enabled | If true, create a PodDisruptionBudget | false
pushgateway.podDisruptionBudget.maxUnavailable | Maximum unavailable instances in PDB | 1
pushgateway.schedulerName | pushgateway alternate scheduler name | nil
pushgateway.persistentVolume.enabled | If true, Prometheus pushgateway will create a Persistent Volume Claim | false
pushgateway.persistentVolume.accessModes | Prometheus pushgateway data Persistent Volume access modes | [ReadWriteOnce]
pushgateway.persistentVolume.annotations | Prometheus pushgateway data Persistent Volume annotations | {}
pushgateway.persistentVolume.existingClaim | Prometheus pushgateway data Persistent Volume existing claim name | ""
pushgateway.persistentVolume.mountPath | Prometheus pushgateway data Persistent Volume mount root path | /data
pushgateway.persistentVolume.size | Prometheus pushgateway data Persistent Volume size | 2Gi
pushgateway.persistentVolume.storageClass | Prometheus pushgateway data Persistent Volume Storage Class | unset
pushgateway.persistentVolume.volumeBindingMode | Prometheus pushgateway data Persistent Volume Binding Mode | unset
pushgateway.persistentVolume.subPath | Subdirectory of Prometheus pushgateway data Persistent Volume to mount | ""
pushgateway.priorityClassName | pushgateway priorityClassName | nil
pushgateway.resources | pushgateway pod resource requests & limits | {}
pushgateway.service.annotations | annotations for pushgateway service | {}
pushgateway.service.clusterIP | internal pushgateway cluster service IP | ""
pushgateway.service.externalIPs | pushgateway service external IP addresses | []
pushgateway.service.loadBalancerIP | IP address to assign to load balancer (if supported) | ""
pushgateway.service.loadBalancerSourceRanges | list of IP CIDRs allowed access to load balancer (if supported) | []
pushgateway.service.servicePort | pushgateway service port | 9091
pushgateway.service.type | type of pushgateway service to create | ClusterIP
pushgateway.strategy | Deployment strategy | { "type": "RollingUpdate" }
rbac.create | If true, create & use RBAC resources | true
server.enabled | If false, Prometheus server will not be created | true
server.name | Prometheus server container name | server
server.image.repository | Prometheus server container image repository | prom/prometheus
server.image.tag | Prometheus server container image tag | v2.18.1
server.image.pullPolicy | Prometheus server container image pull policy | IfNotPresent
server.configPath | Path to a prometheus server config file on the container FS | /etc/config/prometheus.yml
server.global.scrape_interval | How frequently to scrape targets by default | 1m
server.global.scrape_timeout | How long until a scrape request times out | 10s
server.global.evaluation_interval | How frequently to evaluate rules | 1m
server.remoteWrite | The remote write feature of Prometheus allows transparently sending samples. | []
server.remoteRead | The remote read feature of Prometheus allows transparently receiving samples. | []
server.extraArgs | Additional Prometheus server container arguments | {}
server.extraFlags | Additional Prometheus server container flags | ["web.enable-lifecycle"]
server.extraInitContainers | Init containers to launch alongside the server | []
server.prefixURL | The prefix slug at which the server can be accessed | ``
server.baseURL | The external url at which the server can be accessed | ``
server.env | Prometheus server environment variables | []
server.extraHostPathMounts | Additional Prometheus server hostPath mounts | []
server.extraConfigmapMounts | Additional Prometheus server configMap mounts | []
server.extraSecretMounts | Additional Prometheus server Secret mounts | []
server.extraVolumeMounts | Additional Prometheus server Volume mounts | []
server.extraVolumes | Additional Prometheus server Volumes | []
server.configMapOverrideName | Prometheus server ConfigMap override where full-name is {{.Release.Name}}-{{.Values.server.configMapOverrideName}}; setting this value prevents the default server ConfigMap from being generated | ""
server.ingress.enabled | If true, Prometheus server Ingress will be created | false
server.ingress.annotations | Prometheus server Ingress annotations | []
server.ingress.extraLabels | Prometheus server Ingress additional labels | {}
server.ingress.hosts | Prometheus server Ingress hostnames | []
server.ingress.extraPaths | Ingress extra paths to prepend to every Prometheus server host configuration. Useful when configuring custom actions with AWS ALB Ingress Controller | []
server.ingress.tls | Prometheus server Ingress TLS configuration (YAML) | []
server.nodeSelector | node labels for Prometheus server pod assignment | {}
server.tolerations | node taints to tolerate (requires Kubernetes >=1.6) | []
server.affinity | pod affinity | {}
server.podDisruptionBudget.enabled | If true, create a PodDisruptionBudget | false
server.podDisruptionBudget.maxUnavailable | Maximum unavailable instances in PDB | 1
server.priorityClassName | Prometheus server priorityClassName | nil
server.schedulerName | Prometheus server alternate scheduler name | nil
server.persistentVolume.enabled | If true, Prometheus server will create a Persistent Volume Claim | true
server.persistentVolume.accessModes | Prometheus server data Persistent Volume access modes | [ReadWriteOnce]
server.persistentVolume.annotations | Prometheus server data Persistent Volume annotations | {}
server.persistentVolume.existingClaim | Prometheus server data Persistent Volume existing claim name | ""
server.persistentVolume.mountPath | Prometheus server data Persistent Volume mount root path | /data
server.persistentVolume.size | Prometheus server data Persistent Volume size | 8Gi
server.persistentVolume.storageClass | Prometheus server data Persistent Volume Storage Class | unset
server.persistentVolume.volumeBindingMode | Prometheus server data Persistent Volume Binding Mode | unset
server.persistentVolume.subPath | Subdirectory of Prometheus server data Persistent Volume to mount | ""
server.emptyDir.sizeLimit | emptyDir sizeLimit if a Persistent Volume is not used | ""
server.podAnnotations | annotations to be added to Prometheus server pods | {}
server.podLabels | labels to be added to Prometheus server pods | {}
server.alertmanagers | Prometheus AlertManager configuration for the Prometheus server | {}
server.deploymentAnnotations | annotations to be added to Prometheus server deployment | {}
server.podSecurityPolicy.annotations | Specify pod annotations in the pod security policy | {}
server.replicaCount | desired number of Prometheus server pods | 1
server.statefulSet.enabled | If true, use a statefulset instead of a deployment for pod management | false
server.statefulSet.annotations | annotations to be added to Prometheus server stateful set | {}
server.statefulSet.labels | labels to be added to Prometheus server stateful set | {}
server.statefulSet.podManagementPolicy | podManagementPolicy of server pods | OrderedReady
server.statefulSet.headless.annotations | annotations for Prometheus server headless service | {}
server.statefulSet.headless.labels | labels for Prometheus server headless service | {}
server.statefulSet.headless.servicePort | Prometheus server headless service port | 80
server.resources | Prometheus server resource requests and limits | {}
server.verticalAutoscaler.enabled | If true, a VPA object will be created for the controller (either StatefulSet or Deployment, based on the configs above) | false
server.securityContext | Custom security context for server containers | {}
server.service.annotations | annotations for Prometheus server service | {}
server.service.clusterIP | internal Prometheus server cluster service IP | ""
server.service.externalIPs | Prometheus server service external IP addresses | []
server.service.loadBalancerIP | IP address to assign to load balancer (if supported) | ""
server.service.loadBalancerSourceRanges | list of IP CIDRs allowed access to load balancer (if supported) | []
server.service.nodePort | Port to be used as the service NodePort (ignored if server.service.type is not NodePort) | 0
server.service.servicePort | Prometheus server service port | 80
server.service.sessionAffinity | Session Affinity for server service, can be None or ClientIP | None
server.service.type | type of Prometheus server service to create | ClusterIP
server.service.gRPC.enabled | If true, open a second port on the service for gRPC | false
server.service.gRPC.servicePort | Prometheus service gRPC port (ignored if server.service.gRPC.enabled is not true) | 10901
server.service.gRPC.nodePort | Port to be used as gRPC nodePort in the prometheus service | 0
server.service.statefulsetReplica.enabled | If true, send the traffic from the service to only one replica of the replicaset | false
server.service.statefulsetReplica.replica | Which replica to send the traffic to | 0
server.hostAliases | /etc/hosts entries in container(s) | []
server.sidecarContainers | array of snippets with your sidecar containers for prometheus server | ""
server.strategy | Deployment strategy | { "type": "RollingUpdate" }
serviceAccounts.alertmanager.create | If true, create the alertmanager service account | true
serviceAccounts.alertmanager.name | name of the alertmanager service account to use or create | {{ prometheus.alertmanager.fullname }}
serviceAccounts.alertmanager.annotations | annotations for the alertmanager service account | {}
serviceAccounts.nodeExporter.create | If true, create the nodeExporter service account | true
serviceAccounts.nodeExporter.name | name of the nodeExporter service account to use or create | {{ prometheus.nodeExporter.fullname }}
serviceAccounts.nodeExporter.annotations | annotations for the nodeExporter service account | {}
serviceAccounts.pushgateway.create | If true, create the pushgateway service account | true
serviceAccounts.pushgateway.name | name of the pushgateway service account to use or create | {{ prometheus.pushgateway.fullname }}
serviceAccounts.pushgateway.annotations | annotations for the pushgateway service account | {}
serviceAccounts.server.create | If true, create the server service account | true
serviceAccounts.server.name | name of the server service account to use or create | {{ prometheus.server.fullname }}
serviceAccounts.server.annotations | annotations for the server service account | {}
server.terminationGracePeriodSeconds | Prometheus server Pod termination grace period | 300
server.retention | (optional) Prometheus data retention | "15d"
serverFiles.alerts | (Deprecated) Prometheus server alerts configuration | {}
serverFiles.rules | (Deprecated) Prometheus server rules configuration | {}
serverFiles.alerting_rules.yml | Prometheus server alerts configuration | {}
serverFiles.recording_rules.yml | Prometheus server rules configuration | {}
serverFiles.prometheus.yml | Prometheus server scrape configuration | example configuration
extraScrapeConfigs | Prometheus server additional scrape configuration | ""
alertRelabelConfigs | Prometheus server alert relabeling configs for H/A prometheus | ""
networkPolicy.enabled | Enable NetworkPolicy | false
forceNamespace | Force resources to be namespaced | null

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install stable/prometheus --name my-release \
    --set server.terminationGracePeriodSeconds=360

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

$ helm install stable/prometheus --name my-release -f values.yaml

Tip: You can use the default values.yaml

Note that you can provide multiple yaml files. This is particularly useful when you have alerts belonging to multiple services in the cluster. For example,

# values.yaml
# ...

# service1-alert.yaml
serverFiles:
  alerts:
    service1:
      - alert: anAlert
      # ...

# service2-alert.yaml
serverFiles:
  alerts:
    service2:
      - alert: anAlert
      # ...

$ helm install stable/prometheus --name my-release -f values.yaml -f service1-alert.yaml -f service2-alert.yaml

RBAC Configuration

Role and RoleBinding resources will be created automatically for the server service.

To set up RBAC manually, set the parameter rbac.create=false and specify the service account to be used for each service by setting the parameters serviceAccounts.{{ component }}.create to false and serviceAccounts.{{ component }}.name to the name of a pre-existing service account.
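
For example, to run the server with a pre-existing service account (my-prometheus-sa is a placeholder name), values such as the following could be used:

rbac:
  create: false
serviceAccounts:
  server:
    create: false
    name: my-prometheus-sa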

Tip: You can refer to the default *-clusterrole.yaml and *-clusterrolebinding.yaml files in templates to customize your own.

ConfigMap Files

AlertManager is configured through alertmanager.yml. This file (and any others listed in alertmanagerFiles) will be mounted into the alertmanager pod.

Prometheus is configured through prometheus.yml. This file (and any others listed in serverFiles) will be mounted into the server pod.
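
For instance, a minimal Alertmanager configuration can be supplied through values; the sketch below routes everything to a do-nothing receiver (the receiver name is illustrative):

alertmanagerFiles:
  alertmanager.yml:
    global: {}
    route:
      receiver: default-receiver
    receivers:
      - name: default-receiver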

Ingress TLS

If your cluster allows automatic creation/retrieval of TLS certificates (e.g. kube-lego), please refer to the documentation for that mechanism.

To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace:

kubectl create secret tls prometheus-server-tls --cert=path/to/tls.cert --key=path/to/tls.key

Include the secret's name, along with the desired hostnames, in the alertmanager/server Ingress TLS section of your custom values.yaml file:

server:
  ingress:
    ## If true, Prometheus server Ingress will be created
    ##
    enabled: true

    ## Prometheus server Ingress hostnames
    ## Must be provided if Ingress is enabled
    ##
    hosts:
      - prometheus.domain.com

    ## Prometheus server Ingress TLS configuration
    ## Secrets must be manually created in the namespace
    ##
    tls:
      - secretName: prometheus-server-tls
        hosts:
          - prometheus.domain.com

NetworkPolicy

Enabling Network Policy for Prometheus will secure connections to Alert Manager and Kube State Metrics by only accepting connections from Prometheus Server. All inbound connections to Prometheus Server are still allowed.

To enable network policy for Prometheus, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set networkPolicy.enabled to true.
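
For example (assuming your cluster already runs a CNI plugin that enforces NetworkPolicy, such as Calico):

$ helm install stable/prometheus --name my-release --set networkPolicy.enabled=true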

If NetworkPolicy is enabled for Prometheus' scrape targets, you may also need to manually create a NetworkPolicy that allows Prometheus to reach them, as sketched below.
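
A minimal sketch of such a policy, assuming a target pod labeled app: my-app serving metrics on port 8080 and server pods carrying this chart's default app: prometheus and component: server labels:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
spec:
  podSelector:
    matchLabels:
      app: my-app              # illustrative target label
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: prometheus
              component: server
      ports:
        - port: 8080           # the target's metrics port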