oauth2-proxy

oauth2-proxy is a reverse proxy and static file server that provides authentication using Providers (Google, GitHub, and others) to validate accounts by email, domain or group.

TL;DR;

$ helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
$ helm install my-release oauth2-proxy/oauth2-proxy

Introduction

This chart bootstraps an oauth2-proxy deployment on a Kubernetes cluster using the Helm package manager.

Installing the Chart

To install the chart with the release name my-release:

$ helm install my-release oauth2-proxy/oauth2-proxy

The command deploys oauth2-proxy on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm uninstall my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Upgrading an existing Release to a new major version

A major chart version change (for example v1.2.3 -> v2.0.0) indicates an incompatible, breaking change that requires manual action.

To 1.0.0

This version upgrades oauth2-proxy to v4.0.0. Please see the changelog in order to upgrade.

To 2.0.0

Version 2.0.0 of this chart introduces support for Kubernetes v1.16.x by way of addressing the deprecation of the Deployment object apiVersion apps/v1beta2. See the v1.16 API deprecations page for more information.

Due to this issue there may be errors performing a helm upgrade of this chart from versions earlier than 2.0.0.

To 3.0.0

Version 3.0.0 introduces support for EKS IAM roles for service accounts by adding a managed service account to the chart. This is a breaking change since the service account is enabled by default. To disable this behaviour, set serviceAccount.enabled to false.
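
For example, a minimal values.yaml override that keeps the pre-3.0.0 behaviour by not creating the managed service account could look like this:

# values.yaml -- disable the service account added in chart 3.0.0
serviceAccount:
  enabled: false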

To 4.0.0

Version 4.0.0 adds support for the new Ingress apiVersion networking.k8s.io/v1. Therefore the ingress.extraPaths parameter needs to be updated to the new format. See the v1.22 API deprecations guide for more information.

For the same reason service.port was renamed to service.portNumber.
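
As an illustration, an ingress.extraPaths entry in the networking.k8s.io/v1 backend format, together with the renamed service key, might look like the sketch below (the ssl-redirect/use-annotation action is an AWS ALB-specific assumption, not something the chart requires):

# values.yaml for chart >= 4.0.0 (illustrative values)
service:
  portNumber: 80            # formerly service.port

ingress:
  enabled: true
  extraPaths:
    - path: /*
      pathType: ImplementationSpecific
      backend:
        service:                # networking.k8s.io/v1 style backend
          name: ssl-redirect    # assumed AWS ALB action name
          port:
            name: use-annotation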

To 5.0.0

Version 5.0.0 introduces support for custom labels and refactors the Kubernetes recommended labels. This is a breaking change because the labels of many resources need to be updated to stay consistent.

In order to upgrade, delete the Deployment before upgrading:

kubectl delete deployment my-release-oauth2-proxy

This will cause a brief downtime.

To avoid downtime, you can instead perform the following actions (sketched below):

  • Perform a non-cascading removal of the deployment that keeps the pods running
  • Add new labels to pods
  • Perform helm upgrade
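
A minimal command sketch, assuming the release name my-release, that the old pod selector label is app=oauth2-proxy, and that the new labels follow the Kubernetes recommended app.kubernetes.io/* convention (verify your actual labels with kubectl get pods --show-labels first):

# 1. Remove the Deployment but keep its pods running (non-cascading delete)
kubectl delete deployment my-release-oauth2-proxy --cascade=orphan

# 2. Add the new labels to the still-running pods
kubectl label pods -l app=oauth2-proxy \
  app.kubernetes.io/name=oauth2-proxy \
  app.kubernetes.io/instance=my-release

# 3. Upgrade the release so the new Deployment adopts the relabelled pods
helm upgrade my-release oauth2-proxy/oauth2-proxy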

To 6.0.0

Version 6.0.0 bumps the version of the redis subchart from ~10.6.0 to ~16.4.0. You probably need to adjust your redis config. See here for detailed upgrade instructions.
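
As one example of the kind of adjustment involved (based on the Bitnami redis chart's values layout, an assumption rather than something this chart enforces), a password that older subchart versions took from redis.password is now set under redis.auth:

# values.yaml -- redis subchart 16.x style auth (illustrative)
redis:
  enabled: true
  auth:
    enabled: true
    password: "changeme"   # placeholder; older 10.x subchart versions used a top-level redis.password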

Configuration

The following table lists the configurable parameters of the oauth2-proxy chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| affinity | node/pod affinities | None |
| authenticatedEmailsFile.enabled | Enables authorization of individual email addresses | false |
| authenticatedEmailsFile.persistence | Defines how the email addresses file will be projected, via a configmap or secret | configmap |
| authenticatedEmailsFile.template | Name of the configmap or secret that is handled outside of this chart | "" |
| authenticatedEmailsFile.restrictedUserAccessKey | The key of the configmap or secret that holds the email addresses list | "" |
| authenticatedEmailsFile.restricted_access | email addresses list config | "" |
| authenticatedEmailsFile.annotations | configmap or secret annotations | nil |
| config.clientID | OAuth client ID | "" |
| config.clientSecret | OAuth client secret | "" |
| config.cookieSecret | server specific cookie for the secret; create a new one with openssl rand -base64 32 \| head -c 32 \| base64 | "" |
| config.existingSecret | existing Kubernetes secret to use for OAuth2 credentials. See oauth2-proxy.secrets helper for the required values | nil |
| config.configFile | custom oauth2_proxy.cfg contents for settings not overridable via environment nor command line | "" |
| config.existingConfig | existing Kubernetes configmap to use for the configuration file. See config template for the required values | nil |
| config.cookieName | The name of the cookie that oauth2-proxy will create | "" |
| alphaConfig.enabled | Flag to toggle any alpha config related logic | false |
| alphaConfig.annotations | Configmap annotations | {} |
| alphaConfig.serverConfigData | Arbitrary configuration data to append to the server section | {} |
| alphaConfig.metricsConfigData | Arbitrary configuration data to append to the metrics section | {} |
| alphaConfig.configData | Arbitrary configuration data to append | {} |
| alphaConfig.configFile | Arbitrary configuration to append, treated as a Go template and rendered with the root context | "" |
| alphaConfig.existingConfig | existing Kubernetes configmap to use for the alpha configuration file. See config template for the required values | nil |
| alphaConfig.existingSecret | existing Kubernetes secret to use for the alpha configuration file. See config template for the required values | nil |
| customLabels | Custom labels to add into metadata | {} |
| config.google.adminEmail | user impersonated by the google service account | "" |
| config.google.useApplicationDefaultCredentials | use the application-default credentials (i.e. Workload Identity on GKE) instead of providing a service account json | false |
| config.google.targetPrincipal | service account to use/impersonate | "" |
| config.google.serviceAccountJson | google service account json contents | "" |
| config.google.existingConfig | existing Kubernetes configmap to use for the service account file. See google secret template for the required values | nil |
| config.google.groups | restrict logins to members of these google groups | [] |
| containerPort | used to customise the port on the deployment | "" |
| extraArgs | Extra arguments to give the binary, either as a map with key:value pairs or as a list, which allows configuring the same flag multiple times (e.g. ["--allowed-role=CLIENT_ID:CLIENT_ROLE_NAME_A", "--allowed-role=CLIENT_ID:CLIENT_ROLE_NAME_B"]) | {} or [] |
| extraContainers | List of extra containers to be added to the pod | [] |
| extraEnv | key:value list of extra environment variables to give the binary | [] |
| extraVolumes | list of extra volumes | [] |
| extraVolumeMounts | list of extra volumeMounts | [] |
| hostAliases | list of aliases to be added to /etc/hosts for network name resolution | |
| htpasswdFile.enabled | enable htpasswd-file option | false |
| htpasswdFile.entries | list of encrypted user:passwords | {} |
| htpasswdFile.existingSecret | existing Kubernetes secret to use for OAuth2 htpasswd file | "" |
| httpScheme | http or https; name used for the port on the deployment, httpGet port name and scheme used for the liveness- and readinessProbes, name and targetPort used for the service | http |
| image.pullPolicy | Image pull policy | IfNotPresent |
| image.repository | Image repository | quay.io/oauth2-proxy/oauth2-proxy |
| image.tag | Image tag | "" (defaults to appVersion) |
| imagePullSecrets | Specify image pull secrets | nil (does not add image pull secrets to deployed pods) |
| ingress.enabled | Enable Ingress | false |
| ingress.className | name referencing IngressClass | nil |
| ingress.path | Ingress accepted path | / |
| ingress.pathType | Ingress path type | ImplementationSpecific |
| ingress.extraPaths | Ingress extra paths to prepend to every host configuration. Useful when configuring custom actions with AWS ALB Ingress Controller | [] |
| ingress.labels | Ingress extra labels | {} |
| ingress.annotations | Ingress annotations | nil |
| ingress.hosts | Ingress accepted hostnames | nil |
| ingress.tls | Ingress TLS configuration | nil |
| initContainers.waitForRedis.enabled | if redis.enabled is true, use an init container to wait for the redis master pod to be ready. If serviceAccount.enabled is true, additionally create a role/binding to get, list and watch the redis master pod | true |
| initContainers.waitForRedis.image.pullPolicy | kubectl image pull policy | IfNotPresent |
| initContainers.waitForRedis.image.repository | kubectl image repository | docker.io/bitnami/kubectl |
| initContainers.waitForRedis.kubectlVersion | kubectl version to use for the init container | printf "%s.%s" .Capabilities.KubeVersion.Major (.Capabilities.KubeVersion.Minor |
| initContainers.waitForRedis.securityContext.enabled | enable Kubernetes security context on container | true |
| initContainers.waitForRedis.timeout | number of seconds | 180 |
| initContainers.waitForRedis.resources | pod resource requests & limits | {} |
| livenessProbe.enabled | enable Kubernetes livenessProbe. Disable to use oauth2-proxy with Istio mTLS. See Istio FAQ | true |
| livenessProbe.initialDelaySeconds | number of seconds | 0 |
| livenessProbe.timeoutSeconds | number of seconds | 1 |
| namespaceOverride | Override the deployment namespace | "" |
| nodeSelector | node labels for pod assignment | {} |
| deploymentAnnotations | annotations to add to the deployment | {} |
| podAnnotations | annotations to add to each pod | {} |
| podLabels | additional labels to add to each pod | {} |
| podDisruptionBudget.enabled | Enable creation of a PodDisruptionBudget (only if replicaCount > 1) | true |
| podDisruptionBudget.minAvailable | minAvailable parameter for the PodDisruptionBudget | 1 |
| podSecurityContext | Kubernetes security context to apply to the pod | {} |
| priorityClassName | priorityClassName | nil |
| readinessProbe.enabled | enable Kubernetes readinessProbe. Disable to use oauth2-proxy with Istio mTLS. See Istio FAQ | true |
| readinessProbe.initialDelaySeconds | number of seconds | 0 |
| readinessProbe.timeoutSeconds | number of seconds | 5 |
| readinessProbe.periodSeconds | number of seconds | 10 |
| readinessProbe.successThreshold | number of successes | 1 |
| replicaCount | desired number of pods | 1 |
| resources | pod resource requests & limits | {} |
| revisionHistoryLimit | maximum number of revisions maintained | 10 |
| service.portNumber | port number for the service | 80 |
| service.appProtocol | application protocol on the port of the service | http |
| service.type | type of service | ClusterIP |
| service.clusterIP | cluster ip address | nil |
| service.loadBalancerIP | ip of load balancer | nil |
| service.loadBalancerSourceRanges | allowed source ranges in load balancer | nil |
| service.nodePort | external port number for the service when service.type is NodePort | nil |
| serviceAccount.enabled | create a service account | true |
| serviceAccount.name | the service account name | "" |
| serviceAccount.annotations | (optional) annotations for the service account | {} |
| strategy | configure deployment strategy | {} |
| tolerations | list of node taints to tolerate | [] |
| securityContext.enabled | enable Kubernetes security context on container | true |
| proxyVarsAsSecrets | choose between environment values or secrets for setting up OAUTH2_PROXY variables. When set to false, remember to add the variables OAUTH2_PROXY_CLIENT_ID, OAUTH2_PROXY_CLIENT_SECRET, OAUTH2_PROXY_COOKIE_SECRET in extraEnv | true |
| sessionStorage.type | Session storage type, which can be one of the following: cookie or redis | cookie |
| sessionStorage.redis.existingSecret | Name of the Kubernetes secret containing the redis & redis sentinel password values (see also sessionStorage.redis.passwordKey) | "" |
| sessionStorage.redis.password | Redis password. Applicable for all Redis configurations. Taken from the redis subchart secret if not set. sessionStorage.redis.existingSecret takes precedence | nil |
| sessionStorage.redis.passwordKey | Key of the Kubernetes secret data containing the redis password value | redis-password |
| sessionStorage.redis.clientType | Allows the user to select which type of client will be used for the redis instance. Possible options are: sentinel, cluster or standalone | standalone |
| sessionStorage.redis.standalone.connectionUrl | URL of the redis standalone server for redis session storage (e.g. redis://HOST[:PORT]). Automatically generated if not set | "" |
| sessionStorage.redis.cluster.connectionUrls | List of Redis cluster connection URLs (e.g. ["redis://127.0.0.1:8000", "redis://127.0.0.1:8000"]) | [] |
| sessionStorage.redis.sentinel.existingSecret | Name of the Kubernetes secret containing the redis sentinel password value (see also sessionStorage.redis.sentinel.passwordKey). Default: sessionStorage.redis.existingSecret | "" |
| sessionStorage.redis.sentinel.password | Redis sentinel password. Used only for the sentinel connection; any redis node passwords need to use sessionStorage.redis.password | nil |
| sessionStorage.redis.sentinel.passwordKey | Key of the Kubernetes secret data containing the redis sentinel password value | redis-sentinel-password |
| sessionStorage.redis.sentinel.masterName | Redis sentinel master name | nil |
| sessionStorage.redis.sentinel.connectionUrls | List of Redis sentinel connection URLs (e.g. ["redis://127.0.0.1:8000", "redis://127.0.0.1:8000"]) | [] |
| topologySpreadConstraints | List of pod topology spread constraints | [] |
| redis.enabled | Enable the redis subchart deployment | false |
| checkDeprecation | Enable deprecation checks | true |
| metrics.enabled | Enable Prometheus metrics endpoint | true |
| metrics.port | Serve Prometheus metrics on this port | 44180 |
| metrics.nodePort | External port for the metrics when service.type is NodePort | nil |
| metrics.service.appProtocol | application protocol of the metrics port in the service | http |
| metrics.serviceMonitor.enabled | Enable Prometheus Operator ServiceMonitor | false |
| metrics.serviceMonitor.namespace | Define the namespace where to deploy the ServiceMonitor resource | "" |
| metrics.serviceMonitor.prometheusInstance | Prometheus Instance definition | default |
| metrics.serviceMonitor.interval | Prometheus scrape interval | 60s |
| metrics.serviceMonitor.scrapeTimeout | Prometheus scrape timeout | 30s |
| metrics.serviceMonitor.labels | Add custom labels to the ServiceMonitor resource | {} |
| metrics.serviceMonitor.scheme | HTTP scheme to use for scraping. Can be used with tlsConfig, for example if using Istio mTLS | "" |
| metrics.serviceMonitor.tlsConfig | TLS configuration to use when scraping the endpoint, for example if using Istio mTLS | {} |
| metrics.serviceMonitor.bearerTokenFile | Path to bearer token file | "" |
| metrics.serviceMonitor.annotations | Used to pass annotations that are used by the Prometheus installed in your cluster | {} |
| metrics.serviceMonitor.metricRelabelings | Metric relabel configs to apply to samples before ingestion | [] |
| metrics.serviceMonitor.relabelings | Relabel configs to apply to samples before ingestion | [] |
| extraObjects | Extra K8s manifests to deploy | [] |
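
For instance, the session-storage and redis parameters above can be combined to move session state out of cookies and into the bundled redis subchart (a minimal sketch):

# values.yaml -- redis-backed session storage (illustrative)
sessionStorage:
  type: redis    # store sessions in redis instead of cookies

redis:
  enabled: true  # deploy the redis subchart; the session password is taken from its secret if not set explicitly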

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install my-release oauth2-proxy/oauth2-proxy \
  --set=image.tag=v0.0.2,resources.limits.cpu=200m

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

$ helm install my-release oauth2-proxy/oauth2-proxy -f values.yaml

Tip: You can use the default values.yaml

TLS Configuration

See: TLS Configuration. Use values.yaml like:

...
extraArgs:
  tls-cert-file: /path/to/cert.pem
  tls-key-file: /path/to/cert.key

extraVolumes:
  - name: ssl-cert
    secret:
      secretName: my-ssl-secret

extraVolumeMounts:
  - mountPath: /path/to/
    name: ssl-cert
...

With a secret called my-ssl-secret:

...
data:
  cert.pem: AB..==
  cert.key: CD..==
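
The secret itself can be created from local certificate files, for example (the local file names are assumptions):

kubectl create secret generic my-ssl-secret \
  --from-file=cert.pem=./cert.pem \
  --from-file=cert.key=./cert.key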

Extra environment variable templating

The extraEnv value supports the tpl function, which evaluates strings as templates inside the deployment template. This is useful for passing a template string as a value for the chart's extra environment variables and for rendering values from external configuration.

...
tplValue: "This is a test value for the tpl function"
extraEnv:
  - name: TEST_ENV_VAR_1
    value: test_value_1
  - name: TEST_ENV_VAR_2
    value: '{{ .Values.tplValue }}'

Custom templates configuration

You can replace the default template files using a Kubernetes configMap volume. The default templates are the two files sign_in.html and error.html.

config:
  configFile: |
    ...
    custom_templates_dir = "/data/custom-templates"

extraVolumes:
  - name: custom-templates
    configMap:
      name: oauth2-proxy-custom-templates

extraVolumeMounts:
  - name: custom-templates
    mountPath: "/data/custom-templates"
    readOnly: true

extraObjects:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: oauth2-proxy-custom-templates
    data:
      sign_in.html: |
        <!DOCTYPE html>
        <html>
        <body>sign_in</body>
        </html>
      error.html: |
        <!DOCTYPE html>
        <html>
        <body>
        <h1>error</h1>
        <p>{{.StatusCode}}</p>
        </body>
        </html>

Multi whitelist-domain configuration

To use a multi whitelist-domain configuration for a single oauth2-proxy instance, you have to use the config.configFile section.

This overwrites the /etc/oauth2_proxy/oauth2_proxy.cfg configuration file. In this example the Google provider is used, but the configuration options for all other providers can be found under oauth_provider.

config:
  ...
  clientID: "$YOUR_GOOGLE_CLIENT_ID"
  clientSecret: "$YOUR_GOOGLE_CLIENT_SECRET"
  cookieSecret: "$YOUR_COOKIE_SECRET"
  configFile: |
    ...
    email_domains = [ "*" ]
    upstreams = [ "file:///dev/null" ]
    cookie_secure = "false"
    cookie_domains = [ ".domain.com", ".otherdomain.io" ]
    whitelist_domains = [ ".domain.com", ".otherdomain.io"]
    provider = "google"