.. This work is licensed under a Creative Commons Attribution 4.0
.. International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2018-2020 Amdocs, Bell Canada, Orange, Samsung
.. Modification copyright (C) 2022 Nordix Foundation

.. Links
.. _Helm: https://docs.helm.sh/
.. _Helm Charts: https://github.com/kubernetes/charts
.. _Kubernetes: https://Kubernetes.io/
.. _Docker: https://www.docker.com/
.. _Nexus: https://nexus.onap.org/
.. _AWS Elastic Block Store: https://aws.amazon.com/ebs/
.. _Azure File: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction
.. _GCE Persistent Disk: https://cloud.google.com/compute/docs/disks/
.. _Gluster FS: https://www.gluster.org/
.. _Kubernetes Storage Class: https://Kubernetes.io/docs/concepts/storage/storage-classes/
.. _Assigning Pods to Nodes: https://Kubernetes.io/docs/concepts/configuration/assign-pod-node/

.. _developer-guide-label:

OOM Developer Guide
###################

.. figure:: ../../resources/images/oom_logo/oomLogoV2-medium.png
   :align: right

ONAP consists of a large number of components, each of which is a substantial
project in its own right, which results in a high degree of complexity in
deployment and management. To cope with this complexity the ONAP Operations
Manager (OOM) uses a Helm_ model of ONAP - Helm being the primary management
system for Kubernetes_ container systems - to drive all user-driven life-cycle
management operations. The Helm model of ONAP is composed of a set of
hierarchical Helm charts that define the structure of the ONAP components and
the configuration of these components. These charts are fully parameterized
such that a single environment file defines all of the parameters needed to
deploy ONAP. A user of ONAP may maintain several such environment files to
control the deployment of ONAP in multiple environments such as development,
pre-production, and production.

The following sections describe how the ONAP Helm charts are constructed.

.. contents::
   :depth: 3
   :local:
..

Container Background
====================
Linux containers allow for an application and all of its operating system
dependencies to be packaged and deployed as a single unit without including a
guest operating system as done with virtual machines. The most popular
container solution is Docker_ which provides tools for container management
like the Docker Host (dockerd) which can create, run, stop, move, or delete a
container. Docker has a very popular registry of container images that can be
used by any Docker system; however, in the ONAP context, Docker images are
built by the standard CI/CD flow and stored in Nexus_ repositories. OOM uses
the "standard" ONAP docker containers and three new ones specifically created
for OOM.

Containers are isolated from each other primarily via name spaces within the
Linux kernel without the need for multiple guest operating systems. As such,
multiple containers can be deployed with little overhead such that all of ONAP
can be deployed on a single host. With some optimization of the ONAP components
(e.g. elimination of redundant database instances) it may be possible to deploy
ONAP on a single laptop computer.

Helm Charts
===========
A Helm chart is a collection of files that describe a related set of Kubernetes
resources. A simple chart might be used to deploy something simple, like a
memcached pod, while a complex chart might contain many micro-services arranged
in a hierarchy as found in the `aai` ONAP component.

Charts are created as files laid out in a particular directory tree, then they
can be packaged into versioned archives to be deployed. There is a public
archive of `Helm Charts`_ on GitHub that includes many technologies applicable
to ONAP. Some of these charts have been used in ONAP and all of the ONAP charts
have been created following the guidelines provided.

The top level of the ONAP charts is shown below:

.. code-block:: bash

  common
  ├── cassandra
  │   ├── Chart.yaml
  │   ├── resources
  │   │   ├── config
  │   │   │   └── docker-entrypoint.sh
  │   │   ├── exec.py
  │   │   └── restore.sh
  │   ├── templates
  │   │   ├── backup
  │   │   │   ├── configmap.yaml
  │   │   │   ├── cronjob.yaml
  │   │   │   ├── pv.yaml
  │   │   │   └── pvc.yaml
  │   │   ├── configmap.yaml
  │   │   ├── pv.yaml
  │   │   ├── service.yaml
  │   │   └── statefulset.yaml
  │   └── values.yaml
  ├── common
  │   ├── Chart.yaml
  │   ├── templates
  │   │   ├── _createPassword.tpl
  │   │   ├── _ingress.tpl
  │   │   ├── _labels.tpl
  │   │   ├── _mariadb.tpl
  │   │   ├── _name.tpl
  │   │   ├── _namespace.tpl
  │   │   ├── _repository.tpl
  │   │   ├── _resources.tpl
  │   │   ├── _secret.yaml
  │   │   ├── _service.tpl
  │   │   ├── _storage.tpl
  │   │   └── _tplValue.tpl
  │   └── values.yaml
  ├── ...
  └── postgres-legacy
      ├── Chart.yaml
      ├── charts
      └── configs

The common section of charts consists of a set of templates that assist with
parameter substitution (`_name.tpl`, `_namespace.tpl` and others) and a set of
charts for components used throughout ONAP. When the common components are used
by other charts they are either instantiated each time, or a shared instance
is deployed and used by several components.

All of the ONAP components have charts that follow the pattern shown below:

.. code-block:: bash

  name-of-my-component
  ├── Chart.yaml
  ├── components
  │   └── subcomponent-folder
  ├── charts
  │   └── subchart-folder
  ├── resources
  │   ├── folder1
  │   │   ├── file1
  │   │   └── file2
  │   └── folder2
  │       ├── file3
  │       └── folder3
  │           └── file4
  ├── templates
  │   ├── NOTES.txt
  │   ├── configmap.yaml
  │   ├── deployment.yaml
  │   ├── ingress.yaml
  │   ├── job.yaml
  │   ├── secrets.yaml
  │   └── service.yaml
  └── values.yaml

Note that the component charts / components may include a hierarchy of
sub-components and in themselves can be quite complex.

You can use either the `charts` or the `components` folder for your
subcomponents.
The `charts` folder means that the subcomponent will always be deployed.

The `components` folder means that we can choose whether or not to deploy the
subcomponent.

This choice is made in the root `values.yaml`:

.. code-block:: yaml

  ---
  global:
    key: value

  component1:
    enabled: true
  component2:
    enabled: true

Then, in the `Chart.yaml` dependencies section, you'll use these values:

.. code-block:: yaml

  ---
  dependencies:
    - name: common
      version: ~x.y-0
      repository: '@local'
    - name: component1
      version: ~x.y-0
      repository: 'file://components/component1'
      condition: component1.enabled
    - name: component2
      version: ~x.y-0
      repository: 'file://components/component2'
      condition: component2.enabled

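Helm evaluates each `condition` as a dotted path into the merged values: if
the path resolves to a truthy value, the subchart is deployed. The sketch
below illustrates that lookup idea only; it is not Helm's actual
implementation (in particular, Helm's handling of unresolvable condition
paths differs from this simplification).

.. code-block:: python

  # Illustrative sketch of how a Helm-style "condition" such as
  # "component1.enabled" gates the deployment of a subchart.
  # NOT Helm's real code; it only shows the dotted-path lookup idea.

  def is_enabled(values, condition):
      """Resolve a dotted path in a values dict; truthy result => deploy."""
      node = values
      for key in condition.split("."):
          if not isinstance(node, dict) or key not in node:
              # Simplification: treat an unresolvable path as disabled.
              return False
          node = node[key]
      return bool(node)

  values = {
      "global": {"key": "value"},
      "component1": {"enabled": True},
      "component2": {"enabled": False},
  }

  print(is_enabled(values, "component1.enabled"))  # True
  print(is_enabled(values, "component2.enabled"))  # False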
Configuration of the components varies somewhat from component to component but
generally follows the pattern of one or more `configmap.yaml` files which can
directly provide configuration to the containers in addition to processing
configuration files stored in the `config` directory. It is the responsibility
of each ONAP component team to update these configuration files when changes
are made to the project containers that impact configuration.

The following section describes how the hierarchical ONAP configuration system
is key to management of such a large system.

Configuration Management
========================

ONAP is a large system composed of many components - each of which is a
complex system in itself - that needs to be deployed in a number of different
ways. For example, within a single operator's network there may be R&D
deployments under active development, pre-production versions undergoing system
testing and production systems that are operating live networks. Each of these
deployments will differ in significant ways, such as the version of the
software images deployed. In addition, there may be a number of application
specific configuration differences, such as operating system environment
variables. The following describes how the Helm configuration management
system is used within the OOM project to manage both ONAP infrastructure
configuration as well as ONAP components configuration.

One of the artifacts that OOM/Kubernetes uses to deploy ONAP components is the
deployment specification, yet another YAML file. Within these deployment specs
are a number of parameters as shown in the following example:

.. code-block:: yaml

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    labels:
      app.kubernetes.io/name: zookeeper
      helm.sh/chart: zookeeper
      app.kubernetes.io/component: server
      app.kubernetes.io/managed-by: Tiller
      app.kubernetes.io/instance: onap-oof
    name: onap-oof-zookeeper
    namespace: onap
  spec:
    <...>
    replicas: 3
    selector:
      matchLabels:
        app.kubernetes.io/name: zookeeper
        app.kubernetes.io/component: server
        app.kubernetes.io/instance: onap-oof
    serviceName: onap-oof-zookeeper-headless
    template:
      metadata:
        labels:
          app.kubernetes.io/name: zookeeper
          helm.sh/chart: zookeeper
          app.kubernetes.io/component: server
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/instance: onap-oof
      spec:
        <...>
        affinity:
        containers:
          - name: zookeeper
            <...>
            image: gcr.io/google_samples/k8szk:v3
            imagePullPolicy: Always
            <...>
            ports:
              - containerPort: 2181
                name: client
                protocol: TCP
              - containerPort: 3888
                name: election
                protocol: TCP
              - containerPort: 2888
                name: server
                protocol: TCP
            <...>

Note that within the statefulset specification, one of the container arguments
is the key/value pair `image: gcr.io/google_samples/k8szk:v3` which
specifies the version of the zookeeper software to deploy. Although the
statefulset specifications greatly simplify deployment, maintenance of the
statefulset specifications themselves becomes problematic as software versions
change over time or as different versions are required for different
statefulsets. For example, if the R&D team needs to deploy a newer version of
mariadb than what is currently used in the production environment, they would
need to clone the statefulset specification and change this value. Fortunately,
this problem has been solved with the templating capabilities of Helm.

The following example shows how the statefulset specifications are modified to
incorporate Helm templates such that key/value pairs can be defined outside of
the statefulset specifications and passed during instantiation of the
component.

.. code-block:: yaml

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
    labels: {{- include "common.labels" . | nindent 4 }}
  spec:
    replicas: {{ .Values.replicaCount }}
    selector:
      matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
    # serviceName is only needed for StatefulSet
    # put the postfix part only if you have added a postfix on the service name
    serviceName: {{ include "common.servicename" . }}-{{ .Values.service.postfix }}
    <...>
    template:
      metadata:
        labels: {{- include "common.labels" . | nindent 8 }}
        annotations: {{- include "common.tplValue" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
        name: {{ include "common.name" . }}
      spec:
        <...>
        containers:
          - name: {{ include "common.name" . }}
            image: {{ .Values.image }}
            imagePullPolicy: {{ .Values.global.pullPolicy | default .Values.pullPolicy }}
            ports:
            {{- range $index, $port := .Values.service.ports }}
              - containerPort: {{ $port.port }}
                name: {{ $port.name }}
            {{- end }}
            {{- range $index, $port := .Values.service.headlessPorts }}
              - containerPort: {{ $port.port }}
                name: {{ $port.name }}
            {{- end }}
            <...>

This version of the statefulset specification has gone through the process of
templating values that are likely to change between statefulsets. Note that the
image is now specified as `image: {{ .Values.image }}` instead of the
hard-coded string used previously. During deployment, Helm (actually the Helm
sub-component Tiller) substitutes the {{ .. }} entries with a variable defined
in a values.yaml file. The content of this file is as follows:

.. code-block:: yaml

  <...>
  image: gcr.io/google_samples/k8szk:v3
  replicaCount: 3
  <...>

Within the values.yaml file there is an image key with the value
`gcr.io/google_samples/k8szk:v3` which is the same value used in
the non-templated version. Once all of the substitutions are complete, the
resulting statefulset specification is ready to be used by Kubernetes.

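Conceptually, this substitution step can be pictured as a simple text pass
that replaces `{{ .Values.x }}` markers with entries from values.yaml. The
sketch below is a toy illustration of that idea only; Helm's real rendering
engine is the far richer Go template language.

.. code-block:: python

  import re

  # Toy illustration of Helm-style value substitution: replace
  # "{{ .Values.<key> }}" markers with entries from a values dict.
  # Helm's real engine (Go templates) is much richer than this.

  def render(template, values):
      def lookup(match):
          return str(values[match.group(1)])
      return re.sub(r"\{\{\s*\.Values\.(\w+)\s*\}\}", lookup, template)

  spec = "image: {{ .Values.image }}\nreplicas: {{ .Values.replicaCount }}"
  values = {"image": "gcr.io/google_samples/k8szk:v3", "replicaCount": 3}

  print(render(spec, values))
  # -> image: gcr.io/google_samples/k8szk:v3
  # -> replicas: 3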
When creating a template, consider the use of default values if appropriate.
Helm templating has built-in support for default values; here is
an example:

.. code-block:: yaml

  imagePullSecrets:
  - name: "{{ .Values.nsPrefix | default "onap" }}-docker-registry-key"

The pipeline operator ("|") used here hints at the power of Helm templates in
that, much like an operating system command line, the pipeline operator allows
over 60 Helm functions to be embedded directly into the template (note that the
Helm template language is a superset of the Go template language). These
functions include simple string operations like upper and more complex flow
control operations like if/else.

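As an illustration, a pipeline with `upper` and an `if`/`else` block could be
combined as below. This is a hypothetical snippet: the value names
(`componentName`, `debugEnabled`) are invented for the example and do not come
from the ONAP charts.

.. code-block:: yaml

  # hypothetical snippet showing pipelines and flow control
  metadata:
    name: {{ .Values.componentName | upper }}
  data:
    {{- if .Values.debugEnabled }}
    logLevel: DEBUG
    {{- else }}
    logLevel: INFO
    {{- end }}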
OOM is mainly Helm templating. In order to have consistent deployment of the
different components of ONAP, some rules must be followed.

Templates are provided in order to create Kubernetes resources (Secrets,
Ingress, Services, ...) or part of Kubernetes resources (names, labels,
resources requests and limits, ...).

A full list and a simple description of these templates can be found in
`kubernetes/common/common/documentation.rst`.

Service template
----------------

In order to create a Service for a component, you have to create a file (with
`service` in the name).
For a normal service, just put the following line:

.. code-block:: yaml

  {{ include "common.service" . }}

For a headless service, the line to put is the following:

.. code-block:: yaml

  {{ include "common.headlessService" . }}

The configuration of the service is done in the component `values.yaml`:

.. code-block:: yaml

  service:
    name: NAME-OF-THE-SERVICE
    postfix: MY-POSTFIX
    type: NodePort
    annotations:
      someAnnotationsKey: value
    ports:
      - name: tcp-MyPort
        port: 5432
        nodePort: 88
      - name: http-api
        port: 8080
        nodePort: 89
      - name: https-api
        port: 9443
        nodePort: 90

The `annotations` and `postfix` keys are optional.
If `service.type` is `NodePort`, then you have to give a `nodePort` value for
your service ports (which is the end of the computed nodePort, see the
example).

It would render the following Service Resource (for a component named
`name-of-my-component`, with version `x.y.z`, helm deployment name
`my-deployment` and `global.nodePortPrefix` `302`):

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      someAnnotationsKey: value
    name: NAME-OF-THE-SERVICE-MY-POSTFIX
    labels:
      app.kubernetes.io/name: name-of-my-component
      helm.sh/chart: name-of-my-component-x.y.z
      app.kubernetes.io/instance: my-deployment-name-of-my-component
      app.kubernetes.io/managed-by: Tiller
  spec:
    ports:
      - port: 5432
        targetPort: tcp-MyPort
        nodePort: 30288
      - port: 8080
        targetPort: http-api
        nodePort: 30289
      - port: 9443
        targetPort: https-api
        nodePort: 30290
    selector:
      app.kubernetes.io/name: name-of-my-component
      app.kubernetes.io/instance: my-deployment-name-of-my-component
    type: NodePort

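The computed `nodePort` is simply the concatenation of `global.nodePortPrefix`
(`302` here) with the two-digit value from the chart, e.g. `88` becomes
`30288`. A minimal sketch of this computation, for illustration only:

.. code-block:: python

  # Illustrative: how the exposed nodePort is derived from the global
  # prefix (e.g. 302) and the two-digit suffix declared in values.yaml.

  def node_port(prefix, suffix):
      """Concatenate the prefix with a zero-padded two-digit suffix."""
      return int("{}{:02d}".format(prefix, suffix))

  print(node_port(302, 88))  # 30288
  print(node_port(302, 90))  # 30290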
In the deployment or statefulSet file, you need to set the right labels in
order for the service to match the pods.

Here is an example to be sure it matches (for a statefulSet):

.. code-block:: yaml

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
    labels: {{- include "common.labels" . | nindent 4 }}
  spec:
    selector:
      matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
    # serviceName is only needed for StatefulSet
    # put the postfix part only if you have added a postfix on the service name
    serviceName: {{ include "common.servicename" . }}-{{ .Values.service.postfix }}
    <...>
    template:
      metadata:
        labels: {{- include "common.labels" . | nindent 8 }}
        annotations: {{- include "common.tplValue" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
        name: {{ include "common.name" . }}
      spec:
        <...>
        containers:
          - name: {{ include "common.name" . }}
            ports:
            {{- range $index, $port := .Values.service.ports }}
              - containerPort: {{ $port.port }}
                name: {{ $port.name }}
            {{- end }}
            {{- range $index, $port := .Values.service.headlessPorts }}
              - containerPort: {{ $port.port }}
                name: {{ $port.name }}
            {{- end }}
            <...>

The configuration of the service is done in the component `values.yaml`:

.. code-block:: yaml

  service:
    name: NAME-OF-THE-SERVICE
    headless:
      postfix: NONE
      annotations:
        anotherAnnotationsKey: value
      publishNotReadyAddresses: true
    headlessPorts:
      - name: tcp-MyPort
        port: 5432
      - name: http-api
        port: 8080
      - name: https-api
        port: 9443

The `headless.annotations`, `headless.postfix` and
`headless.publishNotReadyAddresses` keys are optional.

If `headless.postfix` is not set, then `-headless` is appended to the
service name.

If it is set to `NONE`, there will be no postfix.

And if it is set to something else, then `-something` is appended to the
service name.

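These naming rules can be summarized in a short sketch (illustrative only;
the real logic lives in the `common` chart's Helm templates):

.. code-block:: python

  # Sketch of the headless service naming rule described above; the
  # actual implementation is in the common chart's Helm templates.

  def headless_service_name(base, postfix=None):
      if postfix is None:       # key not set -> append "-headless"
          return base + "-headless"
      if postfix == "NONE":     # explicit NONE -> no postfix
          return base
      return base + "-" + postfix  # otherwise append "-<postfix>"

  print(headless_service_name("NAME-OF-THE-SERVICE"))          # NAME-OF-THE-SERVICE-headless
  print(headless_service_name("NAME-OF-THE-SERVICE", "NONE"))  # NAME-OF-THE-SERVICE
  print(headless_service_name("NAME-OF-THE-SERVICE", "foo"))   # NAME-OF-THE-SERVICE-foo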
It would render the following Service Resource (for a component named
`name-of-my-component`, with version `x.y.z`, helm deployment name
`my-deployment` and `global.nodePortPrefix` `302`):

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      anotherAnnotationsKey: value
    name: NAME-OF-THE-SERVICE
    labels:
      app.kubernetes.io/name: name-of-my-component
      helm.sh/chart: name-of-my-component-x.y.z
      app.kubernetes.io/instance: my-deployment-name-of-my-component
      app.kubernetes.io/managed-by: Tiller
  spec:
    clusterIP: None
    ports:
      - port: 5432
        targetPort: tcp-MyPort
      - port: 8080
        targetPort: http-api
      - port: 9443
        targetPort: https-api
    publishNotReadyAddresses: true
    selector:
      app.kubernetes.io/name: name-of-my-component
      app.kubernetes.io/instance: my-deployment-name-of-my-component
    type: ClusterIP

The previous example of StatefulSet would also match (except for the `postfix`
part, obviously).

Creating Deployment or StatefulSet
----------------------------------

Deployment and StatefulSet should use the `apps/v1` apiVersion (which appeared
in v1.9).
As seen in the service part, the following parts are mandatory:

.. code-block:: yaml

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
    labels: {{- include "common.labels" . | nindent 4 }}
  spec:
    selector:
      matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
    # serviceName is only needed for StatefulSet
    # put the postfix part only if you have added a postfix on the service name
    serviceName: {{ include "common.servicename" . }}-{{ .Values.service.postfix }}
    <...>
    template:
      metadata:
        labels: {{- include "common.labels" . | nindent 8 }}
        annotations: {{- include "common.tplValue" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
        name: {{ include "common.name" . }}
      spec:
        <...>
        containers:
          - name: {{ include "common.name" . }}

ONAP Application Configuration
------------------------------

Dependency Management
---------------------
These Helm charts describe the desired state
of an ONAP deployment and instruct the Kubernetes container manager as to how
to maintain the deployment in this state. These dependencies dictate the order
in which the containers are started for the first time such that these
dependencies are always met without arbitrary sleep times between container
startups. For example, the SDC back-end container requires the Elastic-Search,
Cassandra and Kibana containers within SDC to be ready and is also dependent on
DMaaP (or the message-router) to be ready - where ready implies the built-in
"readiness" probes succeeded - before becoming fully operational. When an
initial deployment of ONAP is requested the current state of the system is NULL
so ONAP is deployed by the Kubernetes manager as a set of Docker containers on
one or more predetermined hosts. The hosts could be physical machines or
virtual machines. When deploying on virtual machines the resulting system will
be very similar to "Heat" based deployments, i.e. Docker containers running
within a set of VMs, the primary difference being that the allocation of
containers to VMs is done dynamically with OOM and statically with "Heat".
The example SO deployment descriptor file shows SO's dependency on its mariadb
database component:

SO deployment specification excerpt:

622
Sylvain Desbureaux60c74802019-12-12 14:35:01 +0100623 apiVersion: apps/v1
Roger Maitlandac643812018-03-28 09:52:34 -0400624 kind: Deployment
625 metadata:
Sylvain Desbureaux60c74802019-12-12 14:35:01 +0100626 name: {{ include "common.fullname" . }}
Roger Maitlandac643812018-03-28 09:52:34 -0400627 namespace: {{ include "common.namespace" . }}
Sylvain Desbureaux60c74802019-12-12 14:35:01 +0100628 labels: {{- include "common.labels" . | nindent 4 }}
Roger Maitlandac643812018-03-28 09:52:34 -0400629 spec:
630 replicas: {{ .Values.replicaCount }}
Sylvain Desbureaux60c74802019-12-12 14:35:01 +0100631 selector:
632 matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
Roger Maitlandac643812018-03-28 09:52:34 -0400633 template:
634 metadata:
635 labels:
636 app: {{ include "common.name" . }}
637 release: {{ .Release.Name }}
638 spec:
639 initContainers:
640 - command:
Sylvain Desbureaux1694e1d2020-08-21 09:58:25 +0200641 - /app/ready.py
Roger Maitlandac643812018-03-28 09:52:34 -0400642 args:
643 - --container-name
644 - so-mariadb
645 env:
646 ...
647
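The `/app/ready.py` init container above delays SO's start-up until
`so-mariadb` reports ready. Its essence is a poll-and-wait loop; the sketch
below illustrates that pattern with a pluggable probe callable. This is not
the actual ONAP readiness image (which queries the Kubernetes API for
container readiness), only the general idea.

.. code-block:: python

  import time

  # Pattern sketch: poll a readiness probe until it succeeds or a
  # timeout expires. "probe" is any callable returning True once the
  # dependency (e.g. so-mariadb) is ready.

  def wait_for(probe, timeout_s=300, interval_s=5, sleep=time.sleep):
      deadline = time.monotonic() + timeout_s
      while time.monotonic() < deadline:
          if probe():
              return True
          sleep(interval_s)
      return False

  # Stub probe that succeeds on the third poll:
  attempts = {"count": 0}
  def stub_probe():
      attempts["count"] += 1
      return attempts["count"] >= 3

  print(wait_for(stub_probe, timeout_s=10, sleep=lambda s: None))  # True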
Kubernetes Container Orchestration
==================================
The ONAP components are managed by the Kubernetes_ container management system
which maintains the desired state of the container system as described by one
or more deployment descriptors - similar in concept to OpenStack HEAT
Orchestration Templates. The following sections describe the fundamental
objects managed by Kubernetes, the network these components use to communicate
with each other and other entities outside of ONAP and the templates that
describe the configuration and desired state of the ONAP components.

Name Spaces
-----------
Within the namespaces are Kubernetes services that provide external
connectivity to pods that host Docker containers.

ONAP Components to Kubernetes Object Relationships
--------------------------------------------------
Kubernetes deployments consist of multiple objects:

- **nodes** - a worker machine - either physical or virtual - that hosts
  multiple containers managed by Kubernetes.
- **services** - an abstraction of a logical set of pods that provide a
  micro-service.
- **pods** - one or more (but typically one) container(s) that provide specific
  application functionality.
- **persistent volumes** - one or more permanent volumes need to be established
  to hold non-ephemeral configuration and state data.

The relationship between these objects is shown in the following figure:

.. .. uml::
..
..   @startuml
..   node PH {
..     component Service {
..       component Pod0
..       component Pod1
..     }
..   }
..
..   database PV
..   @enduml

.. figure:: ../../resources/images/k8s/kubernetes_objects.png

OOM uses these Kubernetes objects as described in the following sections.

Nodes
~~~~~
OOM works with both physical and virtual worker machines.

* Virtual Machine Deployments - If ONAP is to be deployed onto a set of virtual
  machines, the creation of the VMs is outside of the scope of OOM and could be
  done in many ways, such as

  * manually, for example by a user using the OpenStack Horizon dashboard or
    AWS EC2, or
  * automatically, for example with the use of an OpenStack Heat Orchestration
    Template which builds an ONAP stack, an Azure ARM template, an AWS
    CloudFormation Template, or
  * orchestrated, for example with Cloudify creating the VMs from a TOSCA
    template and controlling their life cycle for the life of the ONAP
    deployment.

* Physical Machine Deployments - If ONAP is to be deployed onto physical
  machines there are several options but the recommendation is to use Rancher
  along with Helm to associate hosts with a Kubernetes cluster.

Pods
~~~~
A group of containers with shared storage and networking can be grouped
together into a Kubernetes pod. All of the containers within a pod are
co-located and co-scheduled so they operate as a single unit. Within the ONAP
Amsterdam release, pods are mapped one-to-one to docker containers although
this may change in the future. As explained in the Services section below, the
use of Pods within each ONAP component is abstracted from other ONAP
components.

Services
~~~~~~~~
OOM uses the Kubernetes service abstraction to provide a consistent access
point for each of the ONAP components, independent of the pod or container
architecture of that component. For example, the SDNC component may introduce
OpenDaylight clustering at some point and change the number of pods in this
component to three or more, but this change will be isolated from the other
ONAP components by the service abstraction. A service can include a load
balancer on its ingress to distribute traffic between the pods and even react
to dynamic changes in the number of pods if they are part of a replica set.
736
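A minimal sketch of such a service, using hypothetical label and port values
rather than those of an actual ONAP chart:

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: sdnc             # the stable access point name
  spec:
    selector:
      app: sdnc            # matches the pods, however many replicas exist
    ports:
    - protocol: TCP
      port: 8282           # port other components connect to (illustrative)
      targetPort: 8181     # port the pods actually listen on (illustrative)

Clients address the service by name; Kubernetes load balances across whichever
pods currently match the selector.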
Persistent Volumes
~~~~~~~~~~~~~~~~~~
To enable ONAP to be deployed into a wide variety of cloud infrastructures a
flexible persistent storage architecture, built on Kubernetes persistent
volumes, provides the ability to define the physical storage in a central
location and have all ONAP components securely store their data.

When deploying ONAP into a public cloud, available storage services such as
`AWS Elastic Block Store`_, `Azure File`_, or `GCE Persistent Disk`_ are
options. Alternatively, when deploying into a private cloud the storage
architecture might consist of Fibre Channel, `Gluster FS`_, or iSCSI. Many
other storage options exist; refer to the `Kubernetes Storage Class`_
documentation for a full list of the options. The storage architecture may vary
from deployment to deployment but in all cases a reliable, redundant storage
system must be provided to ONAP with which the state information of all ONAP
components will be securely stored. The Storage Class for a given deployment is
a single parameter listed in the ONAP values.yaml file and therefore is easily
customized. Operation of this storage system is outside the scope of OOM.
755
.. code-block:: yaml

  # Illustrative excerpt only - the exact key layout varies between
  # OOM releases; see the values.yaml of the deployed release.
  global:
    persistence:
      storageClass: gp2   # e.g. an AWS EBS backed Storage Class

Once the storage class is selected and the physical storage is provided, the
ONAP deployment step creates a pool of persistent volumes within the given
physical storage that is used by all of the ONAP components. ONAP components
simply make a claim on these persistent volumes (PV), with a persistent volume
claim (PVC), to gain access to their storage.

The following figure illustrates the relationships between the persistent
volume claims, the persistent volumes, the storage class, and the physical
storage.

.. graphviz::

   digraph PV {
      label = "Persistent Volume Claim to Physical Storage Mapping"
      {
         node [shape=cylinder]
         D0 [label="Drive0"]
         D1 [label="Drive1"]
         Dx [label="Drivex"]
      }
      {
         node [shape=Mrecord label="StorageClass:ceph"]
         sc
      }
      {
         node [shape=point]
         p0 p1 p2
         p3 p4 p5
      }
      subgraph clusterSDC {
         label="SDC"
         PVC0
         PVC1
      }
      subgraph clusterSDNC {
         label="SDNC"
         PVC2
      }
      subgraph clusterSO {
         label="SO"
         PVCn
      }
      PV0 -> sc
      PV1 -> sc
      PV2 -> sc
      PVn -> sc

      sc -> {D0 D1 Dx}
      PVC0 -> PV0
      PVC1 -> PV1
      PVC2 -> PV2
      PVCn -> PVn

      # force all of these nodes to the same line in the given order
      subgraph {
         rank = same; PV0;PV1;PV2;PVn;p0;p1;p2
         PV0->PV1->PV2->p0->p1->p2->PVn [style=invis]
      }

      subgraph {
         rank = same; D0;D1;Dx;p3;p4;p5
         D0->D1->p3->p4->p5->Dx [style=invis]
      }

   }

In order for an ONAP component to use a persistent volume it must make a claim
against a specific persistent volume defined in the ONAP common charts. Note
that there is a one-to-one relationship between a PVC and PV. The following is
an excerpt from a component chart that defines a PVC:

.. code-block:: yaml

  # Illustrative PVC template; the names and helper functions follow the
  # conventions of the OOM common charts but may differ by release.
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
  spec:
    accessModes:
      - {{ .Values.persistence.accessMode }}
    resources:
      requests:
        storage: {{ .Values.persistence.size }}
    storageClassName: {{ include "common.storageClass" . }}

OOM Networking with Kubernetes
------------------------------

- DNS
- Ports - Flattening the containers also exposes port conflicts between the
  containers, which need to be resolved.

Node Ports
~~~~~~~~~~

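A NodePort service exposes a component on a static port of every worker node,
allowing clients outside the cluster to reach an ONAP API without a cloud load
balancer. A generic sketch, in which the names and port numbers are
illustrative only:

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: example-api      # illustrative name
  spec:
    type: NodePort
    selector:
      app: example-api
    ports:
    - port: 8080           # service port inside the cluster
      targetPort: 8080     # container port
      nodePort: 30280      # static port opened on every node (30000-32767)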
Pod Placement Rules
-------------------
OOM will use the rich set of Kubernetes node and pod affinity /
anti-affinity rules to minimize the chance of a single failure resulting in a
loss of ONAP service. Node affinity / anti-affinity is used to guide the
Kubernetes orchestrator in the placement of pods on nodes (physical or virtual
machines). For example:

- if a container uses Intel DPDK technology the pod may state that it has
  affinity to an Intel processor based node, or
- geographical node labels (such as the Kubernetes standard zone or
  region labels) may be used to ensure placement of a DCAE complex close to the
  VNFs generating high volumes of traffic, thus minimizing networking cost.
  Specifically, if nodes were pre-assigned labels East and West, the pod
  deployment spec to distribute pods to these nodes would be:
860
.. code-block:: yaml

  nodeSelector:
    failure-domain.beta.kubernetes.io/region: {{ .Values.location }}

- "location: West" is specified in the `values.yaml` file used to deploy
  one DCAE cluster and "location: East" is specified in a second `values.yaml`
  file (see OOM Configuration Management for more information about
  configuration files like the `values.yaml` file).
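For instance, an illustrative per-site override file could be as small as:

.. code-block:: yaml

  # e.g. values-east.yaml (hypothetical file name)
  location: East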

Node affinity can also be used to achieve geographic redundancy if pods are
assigned to multiple failure domains. For more information refer to `Assigning
Pods to Nodes`_.

.. note::
   One could use Pod to Node assignment to totally constrain Kubernetes when
   doing initial container assignment to replicate the Amsterdam release
   OpenStack Heat based deployment. Should one wish to do this, each VM would
   need a unique node name which would be used to specify a node constraint
   for every component. These assignments could be specified in an environment
   specific values.yaml file. Constraining Kubernetes in this way is not
   recommended.

Kubernetes has a comprehensive system called Taints and Tolerations that can be
used to force the container orchestrator to repel pods from nodes based on
static events (an administrator assigning a taint to a node) or dynamic events
(such as a node becoming unreachable or running out of disk space). There are
no plans to use taints or tolerations in the ONAP Beijing release. Pod
affinity / anti-affinity is the concept of creating a spatial relationship
between pods when the Kubernetes orchestrator does assignment (both initially
and in operation) to nodes as explained in Inter-pod affinity and
anti-affinity. For example, one might choose to co-locate all of the ONAP SDC
containers on a single node as they are not critical runtime components and
co-location minimizes overhead. On the other hand, one might choose to ensure
that all of the containers in an ODL cluster (SDNC and APPC) are placed on
separate nodes such that a node failure has minimal impact to the operation of
the cluster. An example of pod affinity / anti-affinity is shown below:

Pod Affinity / Anti-Affinity

.. code-block:: yaml

  apiVersion: v1
  kind: Pod
  metadata:
    name: with-pod-affinity
  spec:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S1
          topologyKey: failure-domain.beta.kubernetes.io/zone
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: security
                operator: In
                values:
                - S2
            topologyKey: kubernetes.io/hostname
    containers:
    - name: with-pod-affinity
      image: gcr.io/google_containers/pause:2.0

This example contains both podAffinity and podAntiAffinity rules: the first
rule is a must (requiredDuringSchedulingIgnoredDuringExecution) while the
second will be met pending other considerations
(preferredDuringSchedulingIgnoredDuringExecution).

Preemption is another feature that may assist in achieving a repeatable
deployment in the presence of faults that may have reduced the capacity of the
cloud: assigning priority to the containers such that mission critical
components have the ability to evict less critical components. Kubernetes
provides this capability with Pod Priority and Preemption. Prior to having
more advanced production grade features available, the ability to at least
re-deploy ONAP (or a subset of it) reliably provides a level of confidence
that should an outage occur the system can be brought back on-line predictably.
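As a sketch of how such priorities could be declared (the class name and
priority value are illustrative, not part of OOM):

.. code-block:: yaml

  apiVersion: scheduling.k8s.io/v1
  kind: PriorityClass
  metadata:
    name: onap-mission-critical   # illustrative name
  value: 1000000                  # pods in this class may preempt lower values
  globalDefault: false
  description: "For ONAP components that must not be evicted."
  ---
  # A pod opts in by referencing the class:
  apiVersion: v1
  kind: Pod
  metadata:
    name: critical-component      # illustrative
  spec:
    priorityClassName: onap-mission-critical
    containers:
    - name: app
      image: gcr.io/google_containers/pause:2.0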

Health Checks
-------------

Monitoring of ONAP components is configured in the agents within JSON files
stored in gerrit under the consul-agent-config. Here is an example from the
A&AI model loader (aai-model-loader-health.json):

.. code-block:: json

  {
    "service": {
      "name": "A&AI Model Loader",
      "checks": [
        {
          "id": "model-loader-process",
          "name": "Model Loader Presence",
          "script": "/consul/config/scripts/model-loader-script.sh",
          "interval": "15s",
          "timeout": "1s"
        }
      ]
    }
  }

Liveness Probes
---------------

These liveness probes can simply check that a port is available, that a
built-in health check is reporting good health, or that the Consul health
check is positive. For example, to monitor the SDNC component the following
liveness probe can be found in the SDNC DB deployment specification:

.. code-block:: yaml

  # sdnc db liveness probe
  livenessProbe:
    exec:
      command: ["mysqladmin", "ping"]
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5

The 'initialDelaySeconds' parameter controls the delay between container
start-up and the first execution of the probe, while 'periodSeconds' and
'timeoutSeconds' control the actual operation of the probe. Note that
containers are inherently ephemeral so the healing action destroys failed
containers and any state information within them. To avoid a loss of state, a
persistent volume should be used to store all data that needs to be persisted
over the re-creation of a container. Persistent volumes have been created for
the database components of each of the projects and the same technique can be
used for all persistent state information.
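A readiness probe, which gates when a pod starts receiving traffic, is
declared in the same way; a sketch with an assumed HTTP health endpoint (the
path and port vary per component):

.. code-block:: yaml

  readinessProbe:
    httpGet:
      path: /healthcheck   # assumed endpoint, varies per component
      port: 8181
    initialDelaySeconds: 60
    periodSeconds: 10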