.. This work is licensed under a Creative Commons Attribution 4.0
.. International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2018-2020 Amdocs, Bell Canada, Orange, Samsung

.. Links
.. _Helm: https://docs.helm.sh/
.. _Helm Charts: https://github.com/kubernetes/charts
.. _Kubernetes: https://Kubernetes.io/
.. _Docker: https://www.docker.com/
.. _Nexus: https://nexus.onap.org/#welcome
.. _AWS Elastic Block Store: https://aws.amazon.com/ebs/
.. _Azure File: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction
.. _GCE Persistent Disk: https://cloud.google.com/compute/docs/disks/
.. _Gluster FS: https://www.gluster.org/
.. _Kubernetes Storage Class: https://Kubernetes.io/docs/concepts/storage/storage-classes/
.. _Assigning Pods to Nodes: https://Kubernetes.io/docs/concepts/configuration/assign-pod-node/


.. _developer-guide-label:

OOM Developer Guide
###################

.. figure:: oomLogoV2-medium.png
   :align: right

ONAP consists of a large number of components, each of which is a substantial
project in its own right, which results in a high degree of complexity in
deployment and management. To cope with this complexity the ONAP Operations
Manager (OOM) uses a Helm_ model of ONAP - Helm being the primary management
system for Kubernetes_ container systems - to drive all user driven life-cycle
management operations. The Helm model of ONAP is composed of a set of
hierarchical Helm charts that define the structure of the ONAP components and
the configuration of these components. These charts are fully parameterized
such that a single environment file defines all of the parameters needed to
deploy ONAP. A user of ONAP may maintain several such environment files to
control the deployment of ONAP in multiple environments such as development,
pre-production, and production.

The following sections describe how the ONAP Helm charts are constructed.

.. contents::
   :depth: 3
   :local:
..

Container Background
====================
Linux containers allow for an application and all of its operating system
dependencies to be packaged and deployed as a single unit without including a
guest operating system as done with virtual machines. The most popular
container solution is Docker_, which provides tools for container management
like the Docker Host (dockerd) which can create, run, stop, move, or delete a
container. Docker has a very popular registry of container images that can be
used by any Docker system; however, in the ONAP context, Docker images are
built by the standard CI/CD flow and stored in Nexus_ repositories. OOM uses
the "standard" ONAP docker containers and three new ones specifically created
for OOM.

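These life-cycle operations map directly onto ordinary Docker CLI commands; a
minimal sketch, reusing the zookeeper image that appears later in this guide:

.. code-block:: bash

  # pull an image from a registry and run it as a detached container
  docker pull gcr.io/google_samples/k8szk:v3
  docker run -d --name zookeeper gcr.io/google_samples/k8szk:v3
  # stop, then delete the container
  docker stop zookeeper
  docker rm zookeeper
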
Containers are isolated from each other primarily via name spaces within the
Linux kernel without the need for multiple guest operating systems. As such,
multiple containers can be deployed with little overhead such that all of ONAP
can be deployed on a single host. With some optimization of the ONAP components
(e.g. elimination of redundant database instances) it may be possible to deploy
ONAP on a single laptop computer.

Helm Charts
===========
A Helm chart is a collection of files that describe a related set of Kubernetes
resources. A simple chart might be used to deploy something simple, like a
memcached pod, while a complex chart might contain many micro-services arranged
in a hierarchy as found in the `aai` ONAP component.

Charts are created as files laid out in a particular directory tree, then they
can be packaged into versioned archives to be deployed. There is a public
archive of `Helm Charts`_ on GitHub that includes many technologies applicable
to ONAP. Some of these charts have been used in ONAP and all of the ONAP charts
have been created following the guidelines provided.

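As a quick sketch of this chart life-cycle, using the Tiller-based Helm v2 CLI
of this guide's era and placeholder chart/release names:

.. code-block:: bash

  # package the chart directory into a versioned archive
  helm package ./name-of-my-component
  # deploy the packaged chart as a named release
  helm install local/name-of-my-component --name my-deployment
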
The top level of the ONAP charts is shown below:

.. code-block:: bash

  common
  ├── cassandra
  │   ├── Chart.yaml
  │   ├── requirements.yaml
  │   ├── resources
  │   │   ├── config
  │   │   │   └── docker-entrypoint.sh
  │   │   ├── exec.py
  │   │   └── restore.sh
  │   ├── templates
  │   │   ├── backup
  │   │   │   ├── configmap.yaml
  │   │   │   ├── cronjob.yaml
  │   │   │   ├── pv.yaml
  │   │   │   └── pvc.yaml
  │   │   ├── configmap.yaml
  │   │   ├── pv.yaml
  │   │   ├── service.yaml
  │   │   └── statefulset.yaml
  │   └── values.yaml
  ├── common
  │   ├── Chart.yaml
  │   ├── templates
  │   │   ├── _createPassword.tpl
  │   │   ├── _ingress.tpl
  │   │   ├── _labels.tpl
  │   │   ├── _mariadb.tpl
  │   │   ├── _name.tpl
  │   │   ├── _namespace.tpl
  │   │   ├── _repository.tpl
  │   │   ├── _resources.tpl
  │   │   ├── _secret.yaml
  │   │   ├── _service.tpl
  │   │   ├── _storage.tpl
  │   │   └── _tplValue.tpl
  │   └── values.yaml
  ├── ...
  └── postgres-legacy
      ├── Chart.yaml
      ├── requirements.yaml
      ├── charts
      └── configs

The common section of charts consists of a set of templates that assist with
parameter substitution (`_name.tpl`, `_namespace.tpl` and others) and a set of
charts for components used throughout ONAP. When these common components are
used by other charts they can either be instantiated each time, or a shared
instance can be deployed and used by several components.

All of the ONAP components have charts that follow the pattern shown below:

.. code-block:: bash

  name-of-my-component
  ├── Chart.yaml
  ├── requirements.yaml
  ├── component
  │   └── subcomponent-folder
  ├── charts
  │   └── subchart-folder
  ├── resources
  │   ├── folder1
  │   │   ├── file1
  │   │   └── file2
  │   └── folder2
  │       ├── file3
  │       └── folder3
  │           └── file4
  ├── templates
  │   ├── NOTES.txt
  │   ├── configmap.yaml
  │   ├── deployment.yaml
  │   ├── ingress.yaml
  │   ├── job.yaml
  │   ├── secrets.yaml
  │   └── service.yaml
  └── values.yaml

Note that the component charts / components may include a hierarchy of sub
components and in themselves can be quite complex.

You can use either the `charts` or the `components` folder for your
subcomponents. The `charts` folder means that the subcomponent will always be
deployed.

The `components` folder means we can choose if we want to deploy the
subcomponent.

This choice is made in the root `values.yaml`:

.. code-block:: yaml

  ---
  global:
    key: value

  component1:
    enabled: true
  component2:
    enabled: true

Then in `requirements.yaml`, you'll use these values:

.. code-block:: yaml

  ---
  dependencies:
    - name: common
      version: ~x.y-0
      repository: '@local'
    - name: component1
      version: ~x.y-0
      repository: 'file://components/component1'
      condition: component1.enabled
    - name: component2
      version: ~x.y-0
      repository: 'file://components/component2'
      condition: component2.enabled

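With this layout an optional subcomponent can be switched on or off at
deployment time by overriding its flag; a hypothetical example using the
Helm v2 CLI and the placeholder names above:

.. code-block:: bash

  # deploy the chart but leave the optional component2 out
  helm install local/name-of-my-component --name my-deployment \
    --set component2.enabled=false
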
Configuration of the components varies somewhat from component to component but
generally follows the pattern of one or more `configmap.yaml` files which can
directly provide configuration to the containers in addition to processing
configuration files stored in the `config` directory. It is the responsibility
of each ONAP component team to update these configuration files when changes
are made to the project containers that impact configuration.

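One common shape for such a template is to load every file under the chart's
`resources/config` directory into a ConfigMap; a simplified sketch
(individual charts vary):

.. code-block:: yaml

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
  data:
  {{ tpl (.Files.Glob "resources/config/*").AsConfig . | indent 2 }}
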
The following section describes how the hierarchical ONAP configuration system
is key to management of such a large system.

Configuration Management
========================

ONAP is a large system composed of many components - each of which is a
complex system in itself - that needs to be deployed in a number of different
ways. For example, within a single operator's network there may be R&D
deployments under active development, pre-production versions undergoing system
testing and production systems that are operating live networks. Each of these
deployments will differ in significant ways, such as the version of the
software images deployed. In addition, there may be a number of application
specific configuration differences, such as operating system environment
variables. The following describes how the Helm configuration management
system is used within the OOM project to manage both ONAP infrastructure
configuration as well as ONAP components configuration.

One of the artifacts that OOM/Kubernetes uses to deploy ONAP components is the
deployment specification, yet another yaml file. Within these deployment specs
are a number of parameters as shown in the following example:

.. code-block:: yaml

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    labels:
      app.kubernetes.io/name: zookeeper
      helm.sh/chart: zookeeper
      app.kubernetes.io/component: server
      app.kubernetes.io/managed-by: Tiller
      app.kubernetes.io/instance: onap-oof
    name: onap-oof-zookeeper
    namespace: onap
  spec:
    <...>
    replicas: 3
    selector:
      matchLabels:
        app.kubernetes.io/name: zookeeper
        app.kubernetes.io/component: server
        app.kubernetes.io/instance: onap-oof
    serviceName: onap-oof-zookeeper-headless
    template:
      metadata:
        labels:
          app.kubernetes.io/name: zookeeper
          helm.sh/chart: zookeeper
          app.kubernetes.io/component: server
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/instance: onap-oof
      spec:
        <...>
        affinity:
        containers:
          - name: zookeeper
            <...>
            image: gcr.io/google_samples/k8szk:v3
            imagePullPolicy: Always
            <...>
            ports:
              - containerPort: 2181
                name: client
                protocol: TCP
              - containerPort: 3888
                name: election
                protocol: TCP
              - containerPort: 2888
                name: server
                protocol: TCP
            <...>

Note that within the statefulset specification, one of the container arguments
is the key/value pair `image: gcr.io/google_samples/k8szk:v3` which
specifies the version of the zookeeper software to deploy. Although the
statefulset specifications greatly simplify deployment, maintenance of the
statefulset specifications themselves becomes problematic as software versions
change over time or as different versions are required for different
statefulsets. For example, if the R&D team needs to deploy a newer version of
mariadb than what is currently used in the production environment, they would
need to clone the statefulset specification and change this value. Fortunately,
this problem has been solved with the templating capabilities of Helm.

The following example shows how the statefulset specifications are modified to
incorporate Helm templates such that key/value pairs can be defined outside of
the statefulset specifications and passed during instantiation of the
component.

.. code-block:: yaml

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
    labels: {{- include "common.labels" . | nindent 4 }}
  spec:
    replicas: {{ .Values.replicaCount }}
    selector:
      matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
    # serviceName is only needed for StatefulSet
    # put the postfix part only if you have added a postfix on the service name
    serviceName: {{ include "common.servicename" . }}-{{ .Values.service.postfix }}
    <...>
    template:
      metadata:
        labels: {{- include "common.labels" . | nindent 8 }}
        annotations: {{- include "common.tplValue" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
        name: {{ include "common.name" . }}
      spec:
        <...>
        containers:
          - name: {{ include "common.name" . }}
            image: {{ .Values.image }}
            imagePullPolicy: {{ .Values.global.pullPolicy | default .Values.pullPolicy }}
            ports:
              {{- range $index, $port := .Values.service.ports }}
              - containerPort: {{ $port.port }}
                name: {{ $port.name }}
              {{- end }}
              {{- range $index, $port := .Values.service.headlessPorts }}
              - containerPort: {{ $port.port }}
                name: {{ $port.name }}
              {{- end }}
            <...>

This version of the statefulset specification has gone through the process of
templating values that are likely to change between statefulsets. Note that the
image is now specified as `image: {{ .Values.image }}` instead of the
hard-coded string used previously. During the deployment phase, Helm (actually
the Helm sub-component Tiller) substitutes the {{ .. }} entries with a variable
defined in a values.yaml file. The content of this file is as follows:

.. code-block:: yaml

  <...>
  image: gcr.io/google_samples/k8szk:v3
  replicaCount: 3
  <...>

Within the values.yaml file there is an image key with the value
`gcr.io/google_samples/k8szk:v3` which is the same value used in
the non-templated version. Once all of the substitutions are complete, the
resulting statefulset specification is ready to be used by Kubernetes.

When creating a template, consider the use of default values if appropriate.
Helm templating has built-in support for default values; here is
an example:

.. code-block:: yaml

  imagePullSecrets:
  - name: "{{ .Values.nsPrefix | default "onap" }}-docker-registry-key"

The pipeline operator ("|") used here hints at the power of Helm templates:
much like an operating system command line, the pipeline operator allows
over 60 Helm functions to be embedded directly into the template (note that the
Helm template language is a superset of the Go template language). These
functions include simple string operations like upper and more complex flow
control operations like if/else.

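For illustration, a contrived fragment (not taken from an actual OOM chart)
that combines `default`, `upper` and if/else flow control:

.. code-block:: yaml

  # 'default' supplies a fallback value and 'upper' transforms the string
  component: {{ .Values.componentName | default "oom" | upper }}
  # if/else flow control chooses between alternatives
  {{- if .Values.debugEnabled }}
  logLevel: DEBUG
  {{- else }}
  logLevel: INFO
  {{- end }}
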
OOM is mainly Helm templating. In order to have consistent deployment of the
different components of ONAP, some rules must be followed.

Templates are provided in order to create Kubernetes resources (Secrets,
Ingress, Services, ...) or part of Kubernetes resources (names, labels,
resources requests and limits, ...).

A full list with a short description of each is available in
`kubernetes/common/common/documentation.rst`.

Service template
----------------

In order to create a Service for a component, you have to create a file (with
`service` in the name).
For a normal service, just put the following line:

.. code-block:: yaml

  {{ include "common.service" . }}

For a headless service, the line to put is the following:

.. code-block:: yaml

  {{ include "common.headlessService" . }}

The configuration of the service is done in the component `values.yaml`:

.. code-block:: yaml

  service:
    name: NAME-OF-THE-SERVICE
    postfix: MY-POSTFIX
    type: NodePort
    annotations:
      someAnnotationsKey: value
    ports:
    - name: tcp-MyPort
      port: 5432
      nodePort: 88
    - name: http-api
      port: 8080
      nodePort: 89
    - name: https-api
      port: 9443
      nodePort: 90

`annotations` and `postfix` keys are optional.
If `service.type` is `NodePort`, then you have to give a `nodePort` value for
your service ports (which is the end of the computed nodePort, see example).

It would render the following Service Resource (for a component named
`name-of-my-component`, with version `x.y.z`, helm deployment name
`my-deployment` and `global.nodePortPrefix` `302`):

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      someAnnotationsKey: value
    name: NAME-OF-THE-SERVICE-MY-POSTFIX
    labels:
      app.kubernetes.io/name: name-of-my-component
      helm.sh/chart: name-of-my-component-x.y.z
      app.kubernetes.io/instance: my-deployment-name-of-my-component
      app.kubernetes.io/managed-by: Tiller
  spec:
    ports:
    - port: 5432
      targetPort: tcp-MyPort
      nodePort: 30288
    - port: 8080
      targetPort: http-api
      nodePort: 30289
    - port: 9443
      targetPort: https-api
      nodePort: 30290
    selector:
      app.kubernetes.io/name: name-of-my-component
      app.kubernetes.io/instance: my-deployment-name-of-my-component
    type: NodePort

In the deployment or statefulSet file, you need to set the right labels in
order for the service to match the pods.

Here's an example to be sure it matches (for a statefulSet):

.. code-block:: yaml

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
    labels: {{- include "common.labels" . | nindent 4 }}
  spec:
    selector:
      matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
    # serviceName is only needed for StatefulSet
    # put the postfix part only if you have added a postfix on the service name
    serviceName: {{ include "common.servicename" . }}-{{ .Values.service.postfix }}
    <...>
    template:
      metadata:
        labels: {{- include "common.labels" . | nindent 8 }}
        annotations: {{- include "common.tplValue" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
        name: {{ include "common.name" . }}
      spec:
        <...>
        containers:
          - name: {{ include "common.name" . }}
            ports:
              {{- range $index, $port := .Values.service.ports }}
              - containerPort: {{ $port.port }}
                name: {{ $port.name }}
              {{- end }}
              {{- range $index, $port := .Values.service.headlessPorts }}
              - containerPort: {{ $port.port }}
                name: {{ $port.name }}
              {{- end }}
            <...>

The configuration of the service is done in the component `values.yaml`:

.. code-block:: yaml

  service:
    name: NAME-OF-THE-SERVICE
    headless:
      postfix: NONE
      annotations:
        anotherAnnotationsKey: value
      publishNotReadyAddresses: true
    headlessPorts:
    - name: tcp-MyPort
      port: 5432
    - name: http-api
      port: 8080
    - name: https-api
      port: 9443

`headless.annotations`, `headless.postfix` and
`headless.publishNotReadyAddresses` keys are optional.

If `headless.postfix` is not set, then we'll add `-headless` at the end of the
service name.

If it is set to `NONE`, there will be no postfix.

And if it is set to something else, it will add `-something` at the end of the
service name.

It would render the following Service Resource (for a component named
`name-of-my-component`, with version `x.y.z`, helm deployment name
`my-deployment` and `global.nodePortPrefix` `302`):

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      anotherAnnotationsKey: value
    name: NAME-OF-THE-SERVICE
    labels:
      app.kubernetes.io/name: name-of-my-component
      helm.sh/chart: name-of-my-component-x.y.z
      app.kubernetes.io/instance: my-deployment-name-of-my-component
      app.kubernetes.io/managed-by: Tiller
  spec:
    clusterIP: None
    ports:
    - port: 5432
      targetPort: tcp-MyPort
    - port: 8080
      targetPort: http-api
    - port: 9443
      targetPort: https-api
    publishNotReadyAddresses: true
    selector:
      app.kubernetes.io/name: name-of-my-component
      app.kubernetes.io/instance: my-deployment-name-of-my-component
    type: ClusterIP

The previous StatefulSet example would also match (except for the `postfix`
part, obviously).

Creating Deployment or StatefulSet
----------------------------------

Deployment and StatefulSet should use the `apps/v1` API version (which
appeared in Kubernetes v1.9).
As seen in the service part, the following parts are mandatory:

.. code-block:: yaml

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
    labels: {{- include "common.labels" . | nindent 4 }}
  spec:
    selector:
      matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
    # serviceName is only needed for StatefulSet
    # put the postfix part only if you have added a postfix on the service name
    serviceName: {{ include "common.servicename" . }}-{{ .Values.service.postfix }}
    <...>
    template:
      metadata:
        labels: {{- include "common.labels" . | nindent 8 }}
        annotations: {{- include "common.tplValue" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
        name: {{ include "common.name" . }}
      spec:
        <...>
        containers:
          - name: {{ include "common.name" . }}

ONAP Application Configuration
------------------------------

Dependency Management
---------------------
These Helm charts describe the desired state
of an ONAP deployment and instruct the Kubernetes container manager as to how
to maintain the deployment in this state. These dependencies dictate the order
in which the containers are started for the first time such that such
dependencies are always met without arbitrary sleep times between container
startups. For example, the SDC back-end container requires the Elastic-Search,
Cassandra and Kibana containers within SDC to be ready and is also dependent on
DMaaP (or the message-router) to be ready - where ready implies the built-in
"readiness" probes succeeded - before becoming fully operational. When an
initial deployment of ONAP is requested the current state of the system is NULL
so ONAP is deployed by the Kubernetes manager as a set of Docker containers on
one or more predetermined hosts. The hosts could be physical machines or
virtual machines. When deploying on virtual machines the resulting system will
be very similar to "Heat" based deployments, i.e. Docker containers running
within a set of VMs, the primary difference being that the allocation of
containers to VMs is done dynamically with OOM and statically with "Heat".
The example SO deployment descriptor file below shows SO's dependency on its
mariadb database component:

SO deployment specification excerpt:

.. code-block:: yaml

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
    labels: {{- include "common.labels" . | nindent 4 }}
  spec:
    replicas: {{ .Values.replicaCount }}
    selector:
      matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
    template:
      metadata:
        labels:
          app: {{ include "common.name" . }}
          release: {{ .Release.Name }}
      spec:
        initContainers:
        - command:
          - /app/ready.py
          args:
          - --container-name
          - so-mariadb
          env:
  ...

Kubernetes Container Orchestration
==================================
The ONAP components are managed by the Kubernetes_ container management system
which maintains the desired state of the container system as described by one
or more deployment descriptors - similar in concept to OpenStack HEAT
Orchestration Templates. The following sections describe the fundamental
objects managed by Kubernetes, the network these components use to communicate
with each other and other entities outside of ONAP and the templates that
describe the configuration and desired state of the ONAP components.

Name Spaces
-----------
Within the namespaces are Kubernetes services that provide external
connectivity to pods that host Docker containers.

ONAP Components to Kubernetes Object Relationships
--------------------------------------------------
Kubernetes deployments consist of multiple objects:

- **nodes** - a worker machine - either physical or virtual - that hosts
  multiple containers managed by Kubernetes.
- **services** - an abstraction of a logical set of pods that provide a
  micro-service.
- **pods** - one or more (but typically one) container(s) that provide specific
  application functionality.
- **persistent volumes** - one or more permanent volumes need to be established
  to hold non-ephemeral configuration and state data.

The relationship between these objects is shown in the following figure:

.. .. uml::
..
.. @startuml
..   node PH {
..     component Service {
..       component Pod0
..       component Pod1
..     }
..   }
..
..   database PV
.. @enduml

.. figure:: kubernetes_objects.png

OOM uses these Kubernetes objects as described in the following sections.

Nodes
~~~~~
OOM works with both physical and virtual worker machines.

* Virtual Machine Deployments - If ONAP is to be deployed onto a set of virtual
  machines, the creation of the VMs is outside of the scope of OOM and could be
  done in many ways, such as

  * manually, for example by a user using the OpenStack Horizon dashboard or
    AWS EC2, or
  * automatically, for example with the use of an OpenStack Heat Orchestration
    Template which builds an ONAP stack, Azure ARM template, AWS CloudFormation
    Template, or
  * orchestrated, for example with Cloudify creating the VMs from a TOSCA
    template and controlling their life cycle for the life of the ONAP
    deployment.

* Physical Machine Deployments - If ONAP is to be deployed onto physical
  machines there are several options but the recommendation is to use Rancher
  along with Helm to associate hosts with a Kubernetes cluster.

Pods
~~~~
A group of containers with shared storage and networking can be grouped
together into a Kubernetes pod. All of the containers within a pod are
co-located and co-scheduled so they operate as a single unit. Within the ONAP
Amsterdam release, pods are mapped one-to-one to docker containers although
this may change in the future. As explained in the Services section below the
use of Pods within each ONAP component is abstracted from other ONAP
components.

Services
~~~~~~~~
OOM uses the Kubernetes service abstraction to provide a consistent access
point for each of the ONAP components independent of the pod or container
architecture of that component. For example, the SDNC component may introduce
OpenDaylight clustering at some point and change the number of pods in this
component to three or more, but this change will be isolated from the other
ONAP components by the service abstraction. A service can include a load
balancer on its ingress to distribute traffic between the pods and even react
to dynamic changes in the number of pods if they are part of a replica set.

Persistent Volumes
~~~~~~~~~~~~~~~~~~
To enable ONAP to be deployed into a wide variety of cloud infrastructures a
flexible persistent storage architecture, built on Kubernetes persistent
volumes, provides the ability to define the physical storage in a central
location and have all ONAP components securely store their data.

When deploying ONAP into a public cloud, available storage services such as
`AWS Elastic Block Store`_, `Azure File`_, or `GCE Persistent Disk`_ are
options. Alternatively, when deploying into a private cloud the storage
architecture might consist of Fiber Channel, `Gluster FS`_, or iSCSI. Many
other storage options exist; refer to the `Kubernetes Storage Class`_
documentation for a full list of the options. The storage architecture may vary
from deployment to deployment but in all cases a reliable, redundant storage
system must be provided to ONAP with which the state information of all ONAP
components will be securely stored. The Storage Class for a given deployment is
a single parameter listed in the ONAP values.yaml file and therefore is easily
customized. Operation of this storage system is outside the scope of OOM.

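A representative sketch of such a storage block (the key names follow the
common OOM `persistence` convention and the values are illustrative):

.. code-block:: yaml

  # a sketch of the persistence configuration exposed through values.yaml;
  # the storage class is the single parameter that selects the backing storage
  persistence:
    enabled: true
    storageClass: "-"          # "-" means use the cluster default class
    accessMode: ReadWriteOnce
    size: 2Gi
    mountPath: /dockerdata-nfs
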
Once the storage class is selected and the physical storage is provided, the
ONAP deployment step creates a pool of persistent volumes within the given
physical storage that is used by all of the ONAP components. ONAP components
simply make a claim on these persistent volumes (PV), with a persistent volume
claim (PVC), to gain access to their storage.

The following figure illustrates the relationships between the persistent
volume claims, the persistent volumes, the storage class, and the physical
storage.

.. graphviz::

  digraph PV {
    label = "Persistent Volume Claim to Physical Storage Mapping"
    {
      node [shape=cylinder]
      D0 [label="Drive0"]
      D1 [label="Drive1"]
      Dx [label="Drivex"]
    }
    {
      node [shape=Mrecord label="StorageClass:ceph"]
      sc
    }
    {
      node [shape=point]
      p0 p1 p2
      p3 p4 p5
    }
    subgraph clusterSDC {
      label="SDC"
      PVC0
      PVC1
    }
    subgraph clusterSDNC {
      label="SDNC"
      PVC2
    }
    subgraph clusterSO {
      label="SO"
      PVCn
    }
    PV0 -> sc
    PV1 -> sc
    PV2 -> sc
    PVn -> sc

    sc -> {D0 D1 Dx}
    PVC0 -> PV0
    PVC1 -> PV1
    PVC2 -> PV2
    PVCn -> PVn

    # force all of these nodes to the same line in the given order
    subgraph {
      rank = same; PV0;PV1;PV2;PVn;p0;p1;p2
      PV0->PV1->PV2->p0->p1->p2->PVn [style=invis]
    }

    subgraph {
      rank = same; D0;D1;Dx;p3;p4;p5
      D0->D1->p3->p4->p5->Dx [style=invis]
    }

  }

In order for an ONAP component to use a persistent volume it must make a claim
against a specific persistent volume defined in the ONAP common charts. Note
that there is a one-to-one relationship between a PVC and PV. The following is
an excerpt from a component chart that defines a PVC:

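A representative sketch (field values are illustrative; `common.storageClass`
is one of the helpers provided by the common chart's `_storage.tpl`):

.. code-block:: yaml

  # a sketch of a component PVC template; names and sizes are illustrative
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
    labels: {{- include "common.labels" . | nindent 4 }}
  spec:
    accessModes:
    - {{ .Values.persistence.accessMode }}
    resources:
      requests:
        storage: {{ .Values.persistence.size }}
    storageClassName: {{ include "common.storageClass" . }}
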
OOM Networking with Kubernetes
------------------------------

- DNS
- Ports - Flattening the containers also exposes port conflicts between the
  containers, which need to be resolved.

Node Ports
~~~~~~~~~~

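As shown in the Service template section above, the node port exposed on every
cluster node is computed by concatenating the deployment wide
`global.nodePortPrefix` (e.g. `302`) with the two-digit `nodePort` declared
for each service port; a brief recap of that configuration:

.. code-block:: yaml

  # with global.nodePortPrefix: 302, this port is exposed on every
  # cluster node as 30288
  service:
    type: NodePort
    ports:
    - name: tcp-MyPort
      port: 5432
      nodePort: 88
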
Pod Placement Rules
-------------------
OOM will use the rich set of Kubernetes node and pod affinity /
anti-affinity rules to minimize the chance of a single failure resulting in a
loss of ONAP service. Node affinity / anti-affinity is used to guide the
Kubernetes orchestrator in the placement of pods on nodes (physical or virtual
machines). For example:

- if a container used Intel DPDK technology the pod may state that it has
  affinity to an Intel processor based node, or
- geographical based node labels (such as the Kubernetes standard zone or
  region labels) may be used to ensure placement of a DCAE complex close to the
  VNFs generating high volumes of traffic thus minimizing networking cost.
  Specifically, if nodes were pre-assigned labels East and West, the pod
  deployment spec to distribute pods to these nodes would be:

.. code-block:: yaml

  nodeSelector:
    failure-domain.beta.Kubernetes.io/region: {{ .Values.location }}

- "location: West" is specified in the `values.yaml` file used to deploy
  one DCAE cluster and "location: East" is specified in a second `values.yaml`
  file (see OOM Configuration Management for more information about
  configuration files like the `values.yaml` file).

Node affinity can also be used to achieve geographic redundancy if pods are
assigned to multiple failure domains. For more information refer to `Assigning
Pods to Nodes`_.

.. note::
  One could use Pod to Node assignment to totally constrain Kubernetes when
  doing initial container assignment to replicate the Amsterdam release
  OpenStack Heat based deployment. Should one wish to do this, each VM would
  need a unique node name which would be used to specify a node constraint
  for every component. These assignments could be specified in an environment
  specific values.yaml file. Constraining Kubernetes in this way is not
  recommended.

Kubernetes has a comprehensive system called Taints and Tolerations that can be
used to force the container orchestrator to repel pods from nodes based on
static events (an administrator assigning a taint to a node) or dynamic events
(such as a node becoming unreachable or running out of disk space). There are
no plans to use taints or tolerations in the ONAP Beijing release.
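As a generic illustration (not an ONAP chart excerpt), a node tainted with
`kubectl taint nodes node1 dedicated=onap:NoSchedule` would repel every pod
that does not carry a matching toleration such as:

.. code-block:: yaml

  # hypothetical toleration matching the "dedicated=onap" taint above
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "onap"
    effect: "NoSchedule"
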
Pod affinity / anti-affinity is the concept of creating a spatial relationship
between pods when the Kubernetes orchestrator does assignment (both initially
and in operation) to nodes as explained in Inter-pod affinity and
anti-affinity. For example, one might choose to co-locate all of the ONAP SDC
containers on a single node as they are not critical runtime components and
co-location minimizes overhead. On the other hand, one might choose to ensure
that all of the containers in an ODL cluster (SDNC and APPC) are placed on
separate nodes such that a node failure has minimal impact to the operation of
the cluster. An example of pod affinity / anti-affinity is shown below:

Pod Affinity / Anti-Affinity

.. code-block:: yaml

  apiVersion: v1
  kind: Pod
  metadata:
    name: with-pod-affinity
  spec:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S1
          topologyKey: failure-domain.beta.Kubernetes.io/zone
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: security
                operator: In
                values:
                - S2
            topologyKey: Kubernetes.io/hostname
    containers:
    - name: with-pod-affinity
      image: gcr.io/google_containers/pause:2.0

This example contains both podAffinity and podAntiAffinity rules, the first
rule is a must (requiredDuringSchedulingIgnoredDuringExecution) while the
second will be met pending other considerations
(preferredDuringSchedulingIgnoredDuringExecution).

Preemption
----------
Another feature that may assist in achieving a repeatable deployment in the
presence of faults that may have reduced the capacity of the cloud is
assigning priority to the containers such that mission critical components
have the ability to evict less critical components. Kubernetes provides this
capability with Pod Priority and Preemption. Prior to having more advanced
production grade features available, the ability to at least be able to
re-deploy ONAP (or a subset of) reliably provides a level of confidence that
should an outage occur the system can be brought back on-line predictably.

Health Checks
-------------

Monitoring of ONAP components is configured in the agents within JSON files and
stored in gerrit under the consul-agent-config; here is an example from the AAI
model loader (aai-model-loader-health.json):

.. code-block:: json

  {
    "service": {
      "name": "A&AI Model Loader",
      "checks": [
        {
          "id": "model-loader-process",
          "name": "Model Loader Presence",
          "script": "/consul/config/scripts/model-loader-script.sh",
          "interval": "15s",
          "timeout": "1s"
        }
      ]
    }
  }

Liveness Probes
---------------

These liveness probes can simply check that a port is available, that a
built-in health check is reporting good health, or that the Consul health check
is positive. For example, the following liveness probe, which monitors the SDNC
DB component, can be found in the SDNC DB deployment specification:

.. code-block:: yaml

  livenessProbe:
    exec:
      command: ["mysqladmin", "ping"]
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5

The `initialDelaySeconds` parameter controls the period of time between the
readiness probe succeeding and the liveness probe starting. `periodSeconds` and
`timeoutSeconds` control the actual operation of the probe. Note that
containers are inherently ephemeral so the healing action destroys failed
containers and any state information within them. To avoid a loss of state, a
persistent volume should be used to store all data that needs to be persisted
over the re-creation of a container. Persistent volumes have been created for
the database components of each of the projects and the same technique can be
used for all persistent state information.

Environment Files
~~~~~~~~~~~~~~~~~

MSB Integration
===============

The \ `Microservices Bus
Project <https://wiki.onap.org/pages/viewpage.action?pageId=3246982>`__ provides
facilities to integrate micro-services into ONAP and therefore needs to
integrate into OOM - primarily through Consul which is the backend of
MSB service discovery. The following is a brief description of how this
integration will be done:

A registrator is used to push the service endpoint info to MSB service
discovery.

- The needed service endpoint info is put into the kubernetes yaml file
  as annotation, including service name, Protocol, version, visual
  range, LB method, IP, Port, etc.

- OOM deploys/starts/restarts/scales in/scales out/upgrades ONAP components

- The registrator watches the kubernetes events

- When an ONAP component instance has been started/destroyed by OOM,
  the registrator gets the notification from kubernetes

- The registrator parses the service endpoint info from the annotation and
  registers/updates/unregisters it in MSB service discovery

- MSB API Gateway uses the service endpoint info for service routing
  and load balancing.

Details of the registration service API can be found at \ `Microservice
Bus API
Documentation <https://wiki.onap.org/display/DW/Microservice+Bus+API+Documentation>`__.

ONAP Component Registration to MSB
----------------------------------
The charts of all ONAP components intending to register against MSB must have
an annotation in their service(s) template. A `sdc` example follows:

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: sdc-be
    name: sdc-be
    namespace: "{{ .Values.nsPrefix }}"
    annotations:
      msb.onap.org/service-info: '[
        {
          "serviceName": "sdc",
          "version": "v1",
          "url": "/sdc/v1",
          "protocol": "REST",
          "port": "8080",
          "visualRange":"1"
        },
        {
          "serviceName": "sdc-deprecated",
          "version": "v1",
          "url": "/sdc/v1",
          "protocol": "REST",
          "port": "8080",
          "visualRange":"1",
          "path":"/sdc/v1"
        }
        ]'
  ...

MSB Integration with OOM
------------------------
A preliminary view of the OOM-MSB integration is as follows:

.. figure:: MSB-OOM-Diagram.png

A message sequence chart of the registration process:

.. uml::

  participant "OOM" as oom
  participant "ONAP Component" as onap
  participant "Service Discovery" as sd
  participant "External API Gateway" as eagw
  participant "Router (Internal API Gateway)" as iagw

  box "MSB" #LightBlue
    participant sd
    participant eagw
    participant iagw
  end box

  == Deploy Service ==

  oom -> onap: Deploy
  oom -> sd: Register service endpoints
  sd -> eagw: Services exposed to external system
  sd -> iagw: Services for internal use

  == Component Life-cycle Management ==

  oom -> onap: Start/Stop/Scale/Migrate/Upgrade
  oom -> sd: Update service info
  sd -> eagw: Update service info
  sd -> iagw: Update service info

  == Service Health Check ==

  sd -> onap: Check the health of service
  sd -> eagw: Update service status
  sd -> iagw: Update service status

MSB Deployment Instructions
---------------------------
MSB is a Helm-installable ONAP component which is often automatically deployed.
To install it individually enter::

  > helm install <repo-name>/msb

.. note::
  TBD: Validate if the following procedure is still required.

Please note that the Kubernetes authentication token must be set at
*kubernetes/kube2msb/values.yaml* so the kube2msb registrator can get
access to watch the kubernetes events and get service annotations via the
Kubernetes APIs. The token can be found in the kubectl configuration file
*~/.kube/config*.

More details can be found here `MSB installation <http://onap.readthedocs.io/en/latest/submodules/msb/apigateway.git/docs/platform/installation.html>`__.

.. MISC
.. ====
.. Note that although OOM uses Kubernetes facilities to minimize the effort
.. required of the ONAP component owners to implement a successful rolling
.. upgrade strategy there are other considerations that must be taken into
.. consideration.
.. For example, external APIs - both internal and external to ONAP - should be
.. designed to gracefully accept transactions from a peer at a different
.. software version to avoid deadlock situations. Embedded version codes in
.. messages may facilitate such capabilities.
..
.. Within each of the projects a new configuration repository contains all of
.. the project specific configuration artifacts. As changes are made within
.. the project, it's the responsibility of the project team to make appropriate
.. changes to the configuration data.