.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2018 Amdocs, Bell Canada

.. Links
.. _Helm: https://docs.helm.sh/
.. _Kubernetes: https://kubernetes.io/
.. _Docker: https://www.docker.com/
.. _Nexus: https://nexus.onap.org/#welcome
.. _AWS Elastic Block Store: https://aws.amazon.com/ebs/
.. _Azure File: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction
.. _GCE Persistent Disk: https://cloud.google.com/compute/docs/disks/
.. _Gluster FS: https://www.gluster.org/
.. _Kubernetes Storage Class: https://kubernetes.io/docs/concepts/storage/storage-classes/
.. _Assigning Pods to Nodes: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

.. _developer-guide-label:

OOM Developer Guide
###################

.. figure:: oomLogoV2-medium.png
   :align: right

ONAP consists of a large number of components, each of which is a substantial
project in its own right, resulting in a high degree of complexity in
deployment and management. To cope with this complexity the ONAP Operations
Manager (OOM) uses a Helm_ model of ONAP - Helm being the primary management
system for Kubernetes_ container systems - to drive all user-driven life-cycle
management operations. The Helm model of ONAP is composed of a set of
hierarchical Helm charts that define the structure of the ONAP components and
the configuration of these components. These charts are fully parameterized
such that a single environment file defines all of the parameters needed to
deploy ONAP. A user of ONAP may maintain several such environment files to
control the deployment of ONAP in multiple environments such as development,
pre-production, and production.

The following sections describe how the ONAP Helm charts are constructed.

.. contents::
   :depth: 3
   :local:
..

Container Background
====================
Linux containers allow for an application and all of its operating system
dependencies to be packaged and deployed as a single unit without including a
guest operating system as done with virtual machines. The most popular
container solution is Docker_, which provides tools for container management
like the Docker daemon (dockerd) that can create, run, stop, move, or delete a
container. Docker has a very popular registry of container images that can be
used by any Docker system; however, in the ONAP context, Docker images are
built by the standard CI/CD flow and stored in Nexus_ repositories. OOM uses
the "standard" ONAP docker containers and three new ones specifically created
for OOM.

Containers are isolated from each other primarily via name spaces within the
Linux kernel without the need for multiple guest operating systems. As such,
multiple containers can be deployed with little overhead, such that all of
ONAP can be deployed on a single host. With some optimization of the ONAP
components (e.g. elimination of redundant database instances) it may be
possible to deploy ONAP on a single laptop computer.

Helm Charts
===========

Standard Chart Format
---------------------

Helm charts are available in the open-source community for a wide variety of
common software components, and these community charts are used within ONAP
wherever possible.

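A hedged illustration of the standard chart format follows: each chart is a
directory containing a Chart.yaml file of chart metadata, a values.yaml file
of default configuration values, and a templates directory of Kubernetes
resource templates. The chart name and field values below are assumptions for
illustration, not an excerpt from the OOM repository:

.. code-block:: yaml

  # Chart.yaml - illustrative chart metadata (Helm 2 era)
  apiVersion: v1
  name: so                # hypothetical ONAP component chart
  version: 2.0.0          # version of the chart itself, not of the application
  description: ONAP Service Orchestrator chart (illustrative)
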
Chart Hierarchy
---------------

Dependency Management
---------------------
These Helm charts describe the desired state
of an ONAP deployment and instruct the Kubernetes container manager as to how
to maintain the deployment in this state. These dependencies dictate the order
in which the containers are started for the first time such that these
dependencies are always met without arbitrary sleep times between container
startups. For example, the SDC back-end container requires the Elastic-Search,
Cassandra and Kibana containers within SDC to be ready and is also dependent on
DMaaP (or the message-router) to be ready - where ready implies the built-in
"readiness" probes succeeded - before becoming fully operational. When an
initial deployment of ONAP is requested the current state of the system is
NULL so ONAP is deployed by the Kubernetes manager as a set of Docker
containers on one or more predetermined hosts. The hosts could be physical
machines or virtual machines. When deploying on virtual machines the resulting
system will be very similar to "Heat" based deployments, i.e. Docker
containers running within a set of VMs, the primary difference being that the
allocation of containers to VMs is done dynamically with OOM and statically
with "Heat". The following example shows SO's dependency on its mariadb
database component:

SO deployment specification excerpt:

.. code-block:: yaml

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: {{ include "common.name" . }}
    namespace: {{ include "common.namespace" . }}
    labels:
      app: {{ include "common.name" . }}
      chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
      release: {{ .Release.Name }}
      heritage: {{ .Release.Service }}
  spec:
    replicas: {{ .Values.replicaCount }}
    template:
      metadata:
        labels:
          app: {{ include "common.name" . }}
          release: {{ .Release.Name }}
      spec:
        initContainers:
        - command:
          - /root/ready.py
          args:
          - --container-name
          - so-mariadb
          env:
          ...

Kubernetes Container Orchestration
==================================
The ONAP components are managed by the Kubernetes_ container management system
which maintains the desired state of the container system as described by one
or more deployment descriptors - similar in concept to OpenStack HEAT
Orchestration Templates. The following sections describe the fundamental
objects managed by Kubernetes, the network these components use to communicate
with each other and other entities outside of ONAP, and the templates that
describe the configuration and desired state of the ONAP components.

Name Spaces
-----------
Within the namespaces are Kubernetes services that provide external
connectivity to pods that host Docker containers.

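As a hedged sketch (the namespace name is an assumption for illustration), a
namespace is itself a Kubernetes object, and the cluster DNS scopes every
service created within it to that namespace:

.. code-block:: yaml

  apiVersion: v1
  kind: Namespace
  metadata:
    name: onap    # hypothetical namespace holding one ONAP deployment

A service named so created in this namespace would then be resolvable inside
the cluster as so.onap.svc.cluster.local.
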
ONAP Components to Kubernetes Object Relationships
--------------------------------------------------
Kubernetes deployments consist of multiple objects:

- **nodes** - a worker machine - either physical or virtual - that hosts
  multiple containers managed by Kubernetes.
- **services** - an abstraction of a logical set of pods that provide a
  micro-service.
- **pods** - one or more (but typically one) container(s) that provide specific
  application functionality.
- **persistent volumes** - one or more permanent volumes need to be established
  to hold non-ephemeral configuration and state data.

The relationship between these objects is shown in the following figure:

.. .. uml::
..
..   @startuml
..   node PH {
..     component Service {
..       component Pod0
..       component Pod1
..     }
..   }
..
..   database PV
..   @enduml

.. figure:: kubernetes_objects.png

OOM uses these Kubernetes objects as described in the following sections.

Nodes
~~~~~
OOM works with both physical and virtual worker machines.

* Virtual Machine Deployments - If ONAP is to be deployed onto a set of
  virtual machines, the creation of the VMs is outside of the scope of OOM and
  could be done in many ways, such as

  * manually, for example by a user using the OpenStack Horizon dashboard or
    AWS EC2, or
  * automatically, for example with the use of an OpenStack Heat Orchestration
    Template which builds an ONAP stack, an Azure ARM template, or an AWS
    CloudFormation template, or
  * orchestrated, for example with Cloudify creating the VMs from a TOSCA
    template and controlling their life cycle for the life of the ONAP
    deployment.

* Physical Machine Deployments - If ONAP is to be deployed onto physical
  machines there are several options, but the recommendation is to use Rancher
  along with Helm to associate hosts with a Kubernetes cluster.

Pods
~~~~
A group of containers with shared storage and networking can be grouped
together into a Kubernetes pod. All of the containers within a pod are
co-located and co-scheduled so they operate as a single unit. Within the ONAP
Amsterdam release, pods are mapped one-to-one to docker containers although
this may change in the future. As explained in the Services section below, the
use of Pods within each ONAP component is abstracted from other ONAP
components.

Services
~~~~~~~~
OOM uses the Kubernetes service abstraction to provide a consistent access
point for each of the ONAP components independent of the pod or container
architecture of that component. For example, the SDNC component may introduce
OpenDaylight clustering at some point and change the number of pods in this
component to three or more, but this change will be isolated from the other
ONAP components by the service abstraction. A service can include a load
balancer on its ingress to distribute traffic between the pods and even react
to dynamic changes in the number of pods if they are part of a replica set.

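A minimal sketch of such a service definition, assuming an illustrative
component name and port, shows how the label selector decouples consumers from
the number of pods behind the service:

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: sdnc        # hypothetical component service
    namespace: onap
  spec:
    selector:
      app: sdnc       # every pod carrying this label backs the service,
                      # whether one pod or an ODL cluster of three
    ports:
    - name: http
      port: 8282      # stable port used by consumers, independent of pod changes
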
Persistent Volumes
~~~~~~~~~~~~~~~~~~
To enable ONAP to be deployed into a wide variety of cloud infrastructures a
flexible persistent storage architecture, built on Kubernetes persistent
volumes, provides the ability to define the physical storage in a central
location and have all ONAP components securely store their data.

When deploying ONAP into a public cloud, available storage services such as
`AWS Elastic Block Store`_, `Azure File`_, or `GCE Persistent Disk`_ are
options. Alternatively, when deploying into a private cloud the storage
architecture might consist of Fiber Channel, `Gluster FS`_, or iSCSI. Many
other storage options exist; refer to the `Kubernetes Storage Class`_
documentation for a full list of the options. The storage architecture may
vary from deployment to deployment but in all cases a reliable, redundant
storage system must be provided to ONAP with which the state information of
all ONAP components will be securely stored. The Storage Class for a given
deployment is a single parameter listed in the ONAP values.yaml file and
therefore is easily customized. Operation of this storage system is outside
the scope of OOM.

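A hedged sketch of such a storage block in a values.yaml file is shown below;
the key names and values are assumptions for illustration and should be
checked against the actual OOM charts:

.. code-block:: yaml

  # illustrative values.yaml excerpt - key names are assumptions
  global:
    persistence:
      storageClass: "glusterfs"   # selects the backing storage technology
      size: 2Gi                   # default size requested by component PVCs
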
Once the storage class is selected and the physical storage is provided, the
ONAP deployment step creates a pool of persistent volumes within the given
physical storage that is used by all of the ONAP components. ONAP components
simply make a claim on these persistent volumes (PV), with a persistent volume
claim (PVC), to gain access to their storage.

The following figure illustrates the relationships between the persistent
volume claims, the persistent volumes, the storage class, and the physical
storage.

.. graphviz::

   digraph PV {
      label = "Persistent Volume Claim to Physical Storage Mapping"
      {
         node [shape=cylinder]
         D0 [label="Drive0"]
         D1 [label="Drive1"]
         Dx [label="Drivex"]
      }
      {
         node [shape=Mrecord label="StorageClass:ceph"]
         sc
      }
      {
         node [shape=point]
         p0 p1 p2
         p3 p4 p5
      }
      subgraph clusterSDC {
         label="SDC"
         PVC0
         PVC1
      }
      subgraph clusterSDNC {
         label="SDNC"
         PVC2
      }
      subgraph clusterSO {
         label="SO"
         PVCn
      }
      PV0 -> sc
      PV1 -> sc
      PV2 -> sc
      PVn -> sc

      sc -> {D0 D1 Dx}
      PVC0 -> PV0
      PVC1 -> PV1
      PVC2 -> PV2
      PVCn -> PVn

      // force all of these nodes to the same line in the given order
      subgraph {
         rank = same; PV0;PV1;PV2;PVn;p0;p1;p2
         PV0->PV1->PV2->p0->p1->p2->PVn [style=invis]
      }

      subgraph {
         rank = same; D0;D1;Dx;p3;p4;p5
         D0->D1->p3->p4->p5->Dx [style=invis]
      }

   }

In order for an ONAP component to use a persistent volume it must make a claim
against a specific persistent volume defined in the ONAP common charts. Note
that there is a one-to-one relationship between a PVC and PV. The following is
an excerpt from a component chart that defines a PVC:

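A hedged sketch of such a PVC definition follows; the names, size, and value
keys are assumptions for illustration rather than an excerpt from the actual
charts:

.. code-block:: yaml

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: sdnc-db-data       # hypothetical claim name
    namespace: onap
  spec:
    accessModes:
    - ReadWriteOnce          # volume mounted read-write by a single node
    storageClassName: "{{ .Values.global.persistence.storageClass }}"  # assumed key
    resources:
      requests:
        storage: 2Gi         # illustrative size
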
OOM Networking with Kubernetes
------------------------------

- DNS
- Ports - Flattening the containers also exposes port conflicts between the
  containers which need to be resolved.

Node Ports
~~~~~~~~~~
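As a hedged illustration of the mechanism (the component name and port numbers
are assumptions), a service of type NodePort exposes a component on a
well-known port of every Kubernetes worker node:

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: sdnc-portal     # hypothetical externally facing service
    namespace: onap
  spec:
    type: NodePort
    selector:
      app: sdnc-portal
    ports:
    - port: 8843          # port used inside the cluster
      nodePort: 30843     # fixed port opened on every node (30000-32767 range)
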
Pod Placement Rules
-------------------
OOM will use the rich set of Kubernetes node and pod affinity /
anti-affinity rules to minimize the chance of a single failure resulting in a
loss of ONAP service. Node affinity / anti-affinity is used to guide the
Kubernetes orchestrator in the placement of pods on nodes (physical or virtual
machines). For example:

- if a container used Intel DPDK technology the pod may state that it has
  affinity to an Intel processor based node, or
- geographical based node labels (such as the Kubernetes standard zone or
  region labels) may be used to ensure placement of a DCAE complex close to the
  VNFs generating high volumes of traffic, thus minimizing networking cost.
  Specifically, if nodes were pre-assigned labels East and West, the pod
  deployment spec to distribute pods to these nodes would be:

.. code-block:: yaml

  nodeSelector:
    failure-domain.beta.kubernetes.io/region: {{ .Values.location }}

331- "location: West" is specified in the values.yaml file used to deploy
332 one DCAE cluster and "location: East" is specified in a second values.yaml
333 file (see OOM Configuration Management for more information about
334 configuration files like the values.yamlfile).
335
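A minimal sketch of the two values files, assuming they contain only the
location key referenced by the nodeSelector above:

.. code-block:: yaml

  # values-west.yaml (illustrative file name)
  location: West

  # values-east.yaml (illustrative file name)
  location: East
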
Node affinity can also be used to achieve geographic redundancy if pods are
assigned to multiple failure domains. For more information refer to `Assigning
Pods to Nodes`_.

.. note::
   One could use Pod to Node assignment to totally constrain Kubernetes when
   doing initial container assignment to replicate the Amsterdam release
   OpenStack Heat based deployment. Should one wish to do this, each VM would
   need a unique node name which would be used to specify a node constraint
   for every component. These assignments could be specified in an environment
   specific values.yaml file. Constraining Kubernetes in this way is not
   recommended.

Kubernetes has a comprehensive system called Taints and Tolerations that can be
used to force the container orchestrator to repel pods from nodes based on
static events (an administrator assigning a taint to a node) or dynamic events
(such as a node becoming unreachable or running out of disk space). There are
no plans to use taints or tolerations in the ONAP Beijing release. Pod
affinity / anti-affinity is the concept of creating a spatial relationship
between pods when the Kubernetes orchestrator does assignment (both initially
and in operation) to nodes as explained in Inter-pod affinity and
anti-affinity. For example, one might choose to co-locate all of the ONAP SDC
containers on a single node as they are not critical runtime components and
co-location minimizes overhead. On the other hand, one might choose to ensure
that all of the containers in an ODL cluster (SDNC and APPC) are placed on
separate nodes such that a node failure has minimal impact to the operation of
the cluster. An example of pod affinity / anti-affinity is shown below:

Pod Affinity / Anti-Affinity

.. code-block:: yaml

  apiVersion: v1
  kind: Pod
  metadata:
    name: with-pod-affinity
  spec:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S1
          topologyKey: failure-domain.beta.kubernetes.io/zone
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: security
                operator: In
                values:
                - S2
            topologyKey: kubernetes.io/hostname
    containers:
    - name: with-pod-affinity
      image: gcr.io/google_containers/pause:2.0

This example contains both podAffinity and podAntiAffinity rules, the first
rule is a must (requiredDuringSchedulingIgnoredDuringExecution) while the
second will be met pending other considerations
(preferredDuringSchedulingIgnoredDuringExecution).

Another feature that may assist in achieving a repeatable deployment in the
presence of faults that may have reduced the capacity of the cloud is
assigning priority to the containers such that mission critical components
have the ability to evict less critical components. Kubernetes provides this
capability with Pod Priority and Preemption. Prior to having more advanced
production grade features available, the ability to at least be able to
re-deploy ONAP (or a subset of it) reliably provides a level of confidence
that should an outage occur the system can be brought back on-line predictably.

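As a hedged sketch of this mechanism (the class name and value are
illustrative, and the PriorityClass API group has changed across Kubernetes
releases), a priority class is defined once and then referenced by name from a
pod specification:

.. code-block:: yaml

  apiVersion: scheduling.k8s.io/v1beta1   # check the API group for your release
  kind: PriorityClass
  metadata:
    name: onap-critical      # hypothetical class name
  value: 1000000             # pods of higher value may preempt lower-value pods
  description: "Priority for mission critical ONAP components (illustrative)"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: with-priority
  spec:
    priorityClassName: onap-critical   # reference the class by name
    containers:
    - name: app
      image: gcr.io/google_containers/pause:2.0
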
Health Checks
-------------

Monitoring of ONAP components is configured in the agents within JSON files
and stored in gerrit under the consul-agent-config; here is an example from
the AAI model loader (aai-model-loader-health.json):

.. code-block:: json

  {
    "service": {
      "name": "A&AI Model Loader",
      "checks": [
        {
          "id": "model-loader-process",
          "name": "Model Loader Presence",
          "script": "/consul/config/scripts/model-loader-script.sh",
          "interval": "15s",
          "timeout": "1s"
        }
      ]
    }
  }

Liveness Probes
---------------

These liveness probes can simply check that a port is available, that a
built-in health check is reporting good health, or that the Consul health
check is positive. For example, the following liveness probe, which monitors
the SDNC DB component, can be found in the SDNC DB deployment specification:

.. code-block:: yaml

  livenessProbe:
    exec:
      command: ["mysqladmin", "ping"]
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5

The 'initialDelaySeconds' parameter controls the period of time between the
readiness probe succeeding and the liveness probe starting. 'periodSeconds'
and 'timeoutSeconds' control the actual operation of the probe. Note that
containers are inherently ephemeral so the healing action destroys failed
containers and any state information within them. To avoid a loss of state, a
persistent volume should be used to store all data that needs to be persisted
over the re-creation of a container. Persistent volumes have been created for
the database components of each of the projects and the same technique can be
used for all persistent state information.

Configuration Management
========================

ONAP is a large system composed of many components - each of which is a
complex system in itself - that needs to be deployed in a number of different
ways. For example, within a single operator's network there may be R&D
deployments under active development, pre-production versions undergoing
system testing and production systems that are operating live networks. Each
of these deployments will differ in significant ways, such as the version of
the software images deployed. In addition, there may be a number of
application specific configuration differences, such as operating system
environment variables. The following describes how the Helm configuration
management system is used within the OOM project to manage both ONAP
infrastructure configuration as well as ONAP component configuration.

One of the artifacts that OOM/Kubernetes uses to deploy ONAP components is the
deployment specification, yet another yaml file. Within these deployment specs
are a number of parameters as shown in the following mariadb example:

.. code-block:: yaml

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: mariadb
  spec:
    <...>
    template:
      <...>
      spec:
        hostname: mariadb
        containers:
        - args:
          image: nexus3.onap.org:10001/mariadb:10.1.11
          name: "mariadb"
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: password
          - name: MARIADB_MAJOR
            value: "10.1"
        <...>
        imagePullSecrets:
        - name: onap-docker-registry-key

Note that within the deployment specification, one of the container arguments
is the key/value pair image: nexus3.onap.org:10001/mariadb:10.1.11 which
specifies the version of the mariadb software to deploy. Although the
deployment specifications greatly simplify deployment, maintenance of the
deployment specifications themselves becomes problematic as software versions
change over time or as different versions are required for different
deployments. For example, if the R&D team needs to deploy a newer version of
mariadb than what is currently used in the production environment, they would
need to clone the deployment specification and change this value. Fortunately,
this problem has been solved with the templating capabilities of Helm.

The following example shows how the deployment specifications are modified to
incorporate Helm templates such that key/value pairs can be defined outside of
the deployment specifications and passed during instantiation of the component.

.. code-block:: yaml

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: mariadb
    namespace: "{{ .Values.nsPrefix }}-mso"
  spec:
    <...>
    template:
      <...>
      spec:
        hostname: mariadb
        containers:
        - args:
          image: {{ .Values.image.mariadb }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          name: "mariadb"
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: password
          - name: MARIADB_MAJOR
            value: "10.1"
        <...>
        imagePullSecrets:
        - name: "{{ .Values.nsPrefix }}-docker-registry-key"

This version of the deployment specification has gone through the process of
templating values that are likely to change between deployments. Note that the
image is now specified as: image: {{ .Values.image.mariadb }} instead of the
static string used previously. During the deployment phase, Helm (actually the
Helm sub-component Tiller) substitutes the {{ .. }} entries with a variable
defined in a values.yaml file. The content of this file is as follows:

.. code-block:: yaml

  nsPrefix: onap
  pullPolicy: IfNotPresent
  image:
    readiness: oomk8s/readiness-check:1.0.0
    mso: nexus3.onap.org:10001/openecomp/mso:1.0-STAGING-latest
    mariadb: nexus3.onap.org:10001/mariadb:10.1.11

Within the values.yaml file there is an image section with the key/value pair
mariadb: nexus3.onap.org:10001/mariadb:10.1.11 which is the same value used in
the non-templated version. Once all of the substitutions are complete, the
resulting deployment specification is ready to be used by Kubernetes.

Also note that in this example, the namespace key/value pair is specified in
the values.yaml file. This key/value pair will be global across the entire
ONAP deployment and is therefore a prime example of where configuration
hierarchy can be very useful.

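A hedged sketch of how such a hierarchy can be expressed with Helm's global
values is shown below; the chart and key names are assumptions for
illustration:

.. code-block:: yaml

  # values.yaml of a hypothetical ONAP parent chart
  global:                    # visible to the parent chart and all subcharts
    nsPrefix: onap
    pullPolicy: IfNotPresent
  so:                        # overrides scoped to the 'so' subchart only
    replicaCount: 1

A subchart template can then reference {{ .Values.global.nsPrefix }} while
chart-specific keys such as replicaCount stay local to that chart.
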
When creating a deployment template consider the use of default values if
appropriate. Helm templating has built-in support for default values; here is
an example:

.. code-block:: yaml

  imagePullSecrets:
  - name: "{{ .Values.nsPrefix | default "onap" }}-docker-registry-key"

The pipeline operator ("|") used here hints at the power of Helm templates:
much like an operating system command line, the pipeline operator allows over
60 Helm functions to be embedded directly into the template (note that the
Helm template language is a superset of the Go template language). These
functions include simple string operations like upper and more complex flow
control operations like if/else.

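A short hedged sketch combining several of these functions (the value names
are assumptions for illustration):

.. code-block:: yaml

  metadata:
    labels:
      # string functions: fall back to a default, then upper-case the result
      tier: "{{ .Values.tier | default "backend" | upper }}"
  {{- if .Values.enableMetrics }}
    annotations:
      prometheus.io/scrape: "true"   # emitted only when enableMetrics is true
  {{- end }}
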
ONAP Application Configuration
------------------------------

Environment Files
~~~~~~~~~~~~~~~~~

MSB Integration
===============
.. MISC
.. ====
.. Note that although OOM uses Kubernetes facilities to minimize the effort
.. required of the ONAP component owners to implement a successful rolling
.. upgrade strategy there are other considerations that must be taken into
.. account.
.. For example, external APIs - both internal and external to ONAP - should be
.. designed to gracefully accept transactions from a peer at a different
.. software version to avoid deadlock situations. Embedded version codes in
.. messages may facilitate such capabilities.
..
.. Within each of the projects a new configuration repository contains all of
.. the project specific configuration artifacts. As changes are made within
.. the project, it's the responsibility of the project team to make appropriate
.. changes to the configuration data.
637.. changes to the configuration data.