.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2018 Amdocs, Bell Canada

.. Links
.. _Helm: https://docs.helm.sh/
.. _Helm Charts: https://github.com/kubernetes/charts
.. _Kubernetes: https://Kubernetes.io/
.. _Docker: https://www.docker.com/
.. _Nexus: https://nexus.onap.org/#welcome
.. _AWS Elastic Block Store: https://aws.amazon.com/ebs/
.. _Azure File: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction
.. _GCE Persistent Disk: https://cloud.google.com/compute/docs/disks/
.. _Gluster FS: https://www.gluster.org/
.. _Kubernetes Storage Class: https://Kubernetes.io/docs/concepts/storage/storage-classes/
.. _Assigning Pods to Nodes: https://Kubernetes.io/docs/concepts/configuration/assign-pod-node/


.. _developer-guide-label:

OOM Developer Guide
###################

.. figure:: oomLogoV2-medium.png
   :align: right

ONAP consists of a large number of components, each of which is a substantial
project in its own right, which results in a high degree of complexity in
deployment and management. To cope with this complexity the ONAP Operations
Manager (OOM) uses a Helm_ model of ONAP - Helm being the primary management
system for Kubernetes_ container systems - to drive all user-driven life-cycle
management operations. The Helm model of ONAP is composed of a set of
hierarchical Helm charts that define the structure of the ONAP components and
the configuration of these components. These charts are fully parameterized
such that a single environment file defines all of the parameters needed to
deploy ONAP. A user of ONAP may maintain several such environment files to
control the deployment of ONAP in multiple environments such as development,
pre-production, and production.

The following sections describe how the ONAP Helm charts are constructed.

.. contents::
   :depth: 3
   :local:
..

Container Background
====================
Linux containers allow for an application and all of its operating system
dependencies to be packaged and deployed as a single unit without including a
guest operating system as done with virtual machines. The most popular
container solution is Docker_ which provides tools for container management
like the Docker Host (dockerd) which can create, run, stop, move, or delete a
container. Docker has a very popular registry of container images that can be
used by any Docker system; however, in the ONAP context, Docker images are
built by the standard CI/CD flow and stored in Nexus_ repositories. OOM uses
the "standard" ONAP Docker containers and three new ones specifically created
for OOM.

Containers are isolated from each other primarily via name spaces within the
Linux kernel without the need for multiple guest operating systems. As such,
multiple containers can be deployed with little overhead such that all of ONAP
can be deployed on a single host. With some optimization of the ONAP components
(e.g. elimination of redundant database instances) it may be possible to deploy
ONAP on a single laptop computer.

Helm Charts
===========
A Helm chart is a collection of files that describe a related set of Kubernetes
resources. A simple chart might be used to deploy something simple, like a
memcached pod, while a complex chart might contain many micro-services arranged
in a hierarchy as found in the `aai` ONAP component.

Charts are created as files laid out in a particular directory tree; they can
then be packaged into versioned archives to be deployed. There is a public
archive of `Helm Charts`_ on GitHub that includes many technologies applicable
to ONAP. Some of these charts have been used in ONAP and all of the ONAP charts
have been created following the guidelines provided.

The top level of the ONAP charts is shown below:

.. graphviz::

   digraph onap_top_chart {
      rankdir="LR";
      {
         node     [shape=folder]
         oValues  [label="values.yaml"]
         oChart   [label="Chart.yaml"]
         dev      [label="dev.yaml"]
         prod     [label="prod.yaml"]
         crb      [label="clusterrolebindings.yaml"]
         secrets  [label="secrets.yaml"]
      }
      {
         node [style=dashed]
         vCom     [label="component"]
      }

      onap -> oValues
      onap -> oChart
      onap -> templates
      onap -> resources
      oValues -> vCom
      resources -> environments
      environments -> dev
      environments -> prod
      templates -> crb
      templates -> secrets
   }

Within the `values.yaml` file at the `onap` level, one will find a set of
boolean values that control which of the ONAP components get deployed as shown
below:

.. code-block:: yaml

  aaf: # Application Authorization Framework
    enabled: false
  <...>
  so: # Service Orchestrator
    enabled: true

By setting these flags a custom deployment can be created and passed to Helm
during deployment with the `-f` option as follows::

  > helm install local/onap --name development -f dev.yaml

Note that there are one or more example deployment files in the
`onap/resources/environments/` directory. It is best practice to create a
unique deployment file for each environment used to ensure consistent
behaviour.
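
The structure of such an environment file mirrors the top level `values.yaml`
file. A minimal sketch is shown below; the exact keys and values are
illustrative only and should be checked against the current charts:

.. code-block:: yaml

  # dev.yaml - illustrative environment override file
  global:
    pullPolicy: Always     # always pull fresh development images
  aaf:
    enabled: false         # disable components not needed in this environment
  so:
    enabled: true
  robot:
    enabled: true
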

To aid in the long term supportability of ONAP, a set of common charts has
been created (and will be expanded in subsequent releases of ONAP) that can be
used by any of the ONAP components by including the common component in its
`requirements.yaml` file. The common components are arranged as follows:

.. graphviz::

   digraph onap_common_chart {
      rankdir="LR";
      {
         node      [shape=folder]
         mValues   [label="values.yaml"]
         ccValues  [label="values.yaml"]
         comValues [label="values.yaml"]
         comChart  [label="Chart.yaml"]
         ccChart   [label="Chart.yaml"]
         mChart    [label="Chart.yaml"]

         mReq      [label="requirements.yaml"]
         mService  [label="service.yaml"]
         mMap      [label="configmap.yaml"]
         ccName    [label="_name.tpl"]
         ccNS      [label="_namespace.tpl"]
      }
      {
         cCom      [label="common"]
         mTemp     [label="templates"]
         ccTemp    [label="templates"]
      }
      {
         more      [label="...",style=dashed]
      }

      common -> comValues
      common -> comChart
      common -> cCom
      common -> mysql
      common -> more

      cCom -> ccChart
      cCom -> ccValues
      cCom -> ccTemp
      ccTemp -> ccName
      ccTemp -> ccNS

      mysql -> mValues
      mysql -> mChart
      mysql -> mReq
      mysql -> mTemp
      mTemp -> mService
      mTemp -> mMap
   }

The common section of charts consists of a set of templates that assist with
parameter substitution (`_name.tpl` and `_namespace.tpl`) and a set of charts
for components used throughout ONAP. Initially `mysql` is in the common area
but this will expand to include other databases like `mariadb-galera`,
`postgres`, and `cassandra`. Other candidates for common components include
`redis` and `kafka`. When the common components are used by other charts they
are instantiated each time. In subsequent ONAP releases some of the common
components could be set up as services that are used by multiple ONAP
components, thus minimizing deployment and operational costs.

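
A component pulls in these common charts through its `requirements.yaml` file.
The following sketch shows the general form of such a declaration; the
repository alias and version constraints are illustrative only:

.. code-block:: yaml

  # requirements.yaml - illustrative dependency declarations
  dependencies:
    - name: common
      version: ~2.0.0
      repository: '@local'       # alias of the locally served chart repository
    - name: mysql
      version: ~2.0.0
      repository: '@local'
      condition: mysql.enabled   # only deployed when enabled in values.yaml
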
All of the ONAP components have charts that follow the pattern shown below:

.. graphviz::

   digraph onap_component_chart {
      rankdir="LR";
      {
         node      [shape=folder]
         cValues   [label="values.yaml"]
         cChart    [label="Chart.yaml"]
         cService  [label="service.yaml"]
         cMap      [label="configmap.yaml"]
         cFiles    [label="config file(s)"]
      }
      {
         cCharts   [label="charts"]
         cTemp     [label="templates"]
         cRes      [label="resources"]
      }
      {
         sCom      [label="component",style=dashed]
      }

      component -> cValues
      component -> cChart
      component -> cCharts
      component -> cTemp
      component -> cRes
      cTemp -> cService
      cTemp -> cMap
      cRes -> config
      config -> cFiles
      cCharts -> sCom
   }

Note that the component charts may include a hierarchy of components and can
themselves be quite complex.

Configuration of the components varies somewhat from component to component but
generally follows the pattern of one or more `configmap.yaml` files which can
directly provide configuration to the containers in addition to processing
configuration files stored in the `config` directory. It is the responsibility
of each ONAP component team to update these configuration files when changes
are made to the project containers that impact configuration.
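
A common pattern - sketched here with illustrative template names - is a
`configmap.yaml` that loads every file found under the chart's
`resources/config` directory and passes it through the Helm template engine so
that values can be substituted inside the configuration files themselves:

.. code-block:: yaml

  # templates/configmap.yaml - illustrative sketch of the config file pattern
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: {{ include "common.fullname" . }}-configmap
    namespace: {{ include "common.namespace" . }}
  data:
  {{ tpl (.Files.Glob "resources/config/*").AsConfig . | indent 2 }}
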

The following section describes how the hierarchical ONAP configuration system
is key to managing such a large system.

Configuration Management
========================

ONAP is a large system composed of many components - each of which is a complex
system in itself - that needs to be deployed in a number of different
ways. For example, within a single operator's network there may be R&D
deployments under active development, pre-production versions undergoing system
testing and production systems that are operating live networks. Each of these
deployments will differ in significant ways, such as the version of the
software images deployed. In addition, there may be a number of application
specific configuration differences, such as operating system environment
variables. The following describes how the Helm configuration management
system is used within the OOM project to manage both ONAP infrastructure
configuration as well as ONAP components configuration.

One of the artifacts that OOM/Kubernetes uses to deploy ONAP components is the
deployment specification, yet another YAML file. Within these deployment specs
are a number of parameters as shown in the following mariadb example:

.. code-block:: yaml

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: mariadb
  spec:
    <...>
    template:
      <...>
      spec:
        hostname: mariadb
        containers:
        - args:
          image: nexus3.onap.org:10001/mariadb:10.1.11
          name: "mariadb"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
            - name: MARIADB_MAJOR
              value: "10.1"
          <...>
        imagePullSecrets:
        - name: onap-docker-registry-key

Note that within the deployment specification, one of the container arguments
is the key/value pair `image: nexus3.onap.org:10001/mariadb:10.1.11` which
specifies the version of the mariadb software to deploy. Although the
deployment specifications greatly simplify deployment, maintenance of the
deployment specifications themselves becomes problematic as software versions
change over time or as different versions are required for different
deployments. For example, if the R&D team needs to deploy a newer version of
mariadb than what is currently used in the production environment, they would
need to clone the deployment specification and change this value. Fortunately,
this problem has been solved with the templating capabilities of Helm.

The following example shows how the deployment specifications are modified to
incorporate Helm templates such that key/value pairs can be defined outside of
the deployment specifications and passed during instantiation of the component.

.. code-block:: yaml

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: mariadb
    namespace: "{{ .Values.nsPrefix }}-mso"
  spec:
    <...>
    template:
      <...>
      spec:
        hostname: mariadb
        containers:
        - args:
          image: {{ .Values.image.mariadb }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          name: "mariadb"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
            - name: MARIADB_MAJOR
              value: "10.1"
          <...>
        imagePullSecrets:
        - name: "{{ .Values.nsPrefix }}-docker-registry-key"

This version of the deployment specification has gone through the process of
templating values that are likely to change between deployments. Note that the
image is now specified as `image: {{ .Values.image.mariadb }}` instead of the
hard-coded string used previously. During the deployment phase, Helm (actually
the Helm sub-component Tiller) substitutes the `{{ .. }}` entries with the
corresponding values defined in a `values.yaml` file. The content of this file
is as follows:

.. code-block:: yaml

  nsPrefix: onap
  pullPolicy: IfNotPresent
  image:
    readiness: oomk8s/readiness-check:1.0.0
    mso: nexus3.onap.org:10001/openecomp/mso:1.0-STAGING-latest
    mariadb: nexus3.onap.org:10001/mariadb:10.1.11

Within the values.yaml file there is an image section with the key/value pair
`mariadb: nexus3.onap.org:10001/mariadb:10.1.11` which is the same value used
in the non-templated version. Once all of the substitutions are complete, the
resulting deployment specification is ready to be used by Kubernetes.

Also note that in this example, the namespace key/value pair is specified in
the values.yaml file. This key/value pair will be global across the entire
ONAP deployment and is therefore a prime example of where configuration
hierarchy can be very useful.
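
To inspect the substituted result without deploying anything, or to override a
single value for one deployment, the standard Helm commands can be used. The
chart path and value names below are illustrative only (assuming the charts
are available locally)::

  > helm template onap/ > rendered.yaml
  > helm install local/onap --name development \
      --set image.mariadb=nexus3.onap.org:10001/mariadb:10.1.11
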

When creating a deployment template consider the use of default values if
appropriate. Helm templating has built-in support for default values; here is
an example:

.. code-block:: yaml

  imagePullSecrets:
  - name: "{{ .Values.nsPrefix | default "onap" }}-docker-registry-key"

The pipeline operator ("|") used here hints at the power of Helm templates in
that, much like an operating system command line, the pipeline operator allows
over 60 Helm functions to be embedded directly into the template (note that the
Helm template language is a superset of the Go template language). These
functions include simple string operations like `upper` and more complex flow
control operations like if/else.
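
A short, purely illustrative fragment - the value names are hypothetical -
showing a string function and flow control combined in one template:

.. code-block:: yaml

  # illustrative template fragment - value names are hypothetical
  metadata:
    labels:
      tier: {{ .Values.tier | default "backend" | upper }}
  {{- if .Values.persistence.enabled }}
  volumeMounts:
    - mountPath: /var/lib/mysql
      name: data
  {{- end }}
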


ONAP Application Configuration
------------------------------

Dependency Management
---------------------
The ONAP Helm charts describe the desired state of an ONAP deployment and
instruct the Kubernetes container manager as to how to maintain the deployment
in this state. The dependencies between components dictate the order in which
the containers are started for the first time such that these dependencies are
always met without arbitrary sleep times between container startups. For
example, the SDC back-end container requires the Elastic-Search, Cassandra and
Kibana containers within SDC to be ready and is also dependent on DMaaP (or the
message-router) to be ready - where ready implies the built-in "readiness"
probes have succeeded - before becoming fully operational. When an initial
deployment of ONAP is requested the current state of the system is NULL so ONAP
is deployed by the Kubernetes manager as a set of Docker containers on one or
more predetermined hosts. The hosts could be physical machines or virtual
machines. When deploying on virtual machines the resulting system will be very
similar to "Heat" based deployments, i.e. Docker containers running within a
set of VMs, the primary difference being that the allocation of containers to
VMs is done dynamically with OOM and statically with "Heat". The example SO
deployment descriptor file below shows SO's dependency on its mariadb database
component:

SO deployment specification excerpt:

.. code-block:: yaml

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: {{ include "common.name" . }}
    namespace: {{ include "common.namespace" . }}
    labels:
      app: {{ include "common.name" . }}
      chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
      release: {{ .Release.Name }}
      heritage: {{ .Release.Service }}
  spec:
    replicas: {{ .Values.replicaCount }}
    template:
      metadata:
        labels:
          app: {{ include "common.name" . }}
          release: {{ .Release.Name }}
      spec:
        initContainers:
        - command:
          - /root/ready.py
          args:
          - --container-name
          - so-mariadb
          env:
  ...

Kubernetes Container Orchestration
==================================
The ONAP components are managed by the Kubernetes_ container management system
which maintains the desired state of the container system as described by one
or more deployment descriptors - similar in concept to OpenStack HEAT
Orchestration Templates. The following sections describe the fundamental
objects managed by Kubernetes, the network these components use to communicate
with each other and other entities outside of ONAP and the templates that
describe the configuration and desired state of the ONAP components.

Name Spaces
-----------
Within the namespaces are Kubernetes services that provide external
connectivity to pods that host Docker containers.
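
For example, assuming a namespace named `onap`, the services and pods within it
can be listed with standard `kubectl` commands::

  > kubectl get services --namespace onap
  > kubectl get pods --namespace onap
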

ONAP Components to Kubernetes Object Relationships
--------------------------------------------------
Kubernetes deployments consist of multiple objects:

- **nodes** - a worker machine - either physical or virtual - that hosts
  multiple containers managed by Kubernetes.
- **services** - an abstraction of a logical set of pods that provide a
  micro-service.
- **pods** - one or more (but typically one) container(s) that provide specific
  application functionality.
- **persistent volumes** - one or more permanent volumes that hold
  non-ephemeral configuration and state data.

The relationship between these objects is shown in the following figure:

.. .. uml::
..
.. @startuml
..   node PH {
..     component Service {
..       component Pod0
..       component Pod1
..     }
..   }
..
..   database PV
.. @enduml

.. figure:: kubernetes_objects.png

OOM uses these Kubernetes objects as described in the following sections.

Nodes
~~~~~
OOM works with both physical and virtual worker machines.

* Virtual Machine Deployments - If ONAP is to be deployed onto a set of virtual
  machines, the creation of the VMs is outside of the scope of OOM and could be
  done in many ways, such as

  * manually, for example by a user using the OpenStack Horizon dashboard or
    AWS EC2, or
  * automatically, for example with the use of an OpenStack Heat Orchestration
    Template which builds an ONAP stack, an Azure ARM template, an AWS
    CloudFormation template, or
  * orchestrated, for example with Cloudify creating the VMs from a TOSCA
    template and controlling their life cycle for the life of the ONAP
    deployment.

* Physical Machine Deployments - If ONAP is to be deployed onto physical
  machines there are several options but the recommendation is to use Rancher
  along with Helm to associate hosts with a Kubernetes cluster.

Pods
~~~~
A group of containers with shared storage and networking can be grouped
together into a Kubernetes pod. All of the containers within a pod are
co-located and co-scheduled so they operate as a single unit. Within the ONAP
Amsterdam release, pods are mapped one-to-one to Docker containers although
this may change in the future. As explained in the Services section below, the
use of Pods within each ONAP component is abstracted from other ONAP
components.

Services
~~~~~~~~
OOM uses the Kubernetes service abstraction to provide a consistent access
point for each of the ONAP components independent of the pod or container
architecture of that component. For example, the SDNC component may introduce
OpenDaylight clustering at some point and change the number of pods in this
component to three or more, but this change will be isolated from the other
ONAP components by the service abstraction. A service can include a load
balancer on its ingress to distribute traffic between the pods and even react
to dynamic changes in the number of pods if they are part of a replica set.
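
A minimal sketch of such a service template - the component name and port
number are illustrative only - looks like:

.. code-block:: yaml

  # illustrative service template - name and port are examples only
  apiVersion: v1
  kind: Service
  metadata:
    name: sdnc
    labels:
      app: sdnc
  spec:
    type: ClusterIP
    ports:
      - port: 8282
        name: sdnc-restconf
    selector:
      app: sdnc        # traffic is distributed across all matching pods
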

Persistent Volumes
~~~~~~~~~~~~~~~~~~
To enable ONAP to be deployed into a wide variety of cloud infrastructures a
flexible persistent storage architecture, built on Kubernetes persistent
volumes, provides the ability to define the physical storage in a central
location and have all ONAP components securely store their data.

When deploying ONAP into a public cloud, available storage services such as
`AWS Elastic Block Store`_, `Azure File`_, or `GCE Persistent Disk`_ are
options. Alternatively, when deploying into a private cloud the storage
architecture might consist of Fibre Channel, `Gluster FS`_, or iSCSI. Many
other storage options exist; refer to the `Kubernetes Storage Class`_
documentation for a full list of the options. The storage architecture may vary
from deployment to deployment but in all cases a reliable, redundant storage
system must be provided to ONAP with which the state information of all ONAP
components will be securely stored. The Storage Class for a given deployment is
a single parameter listed in the ONAP values.yaml file and therefore is easily
customized. Operation of this storage system is outside the scope of OOM.

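The exact shape of this entry varies between releases; a sketch of what the
storage class selection might look like in `values.yaml` - the key and class
names are illustrative only - is:

.. code-block:: yaml

  # illustrative storage configuration - key and class names are examples only
  global:
    persistence:
      storageClass: glusterfs-storage
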

Once the storage class is selected and the physical storage is provided, the
ONAP deployment step creates a pool of persistent volumes within the given
physical storage that is used by all of the ONAP components. ONAP components
simply make a claim on these persistent volumes (PV), with a persistent volume
claim (PVC), to gain access to their storage.

The following figure illustrates the relationships between the persistent
volume claims, the persistent volumes, the storage class, and the physical
storage.

.. graphviz::

   digraph PV {
      label = "Persistent Volume Claim to Physical Storage Mapping"
      {
         node [shape=cylinder]
         D0 [label="Drive0"]
         D1 [label="Drive1"]
         Dx [label="Drivex"]
      }
      {
         node [shape=Mrecord label="StorageClass:ceph"]
         sc
      }
      {
         node [shape=point]
         p0 p1 p2
         p3 p4 p5
      }
      subgraph clusterSDC {
         label="SDC"
         PVC0
         PVC1
      }
      subgraph clusterSDNC {
         label="SDNC"
         PVC2
      }
      subgraph clusterSO {
         label="SO"
         PVCn
      }
      PV0 -> sc
      PV1 -> sc
      PV2 -> sc
      PVn -> sc

      sc -> {D0 D1 Dx}
      PVC0 -> PV0
      PVC1 -> PV1
      PVC2 -> PV2
      PVCn -> PVn

      # force all of these nodes to the same line in the given order
      subgraph {
         rank = same; PV0;PV1;PV2;PVn;p0;p1;p2
         PV0->PV1->PV2->p0->p1->p2->PVn [style=invis]
      }

      subgraph {
         rank = same; D0;D1;Dx;p3;p4;p5
         D0->D1->p3->p4->p5->Dx [style=invis]
      }

   }

In order for an ONAP component to use a persistent volume it must make a claim
against a specific persistent volume defined in the ONAP common charts. Note
that there is a one-to-one relationship between a PVC and PV. The following is
an excerpt from a component chart that defines a PVC:

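A sketch of such a PVC template follows; the template helper names, value keys
and sizes are illustrative only and the exact content should be taken from the
current common charts:

.. code-block:: yaml

  # illustrative PVC template - names, keys and sizes are examples only
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: {{ include "common.fullname" . }}
    namespace: {{ include "common.namespace" . }}
  spec:
    accessModes:
      - {{ .Values.persistence.accessMode }}
    resources:
      requests:
        storage: {{ .Values.persistence.size }}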

OOM Networking with Kubernetes
------------------------------

- DNS
- Ports - Flattening the containers also exposes port conflicts between the
  containers, which need to be resolved.

Node Ports
~~~~~~~~~~
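
A Kubernetes service of type NodePort exposes a service on a static port on
every node of the cluster; a minimal sketch (the service name and port numbers
are illustrative only) is:

.. code-block:: yaml

  # illustrative NodePort service - name and ports are examples only
  apiVersion: v1
  kind: Service
  metadata:
    name: portal-app
  spec:
    type: NodePort
    ports:
      - port: 8989
        nodePort: 30215   # reachable on every cluster node at this port
    selector:
      app: portal-app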

Pod Placement Rules
-------------------
OOM will use the rich set of Kubernetes node and pod affinity /
anti-affinity rules to minimize the chance of a single failure resulting in a
loss of ONAP service. Node affinity / anti-affinity is used to guide the
Kubernetes orchestrator in the placement of pods on nodes (physical or virtual
machines). For example:

- if a container uses Intel DPDK technology the pod may state that it has
  affinity to an Intel processor based node, or
- geographically based node labels (such as the Kubernetes standard zone or
  region labels) may be used to ensure placement of a DCAE complex close to the
  VNFs generating high volumes of traffic, thus minimizing networking cost.
  Specifically, if nodes were pre-assigned labels East and West, the pod
  deployment spec to distribute pods to these nodes would be:

.. code-block:: yaml

  nodeSelector:
    failure-domain.beta.kubernetes.io/region: {{ .Values.location }}

- "location: West" is specified in the `values.yaml` file used to deploy
  one DCAE cluster and "location: East" is specified in a second `values.yaml`
  file (see OOM Configuration Management for more information about
  configuration files like the `values.yaml` file).

Node affinity can also be used to achieve geographic redundancy if pods are
assigned to multiple failure domains. For more information refer to `Assigning
Pods to Nodes`_.

.. note::
   One could use Pod to Node assignment to totally constrain Kubernetes when
   doing initial container assignment to replicate the Amsterdam release
   OpenStack Heat based deployment. Should one wish to do this, each VM would
   need a unique node name which would be used to specify a node constraint
   for every component. These assignments could be specified in an
   environment-specific values.yaml file. Constraining Kubernetes in this way
   is not recommended.


Kubernetes has a comprehensive system called Taints and Tolerations that can be
used to force the container orchestrator to repel pods from nodes based on
static events (an administrator assigning a taint to a node) or dynamic events
(such as a node becoming unreachable or running out of disk space). There are
no plans to use taints or tolerations in the ONAP Beijing release. Pod
affinity / anti-affinity is the concept of creating a spatial relationship
between pods when the Kubernetes orchestrator does assignment (both initially
and in operation) to nodes as explained in Inter-pod affinity and
anti-affinity. For example, one might choose to co-locate all of the ONAP SDC
containers on a single node as they are not critical runtime components and
co-location minimizes overhead. On the other hand, one might choose to ensure
that all of the containers in an ODL cluster (SDNC and APPC) are placed on
separate nodes such that a node failure has minimal impact to the operation of
the cluster. An example of pod affinity / anti-affinity is shown below:

Pod Affinity / Anti-Affinity

.. code-block:: yaml

  apiVersion: v1
  kind: Pod
  metadata:
    name: with-pod-affinity
  spec:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S1
          topologyKey: failure-domain.beta.kubernetes.io/zone
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: security
                operator: In
                values:
                - S2
            topologyKey: kubernetes.io/hostname
    containers:
    - name: with-pod-affinity
      image: gcr.io/google_containers/pause:2.0

This example contains both podAffinity and podAntiAffinity rules; the first
rule is mandatory (requiredDuringSchedulingIgnoredDuringExecution) while the
second will be met pending other considerations
(preferredDuringSchedulingIgnoredDuringExecution).

Another feature that may assist in achieving a repeatable deployment in the
presence of faults that may have reduced the capacity of the cloud is assigning
priority to the containers such that mission-critical components have the
ability to evict less critical components. Kubernetes provides this capability
with Pod Priority and Preemption. Prior to having more advanced production
grade features available, the ability to at least re-deploy ONAP (or a subset
of it) reliably provides a level of confidence that should an outage occur the
system can be brought back on-line predictably.
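
A minimal sketch of how such a priority could be declared and referenced - the
class name and value are illustrative only and no such class is currently
defined by OOM - is:

.. code-block:: yaml

  # illustrative PriorityClass - name and value are examples only
  apiVersion: scheduling.k8s.io/v1beta1
  kind: PriorityClass
  metadata:
    name: onap-critical
  value: 1000000
  globalDefault: false
  description: "Priority class for mission critical ONAP components"

A pod would then opt in by setting `priorityClassName: onap-critical` in its
pod specification.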

Health Checks
-------------

Monitoring of ONAP components is configured in the agents within JSON files and
stored in Gerrit under the consul-agent-config; here is an example from the AAI
model loader (aai-model-loader-health.json):

.. code-block:: json

  {
    "service": {
      "name": "A&AI Model Loader",
      "checks": [
        {
          "id": "model-loader-process",
          "name": "Model Loader Presence",
          "script": "/consul/config/scripts/model-loader-script.sh",
          "interval": "15s",
          "timeout": "1s"
        }
      ]
    }
  }

Liveness Probes
---------------

Liveness probes can simply check that a port is available, that a built-in
health check is reporting good health, or that the Consul health check is
positive. For example, to monitor the SDNC component the following liveness
probe can be found in the SDNC DB deployment specification:

.. code-block:: yaml

  # sdnc db liveness probe
  livenessProbe:
    exec:
      command: ["mysqladmin", "ping"]
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5

The `initialDelaySeconds` parameter controls the delay between container
start-up and the first probe, while `periodSeconds` and `timeoutSeconds`
control the actual operation of the probe. Note that containers are inherently
ephemeral so the healing action destroys failed containers and any state
information within them. To avoid a loss of state, a persistent volume should
be used to store all data that needs to be persisted over the re-creation of a
container. Persistent volumes have been created for the database components of
each of the projects and the same technique can be used for all persistent
state information.


Environment Files
~~~~~~~~~~~~~~~~~

MSB Integration
===============

The `Microservices Bus
Project <https://wiki.onap.org/pages/viewpage.action?pageId=3246982>`__ provides
facilities to integrate micro-services into ONAP and therefore needs to
integrate into OOM - primarily through Consul which is the backend of
MSB service discovery. The following is a brief description of how this
integration will be done:

A registrator is used to push the service endpoint information to MSB service
discovery:

- The needed service endpoint information is put into the Kubernetes YAML file
  as an annotation, including service name, protocol, version, visual range,
  LB method, IP, port, etc.

- OOM deploys, starts, restarts, scales in/out, or upgrades ONAP components.

- The registrator watches the Kubernetes events.

- When an ONAP component instance has been started or destroyed by OOM,
  the registrator gets the notification from Kubernetes.

- The registrator parses the service endpoint information from the annotation
  and registers, updates, or unregisters it with MSB service discovery.

- The MSB API Gateway uses the service endpoint information for service routing
  and load balancing.

Details of the registration service API can be found at `Microservice
Bus API
Documentation <https://wiki.onap.org/display/DW/Microservice+Bus+API+Documentation>`__.

ONAP Component Registration to MSB
----------------------------------
The charts of all ONAP components intending to register against MSB must have
an annotation in their service(s) template. An `sdc` example follows:

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: sdc-be
    name: sdc-be
    namespace: "{{ .Values.nsPrefix }}"
    annotations:
      msb.onap.org/service-info: '[
        {
          "serviceName": "sdc",
          "version": "v1",
          "url": "/sdc/v1",
          "protocol": "REST",
          "port": "8080",
          "visualRange":"1"
        },
        {
          "serviceName": "sdc-deprecated",
          "version": "v1",
          "url": "/sdc/v1",
          "protocol": "REST",
          "port": "8080",
          "visualRange":"1",
          "path":"/sdc/v1"
        }
        ]'
  ...


MSB Integration with OOM
------------------------
A preliminary view of the OOM-MSB integration is as follows:

.. figure:: MSB-OOM-Diagram.png

A message sequence chart of the registration process:

.. uml::

  participant "OOM" as oom
  participant "ONAP Component" as onap
  participant "Service Discovery" as sd
  participant "External API Gateway" as eagw
  participant "Router (Internal API Gateway)" as iagw

  box "MSB" #LightBlue
    participant sd
    participant eagw
    participant iagw
  end box

  == Deploy Service ==

  oom -> onap: Deploy
  oom -> sd:   Register service endpoints
  sd -> eagw:  Services exposed to external system
  sd -> iagw:  Services for internal use

  == Component Life-cycle Management ==

  oom -> onap: Start/Stop/Scale/Migrate/Upgrade
  oom -> sd:   Update service info
  sd -> eagw:  Update service info
  sd -> iagw:  Update service info

  == Service Health Check ==

  sd -> onap: Check the health of service
  sd -> eagw: Update service status
  sd -> iagw: Update service status


MSB Deployment Instructions
---------------------------
MSB is a Helm-installable ONAP component which is often automatically deployed.
To install it individually, enter::

  > helm install <repo-name>/msb

.. note::
  TBD: Validate if the following procedure is still required.

Please note that the Kubernetes authentication token must be set at
*kubernetes/kube2msb/values.yaml* so the kube2msb registrator can get access to
watch the Kubernetes events and read service annotations through the Kubernetes
APIs. The token can be found in the kubectl configuration file *~/.kube/config*.

More details can be found here: `MSB installation <http://onap.readthedocs.io/en/latest/submodules/msb/apigateway.git/docs/platform/installation.html>`__.

.. MISC
.. ====
.. Note that although OOM uses Kubernetes facilities to minimize the effort
.. required of the ONAP component owners to implement a successful rolling upgrade
.. strategy there are other considerations that must be taken into consideration.
.. For example, external APIs - both internal and external to ONAP - should be
.. designed to gracefully accept transactions from a peer at a different software
.. version to avoid deadlock situations. Embedded version codes in messages may
.. facilitate such capabilities.
..
.. Within each of the projects a new configuration repository contains all of the
.. project specific configuration artifacts. As changes are made within the
.. project, it's the responsibility of the project team to make appropriate
.. changes to the configuration data.