.. This work is licensed under a Creative Commons Attribution 4.0 International License.

OOM User Guide
##############
.. contents::
   :depth: 3
..
Introduction
============

The ONAP Operations Manager (OOM) is responsible for life-cycle
management of the ONAP platform itself; components such as MSO, SDNC,
etc. It is not responsible for the management of services, VNFs or
infrastructure instantiated by ONAP or used by ONAP to host such
services or VNFs. OOM uses the open-source Kubernetes container
management system as a means to manage the Docker containers that
compose ONAP, where the containers are hosted either directly on
bare-metal servers or on VMs hosted by a 3rd party management system.
OOM ensures that ONAP is easily deployable and maintainable throughout
its life cycle while using hardware resources efficiently. There are two
deployment options for OOM:

- A minimal deployment where single instances of the ONAP components
  are instantiated with no resource reservations, and

- | A production deployment where ONAP components are deployed with
    redundancy and anti-affinity rules such that single faults do not
    interrupt ONAP operation.
  | When deployed as containers directly on bare-metal, the minimal
    deployment option requires a single host (32GB memory with 12
    vCPUs); however, further optimization should allow this deployment
    to target a laptop computer. Production deployments will require
    more resources, as determined by anti-affinity and geo-redundancy
    requirements.

OOM deployments of ONAP provide many benefits:

- Life-cycle Management - Kubernetes is a comprehensive system for
  managing the life-cycle of containerized applications. Its use as a
  platform manager will ease the deployment of ONAP, provide fault
  tolerance and horizontal scalability, and enable seamless upgrades.

- Hardware Efficiency - ONAP can be deployed on a single host using
  less than 32GB of memory. As opposed to VMs, which require that a
  guest operating system be deployed along with the application,
  containers provide similar application encapsulation with neither
  the computing, memory and storage overhead nor the associated
  long-term support costs of those guest operating systems. An
  informal goal of the project is to be able to create a development
  deployment of ONAP that can be hosted on a laptop.

- Rapid Deployment - With locally cached images, ONAP can be deployed
  from scratch in 7 minutes. Eliminating the guest operating system
  results in containers coming into service much faster than a VM
  equivalent. This advantage can be particularly useful for ONAP,
  where rapid reaction to inevitable failures will be critical in
  production environments.

- Portability - OOM takes advantage of Kubernetes' ability to be
  hosted on multiple hosted cloud solutions like Google Compute
  Engine, AWS EC2, Microsoft Azure, CenturyLink Cloud, IBM Bluemix
  and more.

- Minimal Impact - As ONAP is already deployed with Docker containers,
  minimal changes are required to the components themselves when
  deployed with OOM.

Features of OOM:

- Platform Deployment - Automated deployment/un-deployment of ONAP
  instance(s); automated deployment/un-deployment of individual
  platform components using Docker containers & Kubernetes

- Platform Monitoring & Healing - Monitor platform state, platform
  health checks, fault tolerance and self-healing using Docker
  containers & Kubernetes

- Platform Scaling - Platform horizontal scalability through the use
  of Docker containers & Kubernetes

- Platform Upgrades - Platform upgrades using Docker containers &
  Kubernetes

- Platform Configurations - Manage overall platform component
  configurations using Docker containers & Kubernetes

- | Platform Migrations - Manage migration of platform components
    using Docker containers & Kubernetes
  | Please note that the ONAP Operations Manager does not provide
    support for containerization of services or VNFs that are managed
    by ONAP; OOM orchestrates the life-cycle of the ONAP platform
    components themselves.

Container Background
--------------------

Linux containers allow for an application and all of its operating
system dependencies to be packaged and deployed as a single unit,
without including a guest operating system as done with virtual
machines. The most popular container solution
is `Docker <https://www.docker.com/>`__, which provides tools for
container management like the Docker Host (dockerd) which can create,
run, stop, move, or delete a container. Docker has a very popular
registry of container images that can be used by any Docker system;
however, in the ONAP context, Docker images are built by the standard
CI/CD flow and stored
in `Nexus <https://nexus.onap.org/#welcome>`__ repositories. OOM uses
the "standard" ONAP Docker containers and three new ones specifically
created for OOM.

Containers are isolated from each other primarily via namespaces
within the Linux kernel, without the need for multiple guest operating
systems. As such, multiple containers can be deployed with so little
overhead that all of ONAP can be deployed on a single host. With some
optimization of the ONAP components (e.g. elimination of redundant
database instances) it may be possible to deploy ONAP on a single
laptop computer.

Life Cycle Management via Kubernetes
====================================

As with the VNFs deployed by ONAP, the components of ONAP have their
own life-cycle where the components are created, run, healed, scaled,
stopped and deleted. These life-cycle operations are managed by
the `Kubernetes <https://kubernetes.io/>`__ container management
system, which maintains the desired state of the container system as
described by one or more deployment descriptors - similar in concept
to OpenStack Heat Orchestration Templates. The following sections
describe the fundamental objects managed by Kubernetes, the network
these components use to communicate with each other and with entities
outside of ONAP, and the templates that describe the configuration and
desired state of the ONAP components.

ONAP Components to Kubernetes Object Relationships
--------------------------------------------------

Kubernetes deployments consist of multiple objects:

- nodes - a worker machine - either physical or virtual - that hosts
  multiple containers managed by Kubernetes.

- services - an abstraction of a logical set of pods that provide a
  micro-service.

- pods - one or more (but typically one) container(s) that provide
  specific application functionality.

- persistent volumes - one or more permanent volumes that hold
  non-ephemeral configuration and state data.

The relationship between these objects is shown in the following
figure:

.. figure:: ../kubernetes_objects.png

OOM uses these Kubernetes objects as described in the following
sections.
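
Once a deployment is up, each of these objects can be inspected
directly with kubectl. The commands below are a sketch only - the
``onap-mso`` namespace name assumes the default ``onap`` namespace
prefix described later in this guide::

  # list the worker nodes in the cluster
  kubectl get nodes

  # list the services and pods in one ONAP component's namespace
  kubectl --namespace onap-mso get services,pods

  # persistent volumes are cluster-scoped rather than namespaced
  kubectl get persistentvolumes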

Nodes
~~~~~

OOM works with both physical and virtual worker machines.

- Virtual Machine Deployments - If ONAP is to be deployed onto a set
  of virtual machines, the creation of the VMs is outside of the scope
  of OOM and could be done in many ways, such as:

  - manually, for example by a user using the OpenStack Horizon
    dashboard or `AWS
    EC2 <https://wiki.onap.org/display/DW/ONAP+on+AWS#ONAPonAWS-Option0:DeployOOMKubernetestoaspotVM>`__,
    or

  - automatically, for example with the use of an OpenStack Heat
    Orchestration Template which builds an ONAP stack, or

  - orchestrated, for example with Cloudify creating the VMs from a
    TOSCA template and controlling their life cycle for the life of
    the ONAP deployment.

- Physical Machine Deployments - If ONAP is to be deployed onto
  physical machines there are several options, but the recommendation
  is to use
  `Rancher <http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/>`__
  along with `Helm <https://github.com/kubernetes/helm/releases>`__ to
  associate hosts with a kubernetes cluster.

Pods
~~~~

A group of containers with shared storage and networking can be
grouped together into a kubernetes pod. All of the containers within a
pod are co-located and co-scheduled so they operate as a single unit.
Within the ONAP Amsterdam release, pods are mapped one-to-one to Docker
containers, although this may change in the future. As explained in
the Services section below, the use of pods within each ONAP component
is abstracted from other ONAP components.

Services
~~~~~~~~

OOM uses the kubernetes service abstraction to provide a consistent
access point for each of the ONAP components, independent of the pod
or container architecture of that component. For example, the SDNC
component may introduce OpenDaylight clustering at some point and
change the number of pods in this component to three or more, but this
change will be isolated from the other ONAP components by the service
abstraction. A service can include a load balancer on its ingress to
distribute traffic between the pods and even react to dynamic changes
in the number of pods if they are part of a replica set (see the MSO
example below for a brief explanation of replica sets).
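
The pods currently sitting behind a service can be listed with
kubectl; the sdnc service in the onap-sdnc namespace is used here
purely as an illustration (the namespace name depends on the prefix
chosen at deployment time)::

  # show the cluster IP, exposed ports and pod endpoints of a service
  kubectl --namespace onap-sdnc describe service sdnc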

Persistent Volumes
~~~~~~~~~~~~~~~~~~

As pods and containers are ephemeral, any data that must be persisted
across pod restart events needs to be stored outside of the pod in one
or more persistent volumes. Kubernetes supports a wide variety of
types of persistent volumes such as: Fibre Channel, NFS, iSCSI,
CephFS, and GlusterFS (for a full list look
`here <https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes>`__),
so there are many options as to how storage is configured when
deploying ONAP via OOM.
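
The storage of a running deployment can be inspected as follows; this
is a sketch that again assumes an ``onap`` namespace prefix::

  # list the persistent volume claims made by the mso component and
  # whether each claim is bound to a persistent volume
  kubectl --namespace onap-mso get persistentvolumeclaims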

OOM Networking with Kubernetes
------------------------------

- DNS

- Ports - Flattening the containers also exposes port conflicts
  between the containers, which need to be resolved.

Name Spaces
~~~~~~~~~~~

Within the namespaces are Kubernetes services that provide external
connectivity to pods that host Docker containers. The following is a
list of the namespaces and the services within:

- onap-aai

  - aai-service

  - *hbase*

  - model-loader-service

  - aai-resources

  - aai-traversal

  - data-router

  - elasticsearch

  - gremlin

  - search-data-service

  - sparky-be

- onap-appc

  - appc

  - *appc-dbhost*

  - appc-dgbuilder

- clamp

  - clamp

  - clamp-mariadb

- onap-dcae

  - cdap0

  - cdap1

  - cdap2

  - dcae-collector-common-event

  - dcae-collector-dmaapbc

  - dcae-controller

  - dcae-pgaas

  - dmaap

  - kafka

  - zookeeper

- onap-message-router

  - dmaap

  - *global-kafka*

  - *zookeeper*

- onap-mso

  - mso

  - *mariadb*

- onap-multicloud

  - multicloud-vio

  - framework

- onap-policy

  - brmsgw

  - drools

  - *mariadb*

  - *nexus*

  - pap

  - pdp

- onap-portal

  - portalapps

  - *portaldb*

  - portalwidgets

  - vnc-portal

- onap-robot

  - robot

- onap-sdc

  - sdc-be

  - *sdc-cs*

  - *sdc-es*

  - sdc-fe

  - *sdc-kb*

- onap-sdnc

  - sdnc

  - *sdnc-dbhost*

  - sdnc-dgbuilder

  - sdnc-portal

- onap-vid

  - *vid-mariadb*

  - vid-server

Note that services listed in *italics* are local to the namespace
itself and not accessible from outside of the namespace.
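
The services of any one of these namespaces can be listed with
kubectl, for example (again assuming an ``onap`` namespace prefix)::

  # services of type ClusterIP with no nodePort are local to the
  # namespace; NodePort services are reachable from outside
  kubectl --namespace onap-aai get services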

Kubernetes Deployment Specifications for ONAP
---------------------------------------------

Each of the ONAP components is deployed as described in a deployment
specification. This specification documents key parameters and
dependencies between the pods of an ONAP component such that
kubernetes is able to repeatably start up the component. The component
artifacts are stored in the oom/kubernetes repo in `ONAP
gerrit <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes;h=4597d09dbce86d7543174924322435c30cb5b0ee;hb=refs/heads/master>`__.
The mso project is a relatively simple example, so let's start there.

MSO Example
~~~~~~~~~~~

Within
the `oom/kubernetes/templates/mso <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/templates/mso;h=d8b778a16381d6695f635c14b9dcab72fb9fcfcd;hb=refs/heads/master>`__ directory,
one will find four files in yaml format:

- `all-services.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/all-services.yaml;hb=refs/heads/master>`__

- `db-deployment.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/db-deployment.yaml;hb=refs/heads/master>`__

- `mso-deployment.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/mso-deployment.yaml;hb=refs/heads/master>`__

- `mso-pv-pvc.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/mso-pv-pvc.yaml;hb=refs/heads/master>`__

The db-deployment.yaml file describes the deployment of the database
component of mso. Here are the contents:

**db-deployment.yaml**::

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: mariadb
    namespace: "{{ .Values.nsPrefix }}-mso"
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: mariadb
    template:
      metadata:
        labels:
          app: mariadb
        name: mariadb
      spec:
        hostname: mariadb
        containers:
        - args:
          image: {{ .Values.image.mariadb }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          name: "mariadb"
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: password
          - name: MARIADB_MAJOR
            value: "10.1"
          - name: MARIADB_VERSION
            value: "10.1.11+maria-1~jessie"
          volumeMounts:
          - mountPath: /etc/localtime
            name: localtime
            readOnly: true
          - mountPath: /etc/mysql/conf.d
            name: mso-mariadb-conf
          - mountPath: /docker-entrypoint-initdb.d
            name: mso-mariadb-docker-entrypoint-initdb
          - mountPath: /var/lib/mysql
            name: mso-mariadb-data
          ports:
          - containerPort: 3306
            name: mariadb
          readinessProbe:
            tcpSocket:
              port: 3306
            initialDelaySeconds: 5
            periodSeconds: 10
        volumes:
        - name: localtime
          hostPath:
            path: /etc/localtime
        - name: mso-mariadb-conf
          hostPath:
            path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mariadb/conf.d
        - name: mso-mariadb-docker-entrypoint-initdb
          hostPath:
            path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mariadb/docker-entrypoint-initdb.d
        - name: mso-mariadb-data
          persistentVolumeClaim:
            claimName: mso-db
        imagePullSecrets:
        - name: "{{ .Values.nsPrefix }}-docker-registry-key"

The first part of the yaml file simply states that this is a
deployment specification for a mariadb pod.

The spec section starts off with 'replicas: 1', which states that only
1 'replica' will be used here. If one were to change the number of
replicas to 3, for example, kubernetes would attempt to ensure that
three replicas of this pod are operational at all times. One can see
that in a clustered environment the number of replicas should probably
be more than 1, but for simple deployments 1 is sufficient.
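
The replica count can also be changed on a live deployment. The
command below is purely illustrative: the namespace name depends on
the prefix chosen at deployment time, and scaling a database this way
assumes the application can actually run clustered::

  # ask kubernetes to maintain three replicas of the mariadb pod
  kubectl --namespace onapTrial-mso scale deployment mariadb --replicas=3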

The selector label is a grouping primitive of kubernetes, but this
simple example doesn't exercise its full capabilities.

The template/spec section is where the key information required to
start this pod is found.

- image: is a reference to the location of the docker image in nexus3

- name: is the name of the docker image

- env: is a section that supports the creation of operating system
  environment variables within the container; these are specified as a
  set of key/value pairs. For example, MYSQL_ROOT_PASSWORD is set to
  "password".

- volumeMounts: allow for the creation of custom mount points

- ports: define the networking ports that will be opened on the
  container. Note that further on, in the all-services.yaml file,
  ports that are defined here can be exposed outside of the ONAP
  component's namespace by creating a 'nodePort' - a mechanism used to
  resolve port duplication.

- readinessProbe: is the mechanism kubernetes uses to determine the
  state of the container (see the example following this list).

- volumes: a location to define volumes required by the container, in
  this case configuration and initialization information.

- imagePullSecrets: a key to access the nexus3 repo when pulling
  docker containers.
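
Whether the readiness probe is passing is reflected in the READY
column of kubectl's pod listing; a sketch, again assuming an
``onapTrial`` prefix::

  # a pod whose containers all pass their readiness probes shows 1/1
  kubectl --namespace onapTrial-mso get pods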

As one might imagine, the mso-deployment.yaml file describes the
deployment artifacts of the mso application. Here are the contents:

**mso-deployment.yaml**::

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: mso
    namespace: "{{ .Values.nsPrefix }}-mso"
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: mso
    template:
      metadata:
        labels:
          app: mso
        name: mso
        annotations:
          pod.beta.kubernetes.io/init-containers: '[
            {
                "args": [
                    "--container-name",
                    "mariadb"
                ],
                "command": [
                    "/root/ready.py"
                ],
                "env": [
                    {
                        "name": "NAMESPACE",
                        "valueFrom": {
                            "fieldRef": {
                                "apiVersion": "v1",
                                "fieldPath": "metadata.namespace"
                            }
                        }
                    }
                ],
                "image": "{{ .Values.image.readiness }}",
                "imagePullPolicy": "{{ .Values.pullPolicy }}",
                "name": "mso-readiness"
            }
            ]'
      spec:
        containers:
        - command:
          - /docker-files/scripts/start-jboss-server.sh
          image: {{ .Values.image.mso }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          name: mso
          volumeMounts:
          - mountPath: /etc/localtime
            name: localtime
            readOnly: true
          - mountPath: /shared
            name: mso
          - mountPath: /docker-files
            name: mso-docker-files
          env:
          - name: JBOSS_DEBUG
            value: "false"
          ports:
          - containerPort: 3904
          - containerPort: 3905
          - containerPort: 8080
          - containerPort: 9990
          - containerPort: 8787
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
        volumes:
        - name: localtime
          hostPath:
            path: /etc/localtime
        - name: mso
          hostPath:
            path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mso
        - name: mso-docker-files
          hostPath:
            path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/docker-files
        imagePullSecrets:
        - name: "{{ .Values.nsPrefix }}-docker-registry-key"

Much like the db deployment specification, the first and last parts of
this yaml file describe meta-data, replicas, images, volumes, etc. The
template section has an important new functionality though: a
deployment specification for a new "initialization" container. The
entire purpose of the init-container is to allow dependencies to be
resolved in an orderly manner such that the entire ONAP system comes
up every time. Once the dependencies are met and the init-container's
job is complete, this container will terminate. Therefore, when OOM
starts up ONAP, one is able to see a number of init-containers start
and then disappear as the system stabilizes. Note that more than one
init-container may be specified, each completing before starting the
next, if complex startup relationships need to be specified.

In this particular init-container, the command '/root/ready.py' will
be executed to determine when mariadb is ready, but this could be a
simple bash script. The image/name section describes where and how to
get the docker image for the init-container.
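
This startup sequence can be observed while a deployment comes up; as
a sketch (namespace again depends on your prefix), pods blocked on an
init-container report an 'Init' status::

  # pods waiting on their init-containers show a status like Init:0/1
  kubectl --namespace onapTrial-mso get pods --watch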

To ensure that data isn't lost when an ephemeral container undergoes
life-cycle events (like being restarted), non-volatile or persistent
volumes can be attached to the pods that need them. The following
pv-pvc.yaml file defines the persistent volume as 2 GB of storage
claimed by the mso namespace.

**pv-pvc.yaml**::

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: "{{ .Values.nsPrefix }}-mso-db"
    namespace: "{{ .Values.nsPrefix }}-mso"
    labels:
      name: "{{ .Values.nsPrefix }}-mso-db"
  spec:
    capacity:
      storage: 2Gi
    accessModes:
      - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    hostPath:
      path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mariadb/data
  ---
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: mso-db
    namespace: "{{ .Values.nsPrefix }}-mso"
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 2Gi
    selector:
      matchLabels:
        name: "{{ .Values.nsPrefix }}-mso-db"

The last of the four files is the all-services.yaml file, which
defines the kubernetes service(s) that will be exposed in this
namespace. Here are the contents of the file:

**all-services.yaml**::

  apiVersion: v1
  kind: Service
  metadata:
    name: mariadb
    namespace: "{{ .Values.nsPrefix }}-mso"
    labels:
      app: mariadb
  spec:
    ports:
      - port: 3306
        nodePort: {{ .Values.nodePortPrefix }}52
    selector:
      app: mariadb
    type: NodePort
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: mso
    namespace: "{{ .Values.nsPrefix }}-mso"
    labels:
      app: mso
    annotations:
      msb.onap.org/service-info: '[
        {
            "serviceName": "so",
            "version": "v1",
            "url": "/ecomp/mso/infra",
            "protocol": "REST",
            "port": "8080",
            "visualRange": "1"
        },
        {
            "serviceName": "so-deprecated",
            "version": "v1",
            "url": "/ecomp/mso/infra",
            "protocol": "REST",
            "port": "8080",
            "visualRange": "1",
            "path": "/ecomp/mso/infra"
        }
        ]'
  spec:
    selector:
      app: mso
    ports:
      - name: mso1
        port: 8080
        nodePort: {{ .Values.nodePortPrefix }}23
      - name: mso2
        port: 3904
        nodePort: {{ .Values.nodePortPrefix }}25
      - name: mso3
        port: 3905
        nodePort: {{ .Values.nodePortPrefix }}24
      - name: mso4
        port: 9990
        nodePort: {{ .Values.nodePortPrefix }}22
      - name: mso5
        port: 8787
        nodePort: {{ .Values.nodePortPrefix }}50
    type: NodePort

First of all, note that this file really contains two service
specifications in a single file: the mariadb service and the mso
service. In some circumstances it may be possible to reduce the
apparent complexity of the containers/pods by hiding them behind a
single service.

The mariadb service specification is quite simple; other than the
name, the only section of interest is the nodePort specification. When
containers require exposing ports to the world outside of a kubernetes
namespace, there is a potential for port conflict. To resolve this
potential port conflict, kubernetes uses the concept of a nodePort
that is mapped one-to-one with a port within the namespace. In this
case the port 3306 (which was defined in the db-deployment.yaml file)
is mapped to 30252 externally (assuming a nodePortPrefix of 302), thus
avoiding the conflict that would have arisen from deploying multiple
mariadb containers.
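
As a purely hypothetical illustration of what the nodePort provides,
the database could then be reached from outside the cluster through
any worker node; the <node-ip> placeholder and the root password from
db-deployment.yaml are for illustration only::

  # connect to mariadb through the externally exposed nodePort
  mysql -h <node-ip> -P 30252 -u root -p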

The mso service definition is largely the same as the mariadb service
with the exception that the ports are named.

Customizing Deployment Specifications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For each ONAP component deployed by OOM, a set of deployment
specifications is required. Fortunately, there are many examples to
use as references, such as the previous
'`mso <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/mso;h=d8b778a16381d6695f635c14b9dcab72fb9fcfcd;hb=refs/heads/master>`__'
example, as well as:
`aai <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/aai;h=243ff90da714459a07fa33023e6655f5d036bfcd;hb=refs/heads/master>`__,
`appc <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/appc;h=d34eaca8a17fc28033a491d3b71aaa1e25673f9e;hb=refs/heads/master>`__,
`message-router <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/message-router;h=51fcb23fb7fbbfab277721483d01c6e3f98ca2cc;hb=refs/heads/master>`__,
`policy <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/policy;h=8c29597b23876ea2ae17dbf747f4ab1e3b955dd9;hb=refs/heads/master>`__,
`portal <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/portal;h=371db03ddef92703daa699014e8c1c9623f7994d;hb=refs/heads/master>`__,
`robot <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/robot;h=46445652d43d93dc599c5108f5c10b303a3c777b;hb=refs/heads/master>`__,
`sdc <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/sdc;h=1d59f7b5944d4604491e72d0b6def0ff3f10ba4d;hb=refs/heads/master>`__,
`sdnc <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/sdnc;h=dbaab2ebd62190edcf489b5a5f1f52992847a73a;hb=refs/heads/master>`__
and
`vid <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/vid;h=e91788c8504f2da12c086e802e1e7e8648418c66;hb=refs/heads/master>`__.
If your component isn't already deployed by OOM, you can create your
own set of deployment specifications that can be easily added to OOM.

Development Deployments
~~~~~~~~~~~~~~~~~~~~~~~

For the Amsterdam release, the deployment specifications represent a
simple, simplex deployment of ONAP that may not have the robustness
typically required of a full operational deployment. Follow-on
releases will enhance these deployment specifications as follows:

- Load Balancers - kubernetes has built-in support for user-defined or
  simple 'ingress' load balancers at the service layer to hide the
  complexity of multi-pod deployments from other components.

- Horizontal Scaling - replica sets can be used to dynamically scale
  the number of pods behind a service to match the offered load.

- Stateless Pods - using concepts such as DBaaS (database as a
  service), database technologies could be removed (where appropriate)
  from the services, thus moving to the 'cattle' model so common in
  cloud deployments.

Kubernetes Under-Cloud Deployments
==================================

The automated ONAP deployment depends on a fully functional kubernetes
environment being available prior to ONAP installation. Fortunately,
kubernetes is supported on a wide variety of systems such as Google
Compute Engine, `AWS
EC2 <https://wiki.onap.org/display/DW/ONAP+on+AWS#ONAPonAWS-Option0:DeployOOMKubernetestoaspotVM>`__,
Microsoft Azure, CenturyLink Cloud, IBM Bluemix and more. If you're
setting up your own kubernetes environment, please refer to `ONAP on
Kubernetes <https://wiki.onap.org/display/DW/ONAP+on+Kubernetes>`__
for a walk-through of how to set this environment up on several
platforms.

ONAP 'OneClick' Deployment Walk-through
=======================================

Once a kubernetes environment is available and the deployment
artifacts have been customized for your location, ONAP is ready to be
installed.

The first step is to set up
the `/oom/kubernetes/config/onap-parameters.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/onap-parameters.yaml;h=7ddaf4d4c3dccf2fad515265f0da9c31ec0e64b1;hb=refs/heads/master>`__ file
with key-value pairs specific to your OpenStack environment. There is
a `sample <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/onap-parameters-sample.yaml;h=3a74beddbbf7f9f9ec8e5a6abaecb7cb238bd519;hb=refs/heads/master>`__ that
may help you out or even be usable directly if you don't intend to
actually use OpenStack resources. Here are the contents of this file:

**onap-parameters-sample.yaml**::

  OPENSTACK_UBUNTU_14_IMAGE: "Ubuntu_14.04.5_LTS"
  OPENSTACK_PUBLIC_NET_ID: "e8f51956-00dd-4425-af36-045716781ffc"
  OPENSTACK_OAM_NETWORK_ID: "d4769dfb-c9e4-4f72-b3d6-1d18f4ac4ee6"
  OPENSTACK_OAM_SUBNET_ID: "191f7580-acf6-4c2b-8ec0-ba7d99b3bc4e"
  OPENSTACK_OAM_NETWORK_CIDR: "192.168.30.0/24"
  OPENSTACK_USERNAME: "vnf_user"
  OPENSTACK_API_KEY: "vnf_password"
  OPENSTACK_TENANT_NAME: "vnfs"
  OPENSTACK_REGION: "RegionOne"
  OPENSTACK_KEYSTONE_URL: "http://1.2.3.4:5000"
  OPENSTACK_FLAVOUR_MEDIUM: "m1.medium"
  OPENSTACK_SERVICE_TENANT_NAME: "services"
  DMAAP_TOPIC: "AUTO"
  DEMO_ARTIFACTS_VERSION: "1.1.0-SNAPSHOT"

Note that these values are required or the following steps will fail.

In order to support multiple ONAP instances within a single kubernetes
environment, a configuration set is required. The
`createConfig.sh <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/createConfig.sh;h=f226ccae47ca6de15c1da49be4b8b6de974895ed;hb=refs/heads/master>`__
script is used to do this.

**createConfig.sh**::

  > ./createConfig.sh -n onapTrial

The bash
script `createAll.bash <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/oneclick/createAll.bash;h=5e5f2dc76ea7739452e757282e750638b4e3e1de;hb=refs/heads/master>`__ is
used to create an ONAP deployment with kubernetes. It has two primary
functions:

- Creating the namespaces used to encapsulate the ONAP components, and

- Creating the services, pods and containers within each of these
  namespaces that provide the core functionality of ONAP.

**createAll.bash**::

  > ./createAll.bash -n onapTrial

Namespaces provide isolation between ONAP components, as ONAP release
1.0 contains duplicate applications (e.g. mariadb) and duplicate port
usage. As such, createAll.bash requires the user to enter a namespace
prefix string that can be used to separate multiple deployments of
onap. The result will be a set of 10 namespaces (e.g. onapTrial-sdc,
onapTrial-aai, onapTrial-mso, onapTrial-message-router,
onapTrial-robot, onapTrial-vid, onapTrial-sdnc, onapTrial-portal,
onapTrial-policy, onapTrial-appc) being created within the kubernetes
environment. A prerequisite pod, config-init
(`pod-config-init.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/pod-config-init.yaml;h=b1285ce21d61815c082f6d6aa3c43d00561811c7;hb=refs/heads/master>`__),
may need editing to match your environment and should be deployed into
the default namespace before running createAll.bash.
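
Once createAll.bash completes, the resulting namespaces and the state
of the pods within them can be checked; a sketch::

  # confirm the per-component namespaces were created
  kubectl get namespaces

  # watch the pods of all components come up
  kubectl get pods --all-namespaces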

Integration with MSB
====================

The `Microservices Bus
Project <https://wiki.onap.org/pages/viewpage.action?pageId=3246982>`__ provides
facilities to integrate micro-services into ONAP and therefore needs
to integrate into OOM - primarily through Consul, which is the backend
of MSB service discovery. The following is a brief description of how
this integration will be done:

A registrator is used to push the service endpoint info to MSB service
discovery:

- The needed service endpoint info is put into the kubernetes yaml
  file as an annotation, including service name, protocol, version,
  visual range, LB method, IP, port, etc.

- OOM deploys/starts/restarts/scales in/scales out/upgrades ONAP
  components.

- The registrator watches the kubernetes events.

- When an ONAP component instance has been started/destroyed by OOM,
  the registrator gets the notification from kubernetes.

- The registrator parses the service endpoint info from the annotation
  and registers/updates/unregisters it with MSB service discovery.

- The MSB API Gateway uses the service endpoint info for service
  routing and load balancing.
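
The annotation mentioned in the first bullet above was already visible
in the mso all-services.yaml file; on a running system it can be read
back from the service object itself, for example (namespace prefix
assumed to be 'onap')::

  # the msb.onap.org/service-info annotation appears under metadata
  kubectl --namespace onap-mso get service mso -o yaml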

Details of the registration service API can be found at `Microservice
Bus API
Documentation <https://wiki.onap.org/display/DW/Microservice+Bus+API+Documentation>`__.

How to define the service endpoints using annotations is described at
`ONAP Services
List#OOMIntegration <https://wiki.onap.org/display/DW/ONAP+Services+List#ONAPServicesList-OOMIntegration>`__.

A preliminary view of the OOM-MSB integration is as follows:

.. figure:: ../MSB-OOM-Diagram.png

A message sequence chart of the registration process:

.. figure:: ../MSB-OOM-MSC.png

MSB Usage Instructions
----------------------
MSB provides kubernetes charts in OOM, so it can be spun up by the OOM
oneclick command.

Please note that the kubernetes authentication token must be set at
*kubernetes/kube2msb/values.yaml* so that the kube2msb registrator can
get access to watch the kubernetes events and get service annotations
via the kubernetes APIs. The token can be found in the kubectl
configuration file *~/.kube/config*.
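
One hedged way to locate the token (the exact layout of the kubectl
configuration file varies with how the cluster credentials were
generated)::

  # look for the bearer token in the user credentials section
  grep token ~/.kube/config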

MSB and kube2msb can be spun up with all the ONAP components together,
or separately using the following commands.

**Start MSB services**::

  createAll.bash -n onap -a msb

**Start kube2msb registrator**::

  createAll.bash -n onap -a kube2msb

More details can be found here: `MSB installation <http://onap.readthedocs.io/en/latest/submodules/msb/apigateway.git/docs/platform/installation.html>`__.

FAQ (Frequently Asked Questions)
================================

Does OOM enable the deployment of VNFs on containers?

- No. OOM provides a mechanism to instantiate and manage the ONAP
  components themselves with containers but does not provide a
  Multi-VIM capability such that VNFs can be deployed into containers.
  The Multi VIM/Cloud Project may provide this functionality at some
  point.

Configuration Parameters
========================

Configuration parameters that are specific to the ONAP deployment, for
example hard-coded IP addresses, are parameterized and stored in an
OOM-specific set of configuration files.

More information about ONAP configuration can be found in the
Configuration Management section.

References
==========

- Docker - http://docker.com

- Kubernetes - http://kubernetes.io

- Helm - https://helm.sh