.. This work is licensed under a Creative Commons Attribution 4.0 International License.

OOM User Guide
##############
.. contents::
   :depth: 3
..

Introduction
============

The ONAP Operations Manager (OOM) is responsible for life-cycle
management of the ONAP platform itself: components such as MSO, SDNC,
etc. It is not responsible for the management of services, VNFs or
infrastructure instantiated by ONAP, or used by ONAP to host such
services or VNFs. OOM uses the open-source Kubernetes container
management system to manage the Docker containers that compose ONAP,
where the containers are hosted either directly on bare-metal servers
or on VMs hosted by a 3rd party management system. OOM ensures that
ONAP is easily deployable and maintainable throughout its life cycle
while using hardware resources efficiently. There are two deployment
options for OOM:

- A minimal deployment where single instances of the ONAP components
  are instantiated with no resource reservations, and

- | A production deployment where ONAP components are deployed with
    redundancy and anti-affinity rules such that single faults do not
    interrupt ONAP operation.
  | When deployed as containers directly on bare-metal, the minimal
    deployment option requires a single host (32GB memory with 12
    vCPUs); however, further optimization should allow this deployment
    to target a laptop computer. Production deployments will require
    more resources as determined by anti-affinity and geo-redundancy
    requirements.

OOM deployments of ONAP provide many benefits:

- Life-cycle Management - Kubernetes is a comprehensive system for
  managing the life-cycle of containerized applications. Its use as a
  platform manager will ease the deployment of ONAP, provide fault
  tolerance and horizontal scalability, and enable seamless upgrades.

- Hardware Efficiency - ONAP can be deployed on a single host using
  less than 32GB of memory. As opposed to VMs, which require that a
  guest operating system be deployed along with the application,
  containers provide similar application encapsulation with neither
  the computing, memory and storage overhead nor the associated
  long-term support costs of those guest operating systems. An
  informal goal of the project is to be able to create a development
  deployment of ONAP that can be hosted on a laptop.

- Rapid Deployment - With locally cached images, ONAP can be deployed
  from scratch in 7 minutes. Eliminating the guest operating system
  results in containers coming into service much faster than a VM
  equivalent. This advantage can be particularly useful for ONAP,
  where rapid reaction to inevitable failures will be critical in
  production environments.

- Portability - OOM takes advantage of Kubernetes' ability to be
  hosted on multiple hosted cloud solutions like Google Compute
  Engine, AWS EC2, Microsoft Azure, CenturyLink Cloud, IBM Bluemix
  and more.

- Minimal Impact - As ONAP is already deployed with Docker containers,
  minimal changes are required to the components themselves when
  deployed with OOM.

Features of OOM:

- Platform Deployment - Automated deployment/un-deployment of ONAP
  instance(s), and automated deployment/un-deployment of individual
  platform components, using docker containers & kubernetes

- Platform Monitoring & Healing - Monitoring of platform state,
  platform health checks, fault tolerance and self-healing using
  docker containers & kubernetes

- Platform Scaling - Platform horizontal scalability through the use
  of docker containers & kubernetes

- Platform Upgrades - Platform upgrades using docker containers &
  kubernetes

- Platform Configurations - Management of overall platform components
  configurations using docker containers & kubernetes

- | Platform Migrations - Management of migration of platform
    components using docker containers & kubernetes
  | Please note that the ONAP Operations Manager does not provide
    support for containerization of services or VNFs that are managed
    by ONAP; OOM orchestrates the life-cycle of the ONAP platform
    components themselves.

Container Background
--------------------

Linux containers allow for an application and all of its operating
system dependencies to be packaged and deployed as a single unit,
without including a guest operating system as is done with virtual
machines. The most popular container solution
is \ `Docker <https://www.docker.com/>`__ which provides tools for
container management like the Docker Host (dockerd) which can create,
run, stop, move, or delete a container. Docker has a very popular
registry of container images that can be used by any Docker system;
however, in the ONAP context, Docker images are built by the standard
CI/CD flow and stored
in \ `Nexus <https://nexus.onap.org/#welcome>`__ repositories. OOM uses
the "standard" ONAP docker containers and three new ones specifically
created for OOM.

Containers are isolated from each other primarily via name spaces
within the Linux kernel, without the need for multiple guest operating
systems. As such, multiple containers can be deployed with so little
overhead that all of ONAP can be deployed on a single host. With some
optimization of the ONAP components (e.g. elimination of redundant
database instances) it may be possible to deploy ONAP on a single
laptop computer.

Life Cycle Management via Kubernetes
====================================

As with the VNFs deployed by ONAP, the components of ONAP have their
own life-cycle in which the components are created, run, healed,
scaled, stopped and deleted. These life-cycle operations are managed
by the \ `Kubernetes <https://kubernetes.io/>`__ container management
system, which maintains the desired state of the container system as
described by one or more deployment descriptors - similar in concept
to OpenStack HEAT Orchestration Templates. The following sections
describe the fundamental objects managed by Kubernetes, the network
these components use to communicate with each other and with entities
outside of ONAP, and the templates that describe the configuration and
desired state of the ONAP components.

ONAP Components to Kubernetes Object Relationships
--------------------------------------------------

Kubernetes deployments consist of multiple objects:

- nodes - a worker machine - either physical or virtual - that hosts
  multiple containers managed by kubernetes.

- services - an abstraction of a logical set of pods that provide a
  micro-service.

- pods - one or more (but typically one) container(s) that provide
  specific application functionality.

- persistent volumes - one or more permanent volumes established to
  hold non-ephemeral configuration and state data.

The relationship between these objects is shown in the following
figure:

.. figure:: ../kubernetes_objects.png

OOM uses these kubernetes objects as described in the following
sections.

Nodes
~~~~~

OOM works with both physical and virtual worker machines.

- Virtual Machine Deployments - If ONAP is to be deployed onto a set
  of virtual machines, the creation of the VMs is outside of the scope
  of OOM and could be done in many ways, such as:

  - manually, for example by a user using the OpenStack Horizon
    dashboard or `AWS
    EC2 <https://wiki.onap.org/display/DW/ONAP+on+AWS#ONAPonAWS-Option0:DeployOOMKubernetestoaspotVM>`__,
    or

  - automatically, for example with the use of an OpenStack Heat
    Orchestration Template which builds an ONAP stack, or

  - orchestrated, for example with Cloudify creating the VMs from a
    TOSCA template and controlling their life cycle for the life of
    the ONAP deployment.

- Physical Machine Deployments - If ONAP is to be deployed onto
  physical machines there are several options, but the recommendation
  is to use
  `Rancher <http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/>`__
  along with `Helm <https://github.com/kubernetes/helm/releases>`__ to
  associate hosts with a kubernetes cluster.

Pods
~~~~

A group of containers with shared storage and networking can be
grouped together into a kubernetes pod.  All of the containers within
a pod are co-located and co-scheduled so they operate as a single
unit.  Within the ONAP Amsterdam release, pods are mapped one-to-one
to docker containers, although this may change in the future.  As
explained in the Services section below, the use of pods within each
ONAP component is abstracted from other ONAP components.

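As an illustrative sketch only (the names, labels and image reference
below are hypothetical, not actual OOM artifacts), a minimal
single-container pod of the kind used in the Amsterdam release could be
declared as follows::

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-pod          # hypothetical name, for illustration only
    labels:
      app: example             # label used by a service to select this pod
  spec:
    containers:
    - name: example            # a single container per pod, as in Amsterdam
      image: example-registry/example-image:1.0   # placeholder image
      ports:
      - containerPort: 8080

In practice OOM does not create bare pods directly; pods are created
and managed through the deployment specifications described later in
this document.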
Services
~~~~~~~~

OOM uses the kubernetes service abstraction to provide a consistent
access point for each of the ONAP components, independent of the pod
or container architecture of that component.  For example, the SDNC
component may introduce OpenDaylight clustering at some point and
change the number of pods in this component to three or more, but this
change will be isolated from the other ONAP components by the service
abstraction.  A service can include a load balancer on its ingress to
distribute traffic between the pods and even react to dynamic changes
in the number of pods if they are part of a replica set (see the MSO
example below for a brief explanation of replica sets).

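A minimal sketch of this abstraction (with hypothetical names and port
numbers, not the actual SDNC specification) might look as follows; the
service selects its pods by label, so the number of pods behind it can
change without affecting clients::

  apiVersion: v1
  kind: Service
  metadata:
    name: example-sdnc          # hypothetical service name
  spec:
    selector:
      app: sdnc                 # matches pods by label, however many exist
    ports:
    - port: 8282                # stable port that other components use
      targetPort: 8181          # port exposed by the selected pods

Clients address the service name and port; kubernetes routes each
request to one of the currently healthy pods behind it.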
Persistent Volumes
~~~~~~~~~~~~~~~~~~

As pods and containers are ephemeral, any data that must be persisted
across pod restart events needs to be stored outside of the pod in
persistent volume(s).  Kubernetes supports a wide variety of types of
persistent volumes such as: Fibre Channel, NFS, iSCSI, CephFS, and
GlusterFS (for a full list look
`here <https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes>`__)
so there are many options as to how storage is configured when
deploying ONAP via OOM.

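For example, an NFS-backed persistent volume could be declared as in
the following sketch; the server address and export path here are
placeholders, not values used by OOM::

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: example-nfs-pv        # hypothetical volume name
  spec:
    capacity:
      storage: 2Gi
    accessModes:
    - ReadWriteMany
    nfs:                        # one of the many supported volume types
      server: 10.0.0.1          # placeholder NFS server address
      path: /exports/onap       # placeholder export path

The mso example later in this guide shows the hostPath variant that
OOM actually uses in the Amsterdam release.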
OOM Networking with Kubernetes
------------------------------

- DNS

- Ports - Flattening the containers also exposes port conflicts
  between the containers, which need to be resolved.

Name Spaces
~~~~~~~~~~~

Within the namespaces are kubernetes services that provide external
connectivity to pods that host Docker containers. The following is a
list of the namespaces and the services within:

- onap-aai

  - aai-service

  - *hbase*

  - model-loader-service

  - aai-resources

  - aai-traversal

  - data-router

  - elasticsearch

  - gremlin

  - search-data-service

  - sparky-be

- onap-appc

  - appc

  - *appc-dbhost*

  - appc-dgbuilder

- onap-clamp

  - clamp

  - clamp-mariadb

- onap-dcae

  - cdap0

  - cdap1

  - cdap2

  - dcae-collector-common-event

  - dcae-collector-dmaapbc

  - dcae-controller

  - dcae-pgaas

  - dmaap

  - kafka

  - zookeeper

- onap-message-router

  - dmaap

  - *global-kafka*

  - *zookeeper*

- onap-mso

  - mso

  - *mariadb*

- onap-multicloud

  - multicloud-vio

  - framework

- onap-policy

  - brmsgw

  - drools

  - *mariadb*

  - *nexus*

  - pap

  - pdp

- onap-portal

  - portalapps

  - *portaldb*

  - portalwidgets

  - vnc-portal

- onap-robot

  - robot

- onap-sdc

  - sdc-be

  - *sdc-cs*

  - *sdc-es*

  - sdc-fe

  - *sdc-kb*

- onap-sdnc

  - sdnc

  - *sdnc-dbhost*

  - sdnc-dgbuilder

  - sdnc-portal

- onap-vid

  - *vid-mariadb*

  - vid-server

Note that services listed in \ *italics* are local to the namespace
itself and not accessible from outside of the namespace.

Kubernetes Deployment Specifications for ONAP
---------------------------------------------

Each of the ONAP components is deployed as described in a deployment
specification.  This specification documents key parameters and
dependencies between the pods of an ONAP component such that
kubernetes is able to repeatably start up the component.  The
component artifacts are stored in the oom/kubernetes repo in \ `ONAP
gerrit <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes;h=4597d09dbce86d7543174924322435c30cb5b0ee;hb=refs/heads/master>`__.
The mso project is a relatively simple example, so let's start there.

MSO Example
~~~~~~~~~~~

Within
the \ `oom/kubernetes/templates/mso <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/templates/mso;h=d8b778a16381d6695f635c14b9dcab72fb9fcfcd;hb=refs/heads/master>`__ repo,
one will find four files in yaml format:

- `all-services.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/all-services.yaml;hb=refs/heads/master>`__

- `db-deployment.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/db-deployment.yaml;hb=refs/heads/master>`__

- `mso-deployment.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/mso-deployment.yaml;hb=refs/heads/master>`__

- `mso-pv-pvc.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/mso-pv-pvc.yaml;hb=refs/heads/master>`__

The db-deployment.yaml file describes the deployment of the database
component of mso.  Here are the contents:

**db-deployment.yaml**::

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: mariadb
    namespace: "{{ .Values.nsPrefix }}-mso"
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: mariadb
    template:
      metadata:
        labels:
          app: mariadb
        name: mariadb
      spec:
        hostname: mariadb
        containers:
        - args:
          image: {{ .Values.image.mariadb }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          name: "mariadb"
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: password
          - name: MARIADB_MAJOR
            value: "10.1"
          - name: MARIADB_VERSION
            value: "10.1.11+maria-1~jessie"
          volumeMounts:
          - mountPath: /etc/localtime
            name: localtime
            readOnly: true
          - mountPath: /etc/mysql/conf.d
            name: mso-mariadb-conf
          - mountPath: /docker-entrypoint-initdb.d
            name: mso-mariadb-docker-entrypoint-initdb
          - mountPath: /var/lib/mysql
            name: mso-mariadb-data
          ports:
          - containerPort: 3306
            name: mariadb
          readinessProbe:
            tcpSocket:
              port: 3306
            initialDelaySeconds: 5
            periodSeconds: 10
        volumes:
        - name: localtime
          hostPath:
            path: /etc/localtime
        - name: mso-mariadb-conf
          hostPath:
            path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mariadb/conf.d
        - name: mso-mariadb-docker-entrypoint-initdb
          hostPath:
            path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mariadb/docker-entrypoint-initdb.d
        - name: mso-mariadb-data
          persistentVolumeClaim:
            claimName: mso-db
        imagePullSecrets:
        - name: "{{ .Values.nsPrefix }}-docker-registry-key"

The first part of the yaml file simply states that this is a
deployment specification for a mariadb pod.

The spec section starts off with 'replicas: 1', which states that only
1 'replica' will be used here.  If one were to change the number of
replicas to 3, for example, kubernetes would attempt to ensure that
three replicas of this pod are operational at all times.  One can see
that in a clustered environment the number of replicas should probably
be more than 1, but for simple deployments 1 is sufficient.

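For instance, scaling this deployment up would only require changing
the replica count in the spec (a sketch of the change, not something
OOM does by default for mariadb)::

  spec:
    replicas: 3   # kubernetes will keep three pods of this template
                  # running, recreating any that fail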
The selector label is a grouping primitive of kubernetes, but this
simple example doesn't exercise its full capabilities.

The template/spec section is where the key information required to
start this pod is found.

- image: a reference to the location of the docker image in nexus3

- name: the name of the docker image

- env: a section that supports the creation of operating system
  environment variables within the container, specified as a set of
  key/value pairs.  For example, MYSQL\_ROOT\_PASSWORD is set to
  "password".

- volumeMounts: allow for the creation of custom mount points

- ports: define the networking ports that will be opened on the
  container.  Note that further on, in the all-services.yaml file,
  ports that are defined here can be exposed outside of the ONAP
  component's name space by creating a 'nodePort' - a mechanism used
  to resolve port duplication.

- readinessProbe: the mechanism kubernetes uses to determine the
  state of the container.

- volumes: a location to define volumes required by the container, in
  this case configuration and initialization information.

- imagePullSecrets: a key used to access the nexus3 repo when pulling
  docker containers.

As one might imagine, the mso-deployment.yaml file describes the
deployment artifacts of the mso application.  Here are the contents:

**mso-deployment.yaml**::

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: mso
    namespace: "{{ .Values.nsPrefix }}-mso"
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: mso
    template:
      metadata:
        labels:
          app: mso
        name: mso
        annotations:
          pod.beta.kubernetes.io/init-containers: '[
            {
              "args": [
                "--container-name",
                "mariadb"
              ],
              "command": [
                "/root/ready.py"
              ],
              "env": [
                {
                  "name": "NAMESPACE",
                  "valueFrom": {
                    "fieldRef": {
                      "apiVersion": "v1",
                      "fieldPath": "metadata.namespace"
                    }
                  }
                }
              ],
              "image": "{{ .Values.image.readiness }}",
              "imagePullPolicy": "{{ .Values.pullPolicy }}",
              "name": "mso-readiness"
            }
          ]'
      spec:
        containers:
        - command:
          - /docker-files/scripts/start-jboss-server.sh
          image: {{ .Values.image.mso }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          name: mso
          volumeMounts:
          - mountPath: /etc/localtime
            name: localtime
            readOnly: true
          - mountPath: /shared
            name: mso
          - mountPath: /docker-files
            name: mso-docker-files
          env:
          - name: JBOSS_DEBUG
            value: "false"
          ports:
          - containerPort: 3904
          - containerPort: 3905
          - containerPort: 8080
          - containerPort: 9990
          - containerPort: 8787
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
        volumes:
        - name: localtime
          hostPath:
            path: /etc/localtime
        - name: mso
          hostPath:
            path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mso
        - name: mso-docker-files
          hostPath:
            path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/docker-files
        imagePullSecrets:
        - name: "{{ .Values.nsPrefix }}-docker-registry-key"

Much like the db deployment specification, the first and last parts of
this yaml file describe metadata, replicas, images, volumes, etc.  The
template section has an important new piece of functionality though: a
deployment specification for a new "initialization" container.  The
entire purpose of the init-container is to allow dependencies to be
resolved in an orderly manner such that the entire ONAP system comes
up every time.  Once the dependencies are met and the init-container's
job is complete, this container will terminate.  Therefore, when OOM
starts up ONAP one is able to see a number of init-containers start
and then disappear as the system stabilizes. Note that more than one
init-container may be specified, each completing before the next
starts, if complex startup relationships need to be specified.

In this particular init-container, the command '/root/ready.py' will
be executed to determine when mariadb is ready, but this could be a
simple bash script. The image/name section describes where and how to
get the docker image for the init-container.

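As a side note, later kubernetes releases (1.6 and onward) promote
init-containers from the pod.beta.kubernetes.io/init-containers
annotation shown above to a first-class field of the pod
specification. The annotation in this example would be roughly
equivalent to the following sketch::

  spec:
    initContainers:
    - name: mso-readiness
      image: "{{ .Values.image.readiness }}"
      imagePullPolicy: "{{ .Values.pullPolicy }}"
      command: ["/root/ready.py"]       # polls until mariadb is ready
      args: ["--container-name", "mariadb"]
      env:
      - name: NAMESPACE                 # namespace injected via downward API
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace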
To ensure that data isn't lost when an ephemeral container undergoes
life-cycle events (like being restarted), non-volatile or persistent
volumes can be attached to the service. The following pv-pvc.yaml
file defines the persistent volume as 2 GB of storage claimed by the
mso namespace.

**pv-pvc.yaml**::

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: "{{ .Values.nsPrefix }}-mso-db"
    namespace: "{{ .Values.nsPrefix }}-mso"
    labels:
      name: "{{ .Values.nsPrefix }}-mso-db"
  spec:
    capacity:
      storage: 2Gi
    accessModes:
    - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    hostPath:
      path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mariadb/data
  ---
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: mso-db
    namespace: "{{ .Values.nsPrefix }}-mso"
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 2Gi
    selector:
      matchLabels:
        name: "{{ .Values.nsPrefix }}-mso-db"

The last of the four files is the all-services.yaml file, which
defines the kubernetes service(s) that will be exposed in this name
space. Here are the contents of the file:

**all-services.yaml**::

  apiVersion: v1
  kind: Service
  metadata:
    name: mariadb
    namespace: "{{ .Values.nsPrefix }}-mso"
    labels:
      app: mariadb
  spec:
    ports:
    - port: 3306
      nodePort: {{ .Values.nodePortPrefix }}52
    selector:
      app: mariadb
    type: NodePort
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: mso
    namespace: "{{ .Values.nsPrefix }}-mso"
    labels:
      app: mso
    annotations:
      msb.onap.org/service-info: '[
        {
          "serviceName": "so",
          "version": "v1",
          "url": "/ecomp/mso/infra",
          "protocol": "REST",
          "port": "8080",
          "visualRange": "1"
        },
        {
          "serviceName": "so-deprecated",
          "version": "v1",
          "url": "/ecomp/mso/infra",
          "protocol": "REST",
          "port": "8080",
          "visualRange": "1",
          "path": "/ecomp/mso/infra"
        }
      ]'
  spec:
    selector:
      app: mso
    ports:
    - name: mso1
      port: 8080
      nodePort: {{ .Values.nodePortPrefix }}23
    - name: mso2
      port: 3904
      nodePort: {{ .Values.nodePortPrefix }}25
    - name: mso3
      port: 3905
      nodePort: {{ .Values.nodePortPrefix }}24
    - name: mso4
      port: 9990
      nodePort: {{ .Values.nodePortPrefix }}22
    - name: mso5
      port: 8787
      nodePort: {{ .Values.nodePortPrefix }}50
    type: NodePort

First of all, note that this file really contains two service
specifications in a single file: the mariadb service and the mso
service.  In some circumstances it may be possible to hide some of the
complexity of the containers/pods by hiding them behind a single
service.

The mariadb service specification is quite simple; other than the
name, the only section of interest is the nodePort specification.
When containers require exposing ports to the world outside of a
kubernetes namespace, there is a potential for port conflict. To
resolve this potential port conflict kubernetes uses the concept of a
nodePort that is mapped one-to-one with a port within the namespace.
In this case the port 3306 (which was defined in the
db-deployment.yaml file) is mapped to 30252 externally, thus avoiding
the conflict that would have arisen from deploying multiple mariadb
containers.

The mso service definition is largely the same as the mariadb service
with the exception that the ports are named.

Customizing Deployment Specifications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For each ONAP component deployed by OOM, a set of deployment
specifications is required.  Fortunately, there are many examples to
use as references, such as the previous
'`mso <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/mso;h=d8b778a16381d6695f635c14b9dcab72fb9fcfcd;hb=refs/heads/master>`__'
example, as well as:
`aai <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/aai;h=243ff90da714459a07fa33023e6655f5d036bfcd;hb=refs/heads/master>`__,
`appc <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/appc;h=d34eaca8a17fc28033a491d3b71aaa1e25673f9e;hb=refs/heads/master>`__,
`message-router <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/message-router;h=51fcb23fb7fbbfab277721483d01c6e3f98ca2cc;hb=refs/heads/master>`__,
`policy <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/policy;h=8c29597b23876ea2ae17dbf747f4ab1e3b955dd9;hb=refs/heads/master>`__,
`portal <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/portal;h=371db03ddef92703daa699014e8c1c9623f7994d;hb=refs/heads/master>`__,
`robot <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/robot;h=46445652d43d93dc599c5108f5c10b303a3c777b;hb=refs/heads/master>`__,
`sdc <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/sdc;h=1d59f7b5944d4604491e72d0b6def0ff3f10ba4d;hb=refs/heads/master>`__,
`sdnc <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/sdnc;h=dbaab2ebd62190edcf489b5a5f1f52992847a73a;hb=refs/heads/master>`__
and
`vid <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/vid;h=e91788c8504f2da12c086e802e1e7e8648418c66;hb=refs/heads/master>`__.
If your component isn't already deployed by OOM, you can create your
own set of deployment specifications that can be easily added to OOM.

Development Deployments
~~~~~~~~~~~~~~~~~~~~~~~

For the Amsterdam release, the deployment specifications represent a
simple, simplex deployment of ONAP that may not have the robustness
typically required of a full operational deployment.  Follow-on
releases will enhance these deployment specifications as follows:

- Load Balancers - kubernetes has built-in support for user-defined or
  simple 'ingress' load balancers at the service layer to hide the
  complexity of multi-pod deployments from other components.

- Horizontal Scaling - replica sets can be used to dynamically scale
  the number of pods behind a service to match the offered load.

- Stateless Pods - using concepts such as DBaaS (database as a
  service), database technologies could be removed (where appropriate)
  from the services, thus moving to the 'cattle' model so common in
  cloud deployments.

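As an illustration of the first item, a kubernetes ingress rule could
route external traffic to a service; the following sketch uses
hypothetical names and is not part of the Amsterdam deployment
specifications::

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: example-ingress        # hypothetical ingress name
  spec:
    rules:
    - http:
        paths:
        - path: /ecomp/mso/infra # route requests for this path
          backend:
            serviceName: mso     # to the mso service shown earlier
            servicePort: 8080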
Kubernetes Under-Cloud Deployments
==================================

The automated ONAP deployment depends on a fully functional kubernetes
environment being available prior to ONAP installation. Fortunately,
kubernetes is supported on a wide variety of systems such as Google
Compute Engine, `AWS
EC2 <https://wiki.onap.org/display/DW/ONAP+on+AWS#ONAPonAWS-Option0:DeployOOMKubernetestoaspotVM>`__,
Microsoft Azure, CenturyLink Cloud, IBM Bluemix and more.  If you're
setting up your own kubernetes environment, please refer to \ `ONAP on
Kubernetes <https://wiki.onap.org/display/DW/ONAP+on+Kubernetes>`__ for
a walk-through of how to set this environment up on several platforms.

ONAP 'OneClick' Deployment Walk-through
=======================================

Once a kubernetes environment is available and the deployment
artifacts have been customized for your location, ONAP is ready to be
installed.

The first step is to set up
the \ `/oom/kubernetes/config/onap-parameters.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/onap-parameters.yaml;h=7ddaf4d4c3dccf2fad515265f0da9c31ec0e64b1;hb=refs/heads/master>`__ file
with key-value pairs specific to your OpenStack environment.  There is
a \ `sample <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/onap-parameters-sample.yaml;h=3a74beddbbf7f9f9ec8e5a6abaecb7cb238bd519;hb=refs/heads/master>`__\  that
may help you out, or even be usable directly if you don't intend to
actually use OpenStack resources.  Here are the contents of this file:

**onap-parameters-sample.yaml**::

  OPENSTACK_UBUNTU_14_IMAGE: "Ubuntu_14.04.5_LTS"
  OPENSTACK_PUBLIC_NET_ID: "e8f51956-00dd-4425-af36-045716781ffc"
  OPENSTACK_OAM_NETWORK_ID: "d4769dfb-c9e4-4f72-b3d6-1d18f4ac4ee6"
  OPENSTACK_OAM_SUBNET_ID: "191f7580-acf6-4c2b-8ec0-ba7d99b3bc4e"
  OPENSTACK_OAM_NETWORK_CIDR: "192.168.30.0/24"
  OPENSTACK_USERNAME: "vnf_user"
  OPENSTACK_API_KEY: "vnf_password"
  OPENSTACK_TENANT_NAME: "vnfs"
  OPENSTACK_REGION: "RegionOne"
  OPENSTACK_KEYSTONE_URL: "http://1.2.3.4:5000"
  OPENSTACK_FLAVOUR_MEDIUM: "m1.medium"
  OPENSTACK_SERVICE_TENANT_NAME: "services"
  DMAAP_TOPIC: "AUTO"
  DEMO_ARTIFACTS_VERSION: "1.1.0-SNAPSHOT"

Note that these values are required or the following steps will fail.

In order to be able to support multiple ONAP instances within a single
kubernetes environment, a configuration set is required.  The
`createConfig.sh <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/createConfig.sh;h=f226ccae47ca6de15c1da49be4b8b6de974895ed;hb=refs/heads/master>`__
script is used to do this.

**createConfig.sh**::

  > ./createConfig.sh -n onapTrial

The bash
script \ `createAll.bash <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/oneclick/createAll.bash;h=5e5f2dc76ea7739452e757282e750638b4e3e1de;hb=refs/heads/master>`__ is
used to create an ONAP deployment with kubernetes. It has two primary
functions:

- Creating the namespaces used to encapsulate the ONAP components, and

- Creating the services, pods and containers within each of these
  namespaces that provide the core functionality of ONAP.

**createAll.bash**::

  > ./createAll.bash -n onapTrial

Namespaces provide isolation between ONAP components, as ONAP release
1.0 contains duplicate applications (e.g. mariadb) and port usage. As
such, createAll.bash requires the user to enter a namespace prefix
string that can be used to separate multiple deployments of onap. The
result will be a set of 10 namespaces (e.g. onapTrial-sdc,
onapTrial-aai, onapTrial-mso, onapTrial-message-router,
onapTrial-robot, onapTrial-vid, onapTrial-sdnc, onapTrial-portal,
onapTrial-policy, onapTrial-appc) being created within the kubernetes
environment.  A prerequisite pod,
config-init (\ `pod-config-init.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/pod-config-init.yaml;h=b1285ce21d61815c082f6d6aa3c43d00561811c7;hb=refs/heads/master>`__),
may need editing to match your environment and must be deployed into
the default namespace before running createAll.bash.

Integration with MSB
====================

The \ `Microservices Bus
Project <https://wiki.onap.org/pages/viewpage.action?pageId=3246982>`__ provides
facilities to integrate micro-services into ONAP and therefore needs
to integrate into OOM - primarily through Consul, which is the backend
of MSB service discovery. The following is a brief description of how
this integration will be done:

A registrator is used to push the service endpoint info to MSB service
discovery:

- The needed service endpoint info is put into the kubernetes yaml
  file as an annotation, including service name, protocol, version,
  visual range, LB method, IP, port, etc.

- OOM deploys/starts/restarts/scales in/scales out/upgrades ONAP
  components.

- The registrator watches the kubernetes events.

- When an ONAP component instance has been started/destroyed by OOM,
  the registrator gets the notification from kubernetes.

- The registrator parses the service endpoint info from the annotation
  and registers/updates/unregisters it with MSB service discovery.

- The MSB API Gateway uses the service endpoint info for service
  routing and load balancing.

Details of the registration service API can be found at \ `Microservice
Bus API
Documentation <https://wiki.onap.org/display/DW/Microservice+Bus+API+Documentation>`__.

How to define the service endpoints using annotations is described
at \ `ONAP Services
List#OOMIntegration <https://wiki.onap.org/display/DW/ONAP+Services+List#ONAPServicesList-OOMIntegration>`__.

A preliminary view of the OOM-MSB integration is as follows:

.. figure:: ../MSB-OOM-Diagram.png

A message sequence chart of the registration process:

.. figure:: ../MSB-OOM-MSC.png

MSB Usage Instructions
----------------------
MSB provides kubernetes charts in OOM, so it can be spun up by the OOM
OneClick command.

Please note that the kubernetes authentication token must be set in
*kubernetes/kube2msb/values.yaml* so that the kube2msb registrator can
get access to watch the kubernetes events and get service annotations
via the kubernetes APIs. The token can be found in the kubectl
configuration file *~/.kube/config*.

MSB and kube2msb can be spun up with all the ONAP components together,
or separately using the following commands.

**Start MSB services**::

  createAll.bash -n onap -a msb

**Start kube2msb registrator**::

  createAll.bash -n onap -a kube2msb

More details can be found here: `MSB installation <http://onap.readthedocs.io/en/latest/submodules/msb/apigateway.git/docs/platform/installation.html>`__.

FAQ (Frequently Asked Questions)
================================

Does OOM enable the deployment of VNFs on containers?

- No. OOM provides a mechanism to instantiate and manage the ONAP
  components themselves with containers, but does not provide a
  Multi-VIM capability such that VNFs can be deployed into containers.
  The Multi VIM/Cloud Project may provide this functionality at some
  point.

Configuration Parameters
========================

Configuration parameters that are specific to the ONAP deployment, for
example hard-coded IP addresses, are parameterized and stored in an
OOM-specific set of configuration files.

More information about ONAP configuration can be found in the
Configuration Management section.

References
==========

- Docker - http://docker.com

- Kubernetes - http://kubernetes.io

- Helm - https://helm.sh