.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2020 ONAP

.. _docs_vFW_CNF_CDS:

.. contents::
   :depth: 4
..

vFirewall CNF Use Case
----------------------

Source files
~~~~~~~~~~~~
- Heat/Helm/CDS models: `vFW_CNF_CDS Model`_

Description
~~~~~~~~~~~
This use case is a combination of the `vFW CDS Dublin`_ and `vFW EDGEX K8S`_ use cases. The aim is to continue improving the support for Kubernetes based Network Functions (a.k.a. CNFs) in ONAP. The use case continues where `vFW EDGEX K8S`_ left off and brings CDS into the picture, like `vFW CDS Dublin`_ did for the old vFW use case. The predecessor use case is also documented in `vFW EDGEX K8S In ONAP Wiki`_.

At a high level this use case brings only one, yet important, improvement: the ability to instantiate more than a single CNF instance of the same type (with the same Helm package).

The following improvements were made:

- Changed the vFW Kubernetes Helm charts to support overrides (previously mostly hardcoded values)
- Combined all models (Heat, Helm, CBA) into the same git repo and created a single CSAR package `vFW_CNF_CDS Model`_
- Compared to the `vFW EDGEX K8S`_ use case, the **MACRO** workflow in SO is used instead of the VNF workflow (this is a general requirement to utilize CDS as part of the flow)
- CDS is used to resolve instantiation time parameters (Helm overrides)

  - IP addresses with IPAM
  - Unique names for resources with the ONAP naming service

- The Multicloud/k8s plugin was changed to support identifiers of the vf-module concept
- CDS is used to create the **multicloud/k8s profile** as part of the instantiation flow (previously a manual step)

The use case does not contain the Closed Loop part of the vFW demo.

The vFW CNF Use Case
~~~~~~~~~~~~~~~~~~~~
The vFW CNF CDS use case shows how to instantiate multiple CNF instances in a similar way as VNFs, bringing CNFs closer to first class citizens in ONAP.

One of the biggest practical changes compared to the old demos (any ONAP demo) is that the whole network function content (user provided content) is collected in one place and, more importantly, in a git repository (`vFW_CNF_CDS Model`_) that provides version control (which is pretty important). That is a very basic thing, but unfortunately having to hunt for the content across many different git repos, with some files only in the ONAP wiki, has been a common problem when running any ONAP demo.

The demo git directory also includes the `Data Dictionary`_ file (a CDS model time resource).

Another founding idea from the start was to provide the complete content in a single CSAR available directly from that git repository. Not a revolutionary idea, as that's the official package format ONAP supports, and all content for a single service is supposed to be in that same package, regardless of the models, closed loops, configurations etc.

The following table describes all the source models on which this demo is based.

=============== ================= ===========
Model           Git reference     Description
--------------- ----------------- -----------
Heat            `vFW_NextGen`_    Heat templates used in the original vFW demo but split into multiple vf-modules
Helm            `vFW_Helm Model`_ Helm templates used in the `vFW EDGEX K8S`_ demo
CDS model       `vFW CBA Model`_  CDS CBA model used in the `vFW CDS Dublin`_ demo
=============== ================= ===========

All changes to related ONAP components during this use case can be found in this `Jira Epic`_ ticket.

Modeling CSAR/Helm
..................

The starting point for this demo was a Helm package containing one Kubernetes application, see `vFW_Helm Model`_. In this demo we decided to follow the SDC/SO vf-module concept in the same way as the original vFW demo, which was split into multiple vf-modules instead of one (`vFW_NextGen`_). In the same way we split the Helm version of vFW into multiple Helm packages, each matching one vf-module.

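As a minimal sketch of the splitting described above (directory and chart names here are hypothetical placeholders, not the real vFW content), each per-vf-module chart directory can be packaged into its own tarball:

```shell
# Illustrative only: package each per-vf-module Helm chart directory into its
# own .tgz, mirroring the *_cloudtech_k8s_charts.tgz artifacts used in this demo.
# The chart layout below is a hypothetical placeholder.
set -e
cd "$(mktemp -d)"
for module in base_template vfw vpkg vsn; do
  mkdir -p "helm/$module"
  printf 'apiVersion: v1\nname: %s\nversion: 1.0.0\n' "$module" > "helm/$module/Chart.yaml"
  tar -czf "${module}_cloudtech_k8s_charts.tgz" -C helm "$module"
done
ls -1 ./*_cloudtech_k8s_charts.tgz
```

Each resulting `.tgz` would then be referenced from the CSAR MANIFEST as a CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACTS entry.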
The produced CSAR package has the following MANIFEST file (csar/MANIFEST.json), with all Helm packages modeled as dummy Heat resources matching the vf-module concept (which originated from Heat), so basically each Helm application is visible to ONAP as its own vf-module. The actual Helm package is delivered as a CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACTS package through SDC and SO.

The CDS model (CBA package) is delivered as the SDC supported type CONTROLLER_BLUEPRINT_ARCHIVE.

::

  {
    "name": "virtualFirewall",
    "description": "",
    "data": [
      {
        "file": "vFW_CDS_CNF.zip",
        "type": "CONTROLLER_BLUEPRINT_ARCHIVE"
      },
      {
        "file": "base_template.yaml",
        "type": "HEAT",
        "isBase": "true",
        "data": [
          {
            "file": "base_template.env",
            "type": "HEAT_ENV"
          }
        ]
      },
      {
        "file": "base_template_cloudtech_k8s_charts.tgz",
        "type": "CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACTS"
      },
      {
        "file": "vfw.yaml",
        "type": "HEAT",
        "isBase": "false",
        "data": [
          {
            "file": "vfw.env",
            "type": "HEAT_ENV"
          }
        ]
      },
      {
        "file": "vfw_cloudtech_k8s_charts.tgz",
        "type": "CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACTS"
      },
      {
        "file": "vpkg.yaml",
        "type": "HEAT",
        "isBase": "false",
        "data": [
          {
            "file": "vpkg.env",
            "type": "HEAT_ENV"
          }
        ]
      },
      {
        "file": "vpkg_cloudtech_k8s_charts.tgz",
        "type": "CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACTS"
      },
      {
        "file": "vsn.yaml",
        "type": "HEAT",
        "isBase": "false",
        "data": [
          {
            "file": "vsn.env",
            "type": "HEAT_ENV"
          }
        ]
      },
      {
        "file": "vsn_cloudtech_k8s_charts.tgz",
        "type": "CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACTS"
      }
    ]
  }

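The artifact list referenced by a MANIFEST.json like the one above can be pulled out with standard shell tools; a small sketch (using a trimmed, hypothetical manifest written to a temp directory):

```shell
# Illustrative only: extract the "file" entries from a MANIFEST.json with
# grep/sed. The here-doc below is a trimmed, hypothetical example manifest.
set -e
cd "$(mktemp -d)"
cat > MANIFEST.json <<'EOF'
{
  "name": "virtualFirewall",
  "data": [
    { "file": "vFW_CDS_CNF.zip", "type": "CONTROLLER_BLUEPRINT_ARCHIVE" },
    { "file": "vfw.yaml", "type": "HEAT" },
    { "file": "vfw_cloudtech_k8s_charts.tgz", "type": "CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACTS" }
  ]
}
EOF
# List every artifact file name referenced by the manifest
grep -o '"file": *"[^"]*"' MANIFEST.json | sed 's/.*: *"\(.*\)"/\1/'
```

A quick listing like this helps verify that every file named in the manifest is actually present in the csar/ directory before packaging.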
Multicloud/k8s
..............

The K8s plugin was changed to support a new way to identify the k8s application and the related multicloud/k8s profile.

Changes done:

- SDC distribution broker

  **TODO: content here**

- K8S plugin APIs changed to use VF Module Model Identifiers

  Previously the K8S plugin used user given values to identify the created/modified object. Names were based on the VF-Module's "model-name"/"model-version", like "VfwLetsHopeLastOne..vfw..module-3" and "1". The SO request has user_directives from which the values were taken.

  **VF Module Model Invariant ID** and **VF Module Model Version ID** are now used to identify the artifact in the SO request to the Multicloud/k8s plugin. This does not require the user to give extra parameters in the SO request, as the vf-module related parameters are there already by default. `MULTICLOUD-941`_
  Note that the API endpoints are not changed, only the semantics.

  *Examples:*

  Definition

  ::

    /api/multicloud-k8s/v1/v1/rb/definition/{VF Module Model Invariant ID}/{VF Module Model Version ID}/content

  Profile creation API

  ::

    curl -i -d @create_rbprofile.json -X POST http://${K8S_NODE_IP}:30280/api/multicloud-k8s/v1/v1/rb/definition/{VF Module Model Invariant ID}/{VF Module Model Version ID}/profile
    { "rb-name": "{VF Module Model Invariant ID}",
      "rb-version": "{VF Module Model Version ID}",
      "profile-name": "p1",
      "release-name": "r1",
      "namespace": "testns1",
      "kubernetes-version": "1.13.5"
    }

  Upload Profile content API

  ::

    curl -i --data-binary @profile.tar.gz -X POST http://${K8S_NODE_IP}:30280/api/multicloud-k8s/v1/v1/rb/definition/{VF Module Model Invariant ID}/{VF Module Model Version ID}/profile/p1/content

- Default override support was added to the plugin

  **TODO: Some content here, maybe also picture**

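The API usage above can be sketched end to end: the definition and profile URLs are assembled from the two model identifiers, and the profile content is packaged as a gzipped tarball before the upload call. This is a sketch only; the UUIDs and profile file names below are hypothetical placeholders, not values from the real vFW package.

```shell
# Illustrative only: build the multicloud/k8s endpoint paths from the VF Module
# model identifiers and package a profile tarball for the upload call.
# UUIDs and file names are hypothetical placeholders.
set -e
cd "$(mktemp -d)"
invariant_id="366c007e-7684-4a0b-a2f4-9815174bec55"
version_id="603eadfe-50d6-413a-853c-46f5a8e2ddc7"
base="/api/multicloud-k8s/v1/v1/rb/definition/${invariant_id}/${version_id}"
echo "${base}/content"            # definition content endpoint
echo "${base}/profile/p1/content" # profile content upload endpoint

# Package profile content for the upload call (layout is a placeholder)
mkdir -p profile
printf 'version: v1\n' > profile/manifest.yaml
printf 'replicaCount: 1\n' > profile/override_values.yaml
tar -czf profile.tar.gz -C profile manifest.yaml override_values.yaml
tar -tzf profile.tar.gz
# The tarball would then be sent with something like:
#   curl -i --data-binary @profile.tar.gz -X POST "http://${K8S_NODE_IP}:30280${base}/profile/p1/content"
```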
CDS Model (CBA)
...............

Creating the CDS model was the core of the use case work and also the most difficult and time consuming part. There are many reasons for this, e.g.:

- CDS documentation (even for a new component) is inadequate or non-existent for the service modeler user. One would need to be a CDS developer to be able to do something with it.
- The CDS documentation that does exist is non-versioned (in the ONAP wiki when it should be in git), so it's mostly impossible to know which features belong to which release.
- Our limited experience with CDS (we are not CDS developers)

At first the target was to keep the CDS model as close as possible to the original `vFW CBA Model`_ and only add the smallest possible changes to enable k8s usage as well. That is still the target, but in practice the model has already deviated from the original one, and time pressure pushed us not to care about keeping them in sync. Basically, the end result could be much more streamlined if it were wanted to be the smallest possible model working only for K8S based network functions.

As the K8S application was split into multiple Helm packages to match the vf-modules, the CBA modeling follows the same approach and each vf-module has its own template in the CBA package.

::

  "artifacts" : {
    "base_template-template" : {
      "type" : "artifact-template-velocity",
      "file" : "Templates/base_template-template.vtl"
    },
    "base_template-mapping" : {
      "type" : "artifact-mapping-resource",
      "file" : "Templates/base_template-mapping.json"
    },
    "vpkg-template" : {
      "type" : "artifact-template-velocity",
      "file" : "Templates/vpkg-template.vtl"
    },
    "vpkg-mapping" : {
      "type" : "artifact-mapping-resource",
      "file" : "Templates/vpkg-mapping.json"
    },
    "vfw-template" : {
      "type" : "artifact-template-velocity",
      "file" : "Templates/vfw-template.vtl"
    },
    "vfw-mapping" : {
      "type" : "artifact-mapping-resource",
      "file" : "Templates/vfw-mapping.json"
    },
    "vnf-template" : {
      "type" : "artifact-template-velocity",
      "file" : "Templates/vnf-template.vtl"
    },
    "vnf-mapping" : {
      "type" : "artifact-mapping-resource",
      "file" : "Templates/vnf-mapping.json"
    },
    "vsn-template" : {
      "type" : "artifact-template-velocity",
      "file" : "Templates/vsn-template.vtl"
    },
    "vsn-mapping" : {
      "type" : "artifact-mapping-resource",
      "file" : "Templates/vsn-mapping.json"
    }
  }

Only the **resource-assignment** workflow of the CBA model is utilized in this demo. If the final CBA model also contains a **config-deploy** workflow, it's there just to keep parity with the original vFW CBA (for VMs). The same applies to the related template *Templates/nf-params-template.vtl* and its mapping file.

The interesting part of the CBA model is the **profile-upload** sub step of the imperative workflow, where a Kotlin script is used to upload the K8S profile into the multicloud/k8s API.

::

  "profile-upload" : {
    "type" : "component-script-executor",
    "interfaces" : {
      "ComponentScriptExecutor" : {
        "operations" : {
          "process" : {
            "inputs" : {
              "script-type" : "kotlin",
              "script-class-reference" : "org.onap.ccsdk.cds.blueprintsprocessor.services.execution.scripts.K8sProfileUpload",
              "dynamic-properties" : "*profile-upload-properties"
            }
          }
        }
      }
    }
  }

The Kotlin script expects that a K8S profile package named "k8s-rb-profile-name".tar.gz is present in the CBA "Templates/k8s-profiles" directory, where "k8s-rb-profile-name" is one of the CDS resolved parameters (the user provides it as an input parameter).

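As a small sketch of the naming convention described above (the profile name value is a hypothetical example, and the touched file is just a stand-in for a real package):

```shell
# Illustrative only: derive the profile package path the Kotlin script expects
# inside the CBA from the CDS-resolved k8s-rb-profile-name parameter.
set -e
cd "$(mktemp -d)"
k8s_rb_profile_name="vfw-cnf-cds-base-profile"   # hypothetical example value
profile_path="Templates/k8s-profiles/${k8s_rb_profile_name}.tar.gz"
mkdir -p "$(dirname "$profile_path")"
touch "$profile_path"                            # stand-in for the real package
echo "$profile_path"
```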
**TODO: something about the content and structure of profile package**

As the `Data Dictionary`_ is also included in the demo git directory, re-modeling and making changes to the model utilizing CDS model time / runtime is easier, as the used DD is also known.

UAT
+++

During testing of the use case, a **uat.yml** file was recorded according to the `CDS UAT Testing`_ instructions. The generated uat.yml is stored within the CBA package in the **Tests** folder.

The recorded uat.yml is an example run with example values (the values we used when the demo was run) and can be used later to test the CBA model in isolation (unit test style). This is very useful when changes are made to the CBA model and those changes need to be tested fast. With the uat.yml file only CDS is needed, as all external interfaces are mocked. However, note that mocking is possible for REST interfaces only (e.g. Netconf is not supported).

Another benefit of uat.yml is that it documents the runtime functionality of the CBA.

To verify the CBA with uat.yml and the CDS runtime, do the following:

- Enable UAT testing for the CDS runtime

  ::

    kubectl -n onap edit deployment onap-cds-cds-blueprints-processor

    # add env variable for the cds-blueprints-processor container:
    name: spring_profiles_active
    value: uat

- Spy CBA functionality with the UAT initial seed file

  ::

    curl -X POST -u ccsdkapps:ccsdkapps -F cba=@my_cba.zip -F uat=@input_uat.yaml http://<kube-node>:30499/api/v1/uat/spy

where my_cba.zip is the original CBA model and input_uat.yaml is the following in this use case:

::

  %YAML 1.1
  ---
  processes:
    - name: resource-assignment for vnf
      request:
        commonHeader: &commonHeader
          originatorId: SDNC_DG
          requestId: "98397f54-fa57-485f-a04e-1e220b7b1779"
          subRequestId: "6bfca5dc-993d-48f1-ad27-a7a9ea91836b"
        actionIdentifiers: &actionIdentifiers
          blueprintName: vFW_CNF_CDS
          blueprintVersion: "1.0.7"
          actionName: resource-assignment
          mode: sync
        payload:
          resource-assignment-request:
            template-prefix:
              - "vnf"
            resource-assignment-properties:
              service-instance-id: &service-id "0362acff-38e7-4ecc-8ac0-4780161f3ca0"
              vnf-model-customization-uuid: &vnf-model-cust-uuid "366c007e-7684-4a0b-a2f4-9815174bec55"
              vnf-id: &vnf-id "6bfca5dc-993d-48f1-ad27-a7a9ea91836b"
              aic-cloud-region: &cloud-region "k8sregionfour"
    - name: resource-assignment for base_template
      request:
        commonHeader: *commonHeader
        actionIdentifiers: *actionIdentifiers
        payload:
          resource-assignment-request:
            template-prefix:
              - "base_template"
            resource-assignment-properties:
              nfc-naming-code: "base_template"
              k8s-rb-profile-name: &k8s-profile-name "vfw-cnf-cds-base-profile"
              service-instance-id: *service-id
              vnf-id: *vnf-id
              vf-module-model-customization-uuid: "603eadfe-50d6-413a-853c-46f5a8e2ddc7"
              vnf-model-customization-uuid: *vnf-model-cust-uuid
              vf-module-id: "34c190c7-e5bc-4e61-a0d9-5fd44416dd96"
              aic-cloud-region: *cloud-region
    - name: resource-assignment for vpkg
      request:
        commonHeader: *commonHeader
        actionIdentifiers: *actionIdentifiers
        payload:
          resource-assignment-request:
            template-prefix:
              - "vpkg"
            resource-assignment-properties:
              nfc-naming-code: "vpkg"
              k8s-rb-profile-name: *k8s-profile-name
              service-instance-id: *service-id
              vnf-id: *vnf-id
              vf-module-model-customization-uuid: "32ffad03-d38d-46d5-b4a6-a3b0b6112ffc"
              vnf-model-customization-uuid: *vnf-model-cust-uuid
              vf-module-id: "0b3c70f3-a462-4340-b08f-e39f6baa364e"
              aic-cloud-region: *cloud-region
    - name: resource-assignment for vsn
      request:
        commonHeader: *commonHeader
        actionIdentifiers: *actionIdentifiers
        payload:
          resource-assignment-request:
            template-prefix:
              - "vsn"
            resource-assignment-properties:
              nfc-naming-code: "vsn"
              k8s-rb-profile-name: *k8s-profile-name
              service-instance-id: *service-id
              vnf-id: *vnf-id
              vf-module-model-customization-uuid: "f75c3628-12e9-4c70-be98-d347045a3f70"
              vnf-model-customization-uuid: *vnf-model-cust-uuid
              vf-module-id: "960c9189-4a68-49bc-8bef-88e621fef250"
              aic-cloud-region: *cloud-region
    - name: resource-assignment for vfw
      request:
        commonHeader: *commonHeader
        actionIdentifiers: *actionIdentifiers
        payload:
          resource-assignment-request:
            template-prefix:
              - "vfw"
            resource-assignment-properties:
              nfc-naming-code: "vfw"
              k8s-rb-profile-name: *k8s-profile-name
              service-instance-id: *service-id
              vnf-id: *vnf-id
              vf-module-model-customization-uuid: "f9afd9bb-7796-4aff-8f53-681513115742"
              vnf-model-customization-uuid: *vnf-model-cust-uuid
              vf-module-id: "1ff35d90-623b-450e-abb2-10a515249fbe"
              aic-cloud-region: *cloud-region


**NOTE:** This call runs all the requests (given in input_uat.yaml) towards CDS and records the functionality, so a working environment (SDNC, AAI, Naming, Netbox, etc.) is needed to record a valid final uat.yml.
As an output of this call the final uat.yml content is received. The final uat.yml in this use case looks like this:

::

  TODO: the content.

Currently UAT is broken in master, see `CCSDK-2155`_.

- Verify the CBA with UAT

  ::

    curl -X POST -u ccsdkapps:ccsdkapps -F cba=@my_cba.zip http://<kube-node>:30499/api/v1/uat/verify

where my_cba.zip is the CBA model with uat.yml (generated in the spy step) inside the Tests folder.

**TODO: add UAT POST to postman**

Instantiation Overview
......................

The figure below shows all the interactions that take place during vFW CNF instantiation. It does not describe the flow of actions (ordered steps) but rather the component dependencies.

.. figure:: files/vFW_CNF_CDS/Instantiation_topology.png
   :align: center

   vFW CNF CDS Use Case Runtime interactions.

PART 1 - ONAP Installation
--------------------------
1-1 Deployment components
~~~~~~~~~~~~~~~~~~~~~~~~~

In order to run the vFW_CNF_CDS use case, we need ONAP Frankfurt Release (or later) and at least the following components:

======================================================= ===========
ONAP Component name                                     Description
------------------------------------------------------- -----------
AAI                                                     Required for Inventory Cloud Owner, Customer, Owning Entity, Service, Generic VNF, VF Module
SDC                                                     VSP, VF and Service Modeling of the CNF
DMAAP                                                   Distribution of the CSAR including CBA to all ONAP components
SO                                                      Required for Macro Orchestration using the generic building blocks
CDS                                                     Resolution of cloud parameters including Helm override parameters for the CNF. Creation of the multicloud/k8s profile for CNF instantiation.
SDNC (needs to include netbox and Naming Generation mS) Provides GENERIC-RESOURCE-API for cloud instantiation orchestration via CDS.
Policy                                                  Used to store the Naming Policy
AAF                                                     Used for Authentication and Authorization of requests
Portal                                                  Required to access SDC.
MSB                                                     Exposes the multicloud interfaces used by SO.
Multicloud                                              K8S plugin part used to pass SO instantiation requests to the external Kubernetes cloud region.
Robot                                                   Optional. Can be used for running automated tasks, like provisioning cloud customer, cloud region, service subscription, etc.
Shared Cassandra DB                                     Used as shared storage for ONAP components that rely on Cassandra DB, like AAI
Shared Maria DB                                         Used as shared storage for ONAP components that rely on Maria DB, like SDNC and SO
======================================================= ===========

1-2 Deployment
~~~~~~~~~~~~~~

In order to deploy such an instance, follow the `ONAP Deployment Guide`_.

As we can see from the guide, we can use an override file that helps us customize our ONAP deployment without modifying the OOM folder. The override file below includes the necessary components mentioned above.

In the **override.yaml** file, enabled: true is set for each component needed in the demo (by default all components are disabled).

::

  aai:
    enabled: true
  aaf:
    enabled: true
  cassandra:
    enabled: true
  cds:
    enabled: true
  contrib:
    enabled: true
  dmaap:
    enabled: true
  mariadb-galera:
    enabled: true
  msb:
    enabled: true
  multicloud:
    enabled: true
  policy:
    enabled: true
  portal:
    enabled: true
  robot:
    enabled: true
  sdc:
    enabled: true
  sdnc:
    enabled: true
  so:
    enabled: true

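A quick, illustrative sanity check that the wanted components are actually enabled in an override file before deploying (the file is written to a temp directory here purely as an example):

```shell
# Illustrative only: verify that selected components have "enabled: true"
# in an override file. A trimmed example file is created in a temp dir.
set -e
cd "$(mktemp -d)"
cat > override.yaml <<'EOF'
cds:
  enabled: true
so:
  enabled: true
EOF
for comp in cds so; do
  # look at the line right after the component key and expect "enabled: true"
  grep -A1 "^${comp}:" override.yaml | grep -q 'enabled: true' \
    && echo "$comp enabled"
done
```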
Then deploy ONAP with Helm using your override file.

::

  helm deploy onap local/onap --namespace onap -f ~/override.yaml

In case redeployment is needed, `Helm Healer`_ could be a faster and more convenient way to redeploy.

::

  helm-healer.sh -n onap -f ~/override.yaml -s /dockerdata-nfs --delete-all

Or redeploy (a clean re-deploy, data is also removed) just the wanted components (Helm releases), cds in this example.

::

  helm-healer.sh -f ~/override.yaml -s /dockerdata-nfs/ -n onap -c onap-cds

There are many instructions in the ONAP wiki on how to follow your deployment status and whether it succeeded or not, mostly using Robot health checks. One way we used is to skip the outermost Robot wrapper and use ete-k8s.sh directly, to be able to easily select the checked components. The script is found in the OOM git repository at *oom/kubernetes/robot/ete-k8s.sh*.

::

  for comp in {aaf,aai,dmaap,msb,multicloud,policy,portal,sdc,sdnc,so}; do
    if ! ./ete-k8s.sh onap health-$comp; then
      failed=$failed,$comp
    fi
  done
  if [ -n "$failed" ]; then
    echo "These components failed: $failed"
    false
  else
    echo "Healthcheck successful"
  fi

And check the status of pods, deployments, jobs etc.

::

  kubectl -n onap get pods | grep -vie 'completed' -e 'running'
  kubectl -n onap get deploy,sts,jobs


1-3 Post Deployment
~~~~~~~~~~~~~~~~~~~

After completing the first part above, we should have a functional ONAP deployment for the Frankfurt Release.

We will need to apply a few modifications to the deployed ONAP Frankfurt instance in order to run the use case.

Postman collection setup
........................

In this demo we have on purpose created all manual ONAP preparation steps (which in real life are automated) using Postman, so it will be clear what exactly is needed. Some of the steps, like AAI population, are automated by Robot scripts in other ONAP demos (**./demo-k8s.sh onap init**), and the Robot script could be used for many parts of this demo as well. Later, when this demo is fully automated, we will probably update the Robot scripts to support this demo too.

The Postman collection is also used to trigger instantiation using the SO APIs.

The following steps are needed to set up Postman:

- Import this Postman collection zip

  :download: `postman.zip`_

- Extract the zip and import the 2 Postman collection and environment files into Postman

  - `vFW_CNF_CDS.postman_collection.json`
  - `vFW_CNF_CDS.postman_environment.json`

- For use case debugging purposes, to get Kubernetes cluster external access to the SO CatalogDB (GET operations only), modify the SO CatalogDB service to NodePort instead of ClusterIP. You may also create your own separate NodePort if you wish, but here we have just edited the service directly with kubectl. Note that port number 30120 is used in the Postman collection.

  ::

    kubectl -n onap edit svc so-catalog-db-adapter
    - .spec.type: ClusterIP
    + .spec.type: NodePort
    + .spec.ports[0].nodePort: 30120

**Postman variables:**

Most of the Postman variables are automated by the Postman scripts and the environment file provided, but there are a few mandatory variables that the user must fill in.

=================== ===================
Variable            Description
------------------- -------------------
k8s                 ONAP Kubernetes host
sdnc_port           port of the sdnc service for accessing MDSAL
cds-service-name    name of the service as defined in SDC
cds-instance-name   name of the instantiated service (if ending with -{num}, will be autoincremented for each instantiation request)
=================== ===================

You can get the sdnc_port value with:

::

  kubectl -n onap get svc sdnc -o json | jq '.spec.ports[]|select(.port==8282).nodePort'


**TODO: change variable names to something else than cds-xxx**


AAI
...

Some basic entries are needed in ONAP AAI. These entries are needed once per ONAP installation and do not need to be repeated when running multiple demos based on the same definitions.

Create all these entries in AAI in this order. The Postman collection provided in this demo can be used for creating each entry.

**Postman -> Robot Init Stuff**

- Create Customer
- Create Owning-entity
- Create Platform
- Create Project
- Create Line Of Business

The corresponding GET operations in Postman can be used to verify the created entries. The Postman collection also includes some code that tests/verifies basic issues, e.g. it gives an error if an entry already exists.

SO Cloud region configuration
.............................

The SO database needs to be (manually) modified for SO to know that this particular cloud region is to be handled by multicloud. The values we insert obviously need to match the ones we populated into AAI.

The related code part in SO is here: `SO Cloud Region Selection`_.
A possible improvement in SO would be to get this information directly from AAI instead.

::

  kubectl -n onap exec onap-mariadb-galera-mariadb-galera-0 -it -- mysql -uroot -psecretpassword -D catalogdb
  select * from cloud_sites;
  insert into cloud_sites(ID, REGION_ID, IDENTITY_SERVICE_ID, CLOUD_VERSION, CLLI, ORCHESTRATOR) values("k8sregionfour", "k8sregionfour", "DEFAULT_KEYSTONE", "2.5", "clli2", "multicloud");
  select * from cloud_sites;
  exit

SO BPMN endpoint fix for VNF adapter requests (v1 -> v2)
........................................................

The SO Openstack adapter needs to be updated to use the newer version. This is also a possible improvement area in SO. The Openstack adapter is confusing in the context of this use case, as the VIM is not Openstack but a Kubernetes cloud region. In this use case we did not use Openstack at all.

::

  kubectl -n onap edit configmap onap-so-so-bpmn-infra-app-configmap
  - .data."override.yaml".mso.adapters.vnf.rest.endpoint: http://so-openstack-adapter.onap:8087/services/rest/v1/vnfs
  + .data."override.yaml".mso.adapters.vnf.rest.endpoint: http://so-openstack-adapter.onap:8087/services/rest/v2/vnfs
  kubectl -n onap delete pod -l app=so-bpmn-infra

Naming Policy
.............

A naming policy is needed to generate unique names for all instance time resources that are to be named through the naming policy. Those are normally VNF, VNFC and VF-module names, network names etc. Naming is a general ONAP feature and not limited to this use case.

The override.yaml file above has an option **"preload=true"** that tells the POLICY component to run the push_policies.sh script as the POLICY PAP pod starts up, which in turn creates the Naming Policy and pushes it.

To check that the naming policy is created and pushed OK, we can run the commands below.

::

  # go inside a POD, e.g. pap here
  kubectl -n onap exec -it $(kubectl -n onap get pods -l app=pap --no-headers | cut -d" " -f1) bash

  bash-4.4$ curl -k --silent -X POST \
  --header 'Content-Type: application/json' \
  --header 'ClientAuth: cHl0aG9uOnRlc3Q=' \
  --header 'Environment: TEST' \
  -d '{ "policyName": "SDNC_Policy.Config_MS_ONAP_VNF_NAMING_TIMESTAMP.1.xml"}' \
  'https://pdp:8081/pdp/api/getConfig'

  [{"policyConfigMessage":"Config Retrieved! ","policyConfigStatus":"CONFIG_RETRIEVED",
  "type":"JSON",
  "config":"{\"service\":\"SDNC-GenerateName\",\"version\":\"CSIT\",\"content\":{\"policy-instance-name\":\"ONAP_VNF_NAMING_TIMESTAMP\",\"naming-models\":[{\"naming-properties\":[{\"property-name\":\"AIC_CLOUD_REGION\"},{\"property-name\":\"CONSTANT\",\"property-value\":\"ONAP-NF\"},{\"property-name\":\"TIMESTAMP\"},{\"property-value\":\"_\",\"property-name\":\"DELIMITER\"}],\"naming-type\":\"VNF\",\"naming-recipe\":\"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP\"},{\"naming-properties\":[{\"property-name\":\"VNF_NAME\"},{\"property-name\":\"SEQUENCE\",\"increment-sequence\":{\"max\":\"zzz\",\"scope\":\"ENTIRETY\",\"start-value\":\"001\",\"length\":\"3\",\"increment\":\"1\",\"sequence-type\":\"alpha-numeric\"}},{\"property-name\":\"NFC_NAMING_CODE\"},{\"property-value\":\"_\",\"property-name\":\"DELIMITER\"}],\"naming-type\":\"VNFC\",\"naming-recipe\":\"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE\"},{\"naming-properties\":[{\"property-name\":\"VNF_NAME\"},{\"property-value\":\"_\",\"property-name\":\"DELIMITER\"},{\"property-name\":\"VF_MODULE_LABEL\"},{\"property-name\":\"VF_MODULE_TYPE\"},{\"property-name\":\"SEQUENCE\",\"increment-sequence\":{\"max\":\"zzz\",\"scope\":\"PRECEEDING\",\"start-value\":\"01\",\"length\":\"3\",\"increment\":\"1\",\"sequence-type\":\"alpha-numeric\"}}],\"naming-type\":\"VF-MODULE\",\"naming-recipe\":\"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE\"}]}}",
  "policyName":"SDNC_Policy.Config_MS_ONAP_VNF_NAMING_TIMESTAMP.1.xml",
  "policyType":"MicroService",
  "policyVersion":"1",
  "matchingConditions":{"ECOMPName":"SDNC","ONAPName":"SDNC","service":"SDNC-GenerateName"},
  "responseAttributes":{},
  "property":null}]

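The VNF naming recipe in the policy above (AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP) can be illustrated with a small sketch. Note this is only an interpretation of the recipe for illustration, not code from the naming service itself; the exact timestamp format used by the service may differ.

```shell
# Illustrative only: compose a VNF name following the recipe
# AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP from the policy above.
aic_cloud_region="k8sregionfour"   # cloud region used in this demo
constant="ONAP-NF"                 # CONSTANT property-value from the policy
delimiter="_"                      # DELIMITER property-value from the policy
timestamp=$(date -u +%Y%m%dT%H%M%S)
vnf_name="${aic_cloud_region}${delimiter}${constant}${delimiter}${timestamp}"
echo "$vnf_name"
```

This yields names of the form k8sregionfour_ONAP-NF_&lt;timestamp&gt;, which matches the region/constant/timestamp structure the recipe describes.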
In case the policy is missing, we can manually create and push the SDNC Naming policy.

::

  # go inside a POD, e.g. pap here
  kubectl -n onap exec -it $(kubectl -n onap get pods -l app=pap --no-headers | cut -d" " -f1) bash

  curl -k -v --silent -X PUT --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{
    "configBody": "{ \"service\": \"SDNC-GenerateName\", \"version\": \"CSIT\", \"content\": { \"policy-instance-name\": \"ONAP_VNF_NAMING_TIMESTAMP\", \"naming-models\": [ { \"naming-properties\": [ { \"property-name\": \"AIC_CLOUD_REGION\" }, { \"property-name\": \"CONSTANT\",\"property-value\": \"ONAP-NF\"}, { \"property-name\": \"TIMESTAMP\" }, { \"property-value\": \"_\", \"property-name\": \"DELIMITER\" } ], \"naming-type\": \"VNF\", \"naming-recipe\": \"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP\" }, { \"naming-properties\": [ { \"property-name\": \"VNF_NAME\" }, { \"property-name\": \"SEQUENCE\", \"increment-sequence\": { \"max\": \"zzz\", \"scope\": \"ENTIRETY\", \"start-value\": \"001\", \"length\": \"3\", \"increment\": \"1\", \"sequence-type\": \"alpha-numeric\" } }, { \"property-name\": \"NFC_NAMING_CODE\" }, { \"property-value\": \"_\", \"property-name\": \"DELIMITER\" } ], \"naming-type\": \"VNFC\", \"naming-recipe\": \"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE\" }, { \"naming-properties\": [ { \"property-name\": \"VNF_NAME\" }, { \"property-value\": \"_\", \"property-name\": \"DELIMITER\" }, { \"property-name\": \"VF_MODULE_LABEL\" }, { \"property-name\": \"VF_MODULE_TYPE\" }, { \"property-name\": \"SEQUENCE\", \"increment-sequence\": { \"max\": \"zzz\", \"scope\": \"PRECEEDING\", \"start-value\": \"01\", \"length\": \"3\", \"increment\": \"1\", \"sequence-type\": \"alpha-numeric\" } } ], \"naming-type\": \"VF-MODULE\", \"naming-recipe\": \"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE\" } ] } }",
    "policyName": "SDNC_Policy.ONAP_VNF_NAMING_TIMESTAMP",
    "policyConfigType": "MicroService",
    "onapName": "SDNC",
    "riskLevel": "4",
    "riskType": "test",
    "guard": "false",
    "priority": "4",
    "description": "ONAP_VNF_NAMING_TIMESTAMP"
  }' 'https://pdp:8081/pdp/api/createPolicy'

  curl -k -v --silent -X PUT --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{
    "pdpGroup": "default",
    "policyName": "SDNC_Policy.ONAP_VNF_NAMING_TIMESTAMP",
    "policyType": "MicroService"
  }' 'https://pdp:8081/pdp/api/pushPolicy'


Network Naming mS
+++++++++++++++++

There's a strange feature or bug in the naming service, still present in ONAP Frankfurt, and the following hack needs to be done to make it work.

::

  # Go into the naming service database pod
  kubectl -n onap exec -it $(kubectl -n onap get pods --no-headers | grep onap-sdnc-nengdb-0 | cut -d" " -f1) bash

  # Delete entries from the EXTERNAL_INTERFACE table
  mysql -unenguser -pnenguser123 nengdb -e 'delete from EXTERNAL_INTERFACE;'


PART 2 - Installation of managed Kubernetes cluster
---------------------------------------------------

In this demo the target cloud region is a Kubernetes cluster of your choice, basically just as it would be an Openstack cloud. The ONAP platform is still rather hard-wired to Openstack, and that is visible in many demos.

2-1 Installation of Managed Kubernetes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this demo we use the Kubernetes deployment used by the ONAP multicloud/k8s team to test their plugin features, see `KUD readthedocs`_. There are also some outdated instructions in the ONAP wiki, see `KUD in Wiki`_.

The KUD deployment is fully automated and is also used in ONAP's CI/CD to automatically verify all `Multicloud k8s gerrit`_ commits (see `KUD Jenkins ci/cd verification`_), which is a quite good (and rare) level of automated integration testing in ONAP. The KUD deployment is used here because its installation is automated and it includes a bunch of Kubernetes plugins used to test various k8s plugin features. In addition to the deployment, the KUD repository also contains test scripts to automatically test multicloud/k8s plugin features. Those scripts are run in CI/CD.

See `KUD subproject in github`_ for a list of additional plugins this Kubernetes deployment has. In this demo the tested CNF depends on the following plugins:

- ovn4nfv
- Multus
- Virtlet

Follow the instructions in `KUD readthedocs`_ and install the target Kubernetes cluster on your favorite machine(s), the simplest setup being just one machine. Your cluster node(s) need to be accessible from the ONAP Kubernetes nodes.

2-2 Cloud Registration
~~~~~~~~~~~~~~~~~~~~~~

The managed Kubernetes cluster is registered here into ONAP as one cloud region. This is obviously done only once for this particular cloud. Cloud registration information is kept in AAI.

The Postman collection has a folder/entry for each step. Execute them in this order.

**Postman -> AAI -> Create**

- Create Complex
- Create Cloud Region
- Create Complex-Cloud Region Relationship
- Create Service
- Create Service Subscription
- Create Cloud Tenant
- Create Availability Zone

**Postman -> Multicloud**

- Upload Connectivity Info **TODO: where to get kubeconfig file?**

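The Upload Connectivity Info request carries two parts: a small metadata JSON identifying the cloud region, and the kubeconfig file of the managed cluster. A minimal sketch of how that request could be prepared; the region/owner names and the MSB NodePort address are assumptions for illustration and must match the values used in the AAI steps above, and the kubeconfig is assumed to be the admin config of the KUD cluster (typically ``~/.kube/config`` on the KUD host):

```python
import json
from pathlib import Path

# Assumed example values -- these must match the AAI "Create Cloud Region" step.
cloud_owner = "k8scloudowner"
cloud_region = "k8sregion"
kubeconfig = Path.home() / ".kube" / "config"  # admin config of the KUD cluster

# Metadata part of the multipart upload.
metadata = json.dumps({"cloud-region": cloud_region, "cloud-owner": cloud_owner})

# Hypothetical upload call against the multicloud-k8s API through the MSB
# NodePort; printed for review instead of executed here. MSB_NODE_IP would
# be the IP of any ONAP Kubernetes node.
msb_node_ip = "<ONAP-k8s-node-ip>"
print(
    f"curl -i -F 'metadata={metadata};type=application/json' "
    f"-F 'file=@{kubeconfig}' "
    f"'http://{msb_node_ip}:30280/api/multicloud-k8s/v1/v1/connectivity-info'"
)
```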

PART 3 - Execution of the Use Case
----------------------------------

This part contains all the steps to run the use case using the ONAP GUIs and Postman.

The following picture describes the overall sequential flow of the use case.

.. figure:: files/vFW_CNF_CDS/vFW_CNF_CDS_Flow.png
   :align: center

   vFW CNF CDS Use Case sequence flow.

3-1 Onboarding
~~~~~~~~~~~~~~

Creating CSAR
.............

The whole content of this use case is stored in a single git repository, and the ONAP-onboardable CSAR package can be created with the provided Makefile.

The complete content can be packaged into a single CSAR file in the following way
(note: requires Helm installed):

::

  git clone https://gerrit.onap.org/r/demo
  cd heat/vFW_CNF_CDS/templates
  make

The output looks like::

  mkdir csar/
  make -C helm
  make[1]: Entering directory '/home/samuli/onapCode/demo/heat/vFW_CNF_CDS/templates/helm'
  rm -f base_template-*.tgz
  rm -f base_template_cloudtech_k8s_charts.tgz
  helm package base_template
  Successfully packaged chart and saved it to: /home/samuli/onapCode/demo/heat/vFW_CNF_CDS/templates/helm/base_template-0.2.0.tgz
  mv base_template-*.tgz base_template_cloudtech_k8s_charts.tgz
  rm -f vpkg-*.tgz
  rm -f vpkg_cloudtech_k8s_charts.tgz
  helm package vpkg
  Successfully packaged chart and saved it to: /home/samuli/onapCode/demo/heat/vFW_CNF_CDS/templates/helm/vpkg-0.2.0.tgz
  mv vpkg-*.tgz vpkg_cloudtech_k8s_charts.tgz
  rm -f vfw-*.tgz
  rm -f vfw_cloudtech_k8s_charts.tgz
  helm package vfw
  Successfully packaged chart and saved it to: /home/samuli/onapCode/demo/heat/vFW_CNF_CDS/templates/helm/vfw-0.2.0.tgz
  mv vfw-*.tgz vfw_cloudtech_k8s_charts.tgz
  rm -f vsn-*.tgz
  rm -f vsn_cloudtech_k8s_charts.tgz
  helm package vsn
  Successfully packaged chart and saved it to: /home/samuli/onapCode/demo/heat/vFW_CNF_CDS/templates/helm/vsn-0.2.0.tgz
  mv vsn-*.tgz vsn_cloudtech_k8s_charts.tgz
  make[1]: Leaving directory '/home/samuli/onapCode/demo/heat/vFW_CNF_CDS/templates/helm'
  mv helm/*.tgz csar/
  cp base/* csar/
  cd cba/ && zip -r vFW_CDS_CNF.zip .
    adding: TOSCA-Metadata/ (stored 0%)
    adding: TOSCA-Metadata/TOSCA.meta (deflated 38%)
    adding: Templates/ (stored 0%)
    adding: Templates/base_template-mapping.json (deflated 92%)
    adding: Templates/vfw-template.vtl (deflated 87%)
    adding: Templates/nf-params-mapping.json (deflated 86%)
    adding: Templates/vsn-mapping.json (deflated 94%)
    adding: Templates/vnf-template.vtl (deflated 90%)
    adding: Templates/vpkg-mapping.json (deflated 94%)
    adding: Templates/vsn-template.vtl (deflated 87%)
    adding: Templates/nf-params-template.vtl (deflated 44%)
    adding: Templates/base_template-template.vtl (deflated 85%)
    adding: Templates/vfw-mapping.json (deflated 94%)
    adding: Templates/vnf-mapping.json (deflated 92%)
    adding: Templates/vpkg-template.vtl (deflated 86%)
    adding: Templates/k8s-profiles/ (stored 0%)
    adding: Templates/k8s-profiles/vfw-cnf-cds-base-profile.tar.gz (stored 0%)
    adding: Scripts/ (stored 0%)
    adding: Scripts/kotlin/ (stored 0%)
    adding: Scripts/kotlin/KotlinK8sProfileUpload.kt (deflated 75%)
    adding: Scripts/kotlin/README.md (stored 0%)
    adding: Definitions/ (stored 0%)
    adding: Definitions/artifact_types.json (deflated 57%)
    adding: Definitions/vFW_CNF_CDS.json (deflated 81%)
    adding: Definitions/node_types.json (deflated 86%)
    adding: Definitions/policy_types.json (stored 0%)
    adding: Definitions/data_types.json (deflated 93%)
    adding: Definitions/resources_definition_types.json (deflated 95%)
    adding: Definitions/relationship_types.json (stored 0%)
  mv cba/vFW_CDS_CNF.zip csar/
  #Can't use .csar extension or SDC will panic
  cd csar/ && zip -r vfw_k8s_demo.zip .
    adding: base_template_cloudtech_k8s_charts.tgz (stored 0%)
    adding: MANIFEST.json (deflated 83%)
    adding: base_template.yaml (deflated 63%)
    adding: vsn_cloudtech_k8s_charts.tgz (stored 0%)
    adding: vfw_cloudtech_k8s_charts.tgz (stored 0%)
    adding: vpkg_cloudtech_k8s_charts.tgz (stored 0%)
    adding: vsn.yaml (deflated 75%)
    adding: vpkg.yaml (deflated 76%)
    adding: vfw.yaml (deflated 77%)
    adding: vFW_CDS_CNF.zip (stored 0%)
    adding: base_template.env (deflated 23%)
    adding: vsn.env (deflated 53%)
    adding: vpkg.env (deflated 55%)
    adding: vfw.env (deflated 58%)
  mv csar/vfw_k8s_demo.zip .
  $

and the package file **vfw_k8s_demo.zip** is created, containing all sub-models.

Import this package into SDC and follow the onboarding steps.

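Before the SDC import, the generated package can be sanity-checked locally. A minimal sketch (not part of the repository's Makefile) that lists a zip archive's entries, demonstrated on an in-memory archive holding a few of the file names expected in the real package:

```python
import io
import zipfile

def package_contents(zip_source):
    """Return the sorted list of file names inside a zip package."""
    with zipfile.ZipFile(zip_source) as z:
        return sorted(z.namelist())

# Demonstrated on an in-memory zip; on a real checkout you would call
# package_contents("vfw_k8s_demo.zip") after running make.
sample = io.BytesIO()
with zipfile.ZipFile(sample, "w") as z:
    for name in ("MANIFEST.json", "base_template.yaml", "vFW_CDS_CNF.zip"):
        z.writestr(name, "")

print(package_contents(sample))
# → ['MANIFEST.json', 'base_template.yaml', 'vFW_CDS_CNF.zip']
```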
Service Creation with SDC
.........................

Create VSP, VLM, VF, ..., Service in SDC
  - Remember during VSP onboarding to choose the "Network Package" onboarding procedure

**TODO: make better steps**

At the VF level, add the CBA separately, as it is not onboarded correctly from the CSAR by default.

Service -> Properties Assignment -> Choose VF (at the right box):
  - skip_post_instantiation_configuration - True
  - sdnc_artifact_name - vnf
  - sdnc_model_name - vFW_CNF_CDS
  - sdnc_model_version - 1.0.0

Distribution Of Service
.......................

Distribute the service. **TODO: add screenshot of distribution in SDC UI**

Verify the distribution for:

- SDC:

  The SDC catalog database should now have our service defined.

  **Postman -> SDC/SO -> SDC Catalog Service**

  ::

    {
      "uuid": "40f4cca8-1025-4f2e-8435-dda898f0caab",
      "invariantUUID": "b0ecfa3b-4394-4727-be20-c2c718002093",
      "name": "TestvFWService",
      "version": "3.0",
      "toscaModelURL": "/sdc/v1/catalog/services/40f4cca8-1025-4f2e-8435-dda898f0caab/toscaModel",
      "category": "Mobility",
      "lifecycleState": "CERTIFIED",
      "lastUpdaterUserId": "jm0007",
      "distributionStatus": "DISTRIBUTED"
    }

  The listing should contain an entry with our service name **TestvFWService**. **TODO: Let's use a service name different from other demos**

- SO:

  The SO catalog database should now have our service's NFs defined.

  **Postman -> SDC/SO -> SO Catalog DB Service xNFs**

  ::

    {
      "serviceVnfs": [
        {
          "modelInfo": {
            "modelName": "FixedVFW",
            "modelUuid": "a6c43cc8-677d-447d-afc2-795212182dc0",
            "modelInvariantUuid": "074555e3-21b9-47ba-9ad9-78028029a36d",
            "modelVersion": "1.0",
            "modelCustomizationUuid": "366c007e-7684-4a0b-a2f4-9815174bec55",
            "modelInstanceName": "FixedVFW 0"
          },
          "toscaNodeType": "org.openecomp.resource.vf.Fixedvfw",
          "nfFunction": null,
          "nfType": null,
          "nfRole": null,
          "nfNamingCode": null,
          "multiStageDesign": "false",
          "vnfcInstGroupOrder": null,
          "resourceInput": "{\"vf_module_id\":\"vFirewallCL\",\"skip_post_instantiation_configuration\":\"true\",\"vsn_flavor_name\":\"PUT THE VM FLAVOR NAME HERE (m1.medium suggested)\",\"vfw_int_private2_ip_0\":\"192.168.20.100\",\"int_private1_subnet_id\":\"zdfw1fwl01_unprotected_sub\",\"public_net_id\":\"PUT THE PUBLIC NETWORK ID HERE\",\"vnf_name\":\"vFW_NextGen\",\"onap_private_subnet_id\":\"PUT THE ONAP PRIVATE NETWORK NAME HERE\",\"vsn_int_private2_ip_0\":\"192.168.20.250\",\"sec_group\":\"PUT THE ONAP SECURITY GROUP HERE\",\"vfw_name_0\":\"zdfw1fwl01fwl01\",\"nexus_artifact_repo\":\"https://nexus.onap.org\",\"onap_private_net_cidr\":\"10.0.0.0/16\",\"vpg_onap_private_ip_0\":\"10.0.100.2\",\"dcae_collector_ip\":\"10.0.4.1\",\"vsn_image_name\":\"PUT THE VM IMAGE NAME HERE (UBUNTU 1404)\",\"vnf_id\":\"vSink_demo_app\",\"vpg_flavor_name\":\"PUT THE VM FLAVOR NAME HERE (m1.medium suggested)\",\"dcae_collector_port\":\"30235\",\"vfw_int_private2_floating_ip\":\"192.168.10.200\",\"vpg_name_0\":\"zdfw1fwl01pgn01\",\"int_private2_subnet_id\":\"zdfw1fwl01_protected_sub\",\"int_private2_net_cidr\":\"192.168.20.0/24\",\"nf_naming\":\"true\",\"vsn_name_0\":\"zdfw1fwl01snk01\",\"multi_stage_design\":\"false\",\"vpg_image_name\":\"PUT THE VM IMAGE NAME HERE (UBUNTU 1404)\",\"onap_private_net_id\":\"PUT THE ONAP PRIVATE NETWORK NAME HERE\",\"availability_zone_max_count\":\"1\",\"sdnc_artifact_name\":\"vnf\",\"vsn_onap_private_ip_0\":\"10.0.100.3\",\"vfw_flavor_name\":\"PUT THE VM FLAVOR NAME HERE (m1.medium suggested)\",\"demo_artifacts_version\":\"1.6.0-SNAPSHOT\",\"pub_key\":\"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQXYJYYi3/OUZXUiCYWdtc7K0m5C0dJKVxPG0eI8EWZrEHYdfYe6WoTSDJCww+1qlBSpA5ac/Ba4Wn9vh+lR1vtUKkyIC/nrYb90ReUd385Glkgzrfh5HdR5y5S2cL/Frh86lAn9r6b3iWTJD8wBwXFyoe1S2nMTOIuG4RPNvfmyCTYVh8XTCCE8HPvh3xv2r4egawG1P4Q4UDwk+hDBXThY2KS8M5/8EMyxHV0ImpLbpYCTBA6KYDIRtqmgS6iKyy8v2D1aSY5mc9J0T5t9S2Gv+VZQNWQDDKNFnxqYaAo1uEoq/i1q63XC5AD3ckXb2VT6dp23BQMdDfbHyUWfJN\",\"key_name\":\"vfw_key\",\"vfw_int_private1_ip_0\":\"192.168.10.100\",\"sdnc_model_version\":\"1.0.0\",\"int_private1_net_cidr\":\"192.168.10.0/24\",\"install_script_version\":\"1.6.0-SNAPSHOT\",\"vfw_image_name\":\"PUT THE VM IMAGE NAME HERE (UBUNTU 1404)\",\"vfw_onap_private_ip_0\":\"10.0.100.1\",\"vpg_int_private1_ip_0\":\"192.168.10.200\",\"int_private2_net_id\":\"zdfw1fwl01_protected\",\"cloud_env\":\"PUT openstack OR rackspace HERE\",\"sdnc_model_name\":\"vFW_CNF_CDS\",\"int_private1_net_id\":\"zdfw1fwl01_unprotected\"}",
          "vfModules": [
            {
              "modelInfo": {
                "modelName": "Fixedvfw..base_template..module-0",
                "modelUuid": "8bb9fa50-3e82-4664-bd1c-a29267be726a",
                "modelInvariantUuid": "750b39d0-7f99-4b7f-9a22-c15c7348221d",
                "modelVersion": "1",
                "modelCustomizationUuid": "603eadfe-50d6-413a-853c-46f5a8e2ddc7"
              },
              "isBase": true,
              "vfModuleLabel": "base_template",
              "initialCount": 1,
              "hasVolumeGroup": false
            },
            {
              "modelInfo": {
                "modelName": "Fixedvfw..vsn..module-1",
                "modelUuid": "027696a5-a605-44ea-9362-391a6b217de0",
                "modelInvariantUuid": "2e3b182d-7ee3-4a8d-9c2b-056188b6eb53",
                "modelVersion": "1",
                "modelCustomizationUuid": "f75c3628-12e9-4c70-be98-d347045a3f70"
              },
              "isBase": false,
              "vfModuleLabel": "vsn",
              "initialCount": 0,
              "hasVolumeGroup": false
            },
            {
              "modelInfo": {
                "modelName": "Fixedvfw..vpkg..module-2",
                "modelUuid": "64af8ad0-cb81-42a2-a069-7d246d8bff5d",
                "modelInvariantUuid": "5c9f3097-26ba-41fb-928b-f7ddc31f6f52",
                "modelVersion": "1",
                "modelCustomizationUuid": "32ffad03-d38d-46d5-b4a6-a3b0b6112ffc"
              },
              "isBase": false,
              "vfModuleLabel": "vpkg",
              "initialCount": 0,
              "hasVolumeGroup": false
            },
            {
              "modelInfo": {
                "modelName": "Fixedvfw..vfw..module-3",
                "modelUuid": "55d889e4-ff38-4ed0-a159-60392c968042",
                "modelInvariantUuid": "5c6a06e9-1168-4b01-bd2a-38d544c6d131",
                "modelVersion": "1",
                "modelCustomizationUuid": "f9afd9bb-7796-4aff-8f53-681513115742"
              },
              "isBase": false,
              "vfModuleLabel": "vfw",
              "initialCount": 0,
              "hasVolumeGroup": false
            }
          ],
          "groups": []
        }
      ]
    }

- SDNC:

  SDNC should have its database updated with the sdnc_* properties that were set during service modeling.

  **TODO: verify below where the customization_uuid is obtained from**

  ::

    kubectl -n onap exec onap-mariadb-galera-mariadb-galera-0 -it -- sh
    mysql -uroot -psecretpassword -D sdnctl
    MariaDB [sdnctl]> select sdnc_model_name, sdnc_model_version, sdnc_artifact_name from VF_MODEL WHERE customization_uuid = '88e0e9a7-5bd2-4689-ae9e-7fc167d685a2';
    +-----------------+--------------------+--------------------+
    | sdnc_model_name | sdnc_model_version | sdnc_artifact_name |
    +-----------------+--------------------+--------------------+
    | vFW_CNF_CDS     | 1.0.0              | vnf                |
    +-----------------+--------------------+--------------------+
    1 row in set (0.00 sec)

    # Where customization_uuid is the modelCustomizationUuid of the VNF (serviceVnfs response in the 2nd Postman call from the SO Catalog DB)

- CDS:

  CDS should have onboarded the CBA that was uploaded as part of the VF.

  **Postman -> CDS -> CDS Blueprint List CBAs**

  ::

    {
      "blueprintModel": {
        "id": "761bbe69-8357-454b-9f37-46d9da8ecad6",
        "artifactUUId": null,
        "artifactType": "SDNC_MODEL",
        "artifactVersion": "1.0.0",
        "artifactDescription": "Controller Blueprint for vFW_CNF_CDS:1.0.0",
        "internalVersion": null,
        "createdDate": "2020-02-21T12:57:43.000Z",
        "artifactName": "vFW_CNF_CDS",
        "published": "Y",
        "updatedBy": "Samuli Silvius <s.silvius@partner.samsung.com>",
        "tags": "Samuli Silvius, vFW_CNF_CDS"
      }
    }

  The list should have entries matching the SDNC database:

  - sdnc_model_name == artifactName
  - sdnc_model_version == artifactVersion

- K8splugin:

  K8splugin should have onboarded 4 resource bundle definitions for the Helm charts:

  **Postman -> Multicloud -> List Resource Bundle Definitions**

  ::

    [
      {
        "rb-name": "750b39d0-7f99-4b7f-9a22-c15c7348221d",
        "rb-version": "8bb9fa50-3e82-4664-bd1c-a29267be726a",
        "chart-name": "base_template",
        "description": "",
        "labels": {
          "vnf_customization_uuid": "603eadfe-50d6-413a-853c-46f5a8e2ddc7"
        }
      },
      {
        "rb-name": "2e3b182d-7ee3-4a8d-9c2b-056188b6eb53",
        "rb-version": "027696a5-a605-44ea-9362-391a6b217de0",
        "chart-name": "vsn",
        "description": "",
        "labels": {
          "vnf_customization_uuid": "f75c3628-12e9-4c70-be98-d347045a3f70"
        }
      },
      {
        "rb-name": "5c9f3097-26ba-41fb-928b-f7ddc31f6f52",
        "rb-version": "64af8ad0-cb81-42a2-a069-7d246d8bff5d",
        "chart-name": "vpkg",
        "description": "",
        "labels": {
          "vnf_customization_uuid": "32ffad03-d38d-46d5-b4a6-a3b0b6112ffc"
        }
      },
      {
        "rb-name": "5c6a06e9-1168-4b01-bd2a-38d544c6d131",
        "rb-version": "55d889e4-ff38-4ed0-a159-60392c968042",
        "chart-name": "vfw",
        "description": "",
        "labels": {
          "vnf_customization_uuid": "f9afd9bb-7796-4aff-8f53-681513115742"
        }
      }
    ]

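The identifiers in the listings above line up in a way worth checking: each resource bundle's ``rb-name``/``rb-version`` pair equals the vf-module's ``modelInvariantUuid``/``modelUuid``, and its ``vnf_customization_uuid`` label equals the vf-module's ``modelCustomizationUuid``. A minimal sketch (payload trimmed down from the SO Catalog DB response shown earlier) that extracts the per-vf-module customization UUIDs from the serviceVnfs response:

```python
import json

# Trimmed-down serviceVnfs payload; field names and UUIDs are taken from
# the SO Catalog DB response shown earlier in this section.
so_response = json.loads("""
{
  "serviceVnfs": [
    {
      "modelInfo": {"modelName": "FixedVFW"},
      "vfModules": [
        {"modelInfo": {"modelCustomizationUuid": "603eadfe-50d6-413a-853c-46f5a8e2ddc7"},
         "vfModuleLabel": "base_template"},
        {"modelInfo": {"modelCustomizationUuid": "f75c3628-12e9-4c70-be98-d347045a3f70"},
         "vfModuleLabel": "vsn"}
      ]
    }
  ]
}
""")

def vf_module_customization_uuids(service_vnfs: dict) -> dict:
    """Map vfModuleLabel -> modelCustomizationUuid for every vf-module."""
    return {
        module["vfModuleLabel"]: module["modelInfo"]["modelCustomizationUuid"]
        for vnf in service_vnfs["serviceVnfs"]
        for module in vnf["vfModules"]
    }

print(vf_module_customization_uuids(so_response))
```

Each value in the result should reappear as a ``vnf_customization_uuid`` label in the k8splugin listing above.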
3-2 CNF Instantiation
~~~~~~~~~~~~~~~~~~~~~

This is the heart of the use case: we can instantiate any number of instances of the same CNF, each running and working completely on its own. This is very basic functionality on the VM (VNF) side, but for Kubernetes and ONAP integration it is the first milestone towards the other normal use cases familiar from VNFs.

Use Postman again to trigger instantiation from the SO interface. The Postman collection is automated to populate the needed parameters when the queries are run in the correct order. If you did not already run the following 2 queries after distribution (to verify it), run them now:

- **Postman -> SDC/SO -> SDC Catalog Service**
- **Postman -> SDC/SO -> SO Catalog DB Service xNFs**

Now the actual instantiation can be triggered with:

**Postman -> SDC/SO -> SO Self-Serve Service Assign & Activate**

Follow the progress with SO's GET request:

**Postman -> SDC/SO -> SO Infra Active Requests**

The successful reply payload in that query should start like this:

::

  {
    "clientRequestId": null,
    "action": "createInstance",
    "requestStatus": "COMPLETED",
    "statusMessage": "Failed to create self-serve assignment for vf-module with vf-module-id=b70112fd-f6b2-44fe-a55c-6928d61843bf with error: Encountered error from self-serve-generate-name with error: Error from NameGenerationNode Assign",
    "rollbackStatusMessage": null,
    "flowStatus": "Execution of UnassignVfModuleBB has completed successfully, next invoking UnassignVfModuleBB (Execution Path progress: BBs completed = 1; BBs remaining = 4).",
    "retryStatusMessage": null,
    ...

**TODO: fix COMPLETED payload**

Progress can also be followed with `SO Monitoring`_.

Second Instance Instantiation
.............................

To finally verify all the work done within this demo, it should be possible to instantiate a second vFW instance successfully.

Trigger again:

**Postman -> SDC/SO -> SO Self-Serve Service Assign & Activate**

**TODO: update to second call in Postman**

3-3 Results and Logs
~~~~~~~~~~~~~~~~~~~~

Now multiple instances of the Kubernetes version of vFW are running in the target VIM (KUD deployment).

.. figure:: files/vFW_CNF_CDS/vFW_Instance_In_Kubernetes.png
   :align: center

   vFW Instance In Kubernetes

To review the situation after instantiation from the different ONAP components, most of the information can be found using the provided Postman queries. For each query, example response payload(s) are saved and can be found in the top right corner of the Postman window.

Execute the following Postman queries and check the example section to see valid results.

======================== ============================================
Verify Target            Postman query
------------------------ --------------------------------------------
Service Instances in AAI **Postman -> AAI -> List Service Instances**
Generic VNFs in AAI      **Postman -> AAI -> List VNF Instances**
K8S Instances in KUD     **Postman -> Multicloud -> List Instances**
======================== ============================================

Query also directly from the VIM:

**TODO: label filters needed here. Namespace?**

::

  #
  ubuntu@kud-host:~$ kubectl get pods,svc,networks,cm,network-attachment-definition,deployments
  NAME  READY  STATUS  RESTARTS  AGE
  pod/vfw-17f6f7d3-8424-4550-a188-cd777f0ab48f-7cfb9949d9-8b5vg  0/1  Pending  0  22s
  pod/vfw-19571429-4af4-49b3-af65-2eb1f97bba43-75cd7c6f76-4gqtz  1/1  Running  0  11m
  pod/vpg-5ea0d3b0-9a0c-4e88-a2e2-ceb84810259e-f4485d485-pln8m   1/1  Running  0  11m
  pod/vpg-8581bc79-8eef-487e-8ed1-a18c0d638b26-6f8cff54d-dvw4j   1/1  Running  0  32s
  pod/vsn-8e7ac4fc-2c31-4cf8-90c8-5074c5891c14-5879c56fd-q59l7   2/2  Running  0  11m
  pod/vsn-fdc9b4ba-c0e9-4efc-8009-f9414ae7dd7b-5889b7455-96j9d   2/2  Running  0  30s

  NAME  TYPE  CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
  service/kubernetes  ClusterIP  10.244.0.1  <none>  443/TCP  48d
  service/vpg-5ea0d3b0-9a0c-4e88-a2e2-ceb84810259e-management-api  NodePort  10.244.43.245  <none>  2831:30831/TCP  11m
  service/vpg-8581bc79-8eef-487e-8ed1-a18c0d638b26-management-api  NodePort  10.244.1.45    <none>  2831:31831/TCP  33s
  service/vsn-8e7ac4fc-2c31-4cf8-90c8-5074c5891c14-darkstat-ui     NodePort  10.244.16.187  <none>  667:30667/TCP   11m
  service/vsn-fdc9b4ba-c0e9-4efc-8009-f9414ae7dd7b-darkstat-ui     NodePort  10.244.20.229  <none>  667:31667/TCP   30s

  NAME  AGE
  network.k8s.plugin.opnfv.org/55118b80-8470-4c99-bfdf-d122cd412739-management-network   40s
  network.k8s.plugin.opnfv.org/55118b80-8470-4c99-bfdf-d122cd412739-protected-network    40s
  network.k8s.plugin.opnfv.org/55118b80-8470-4c99-bfdf-d122cd412739-unprotected-network  40s
  network.k8s.plugin.opnfv.org/567cecc3-9692-449e-877a-ff0b560736be-management-network   11m
  network.k8s.plugin.opnfv.org/567cecc3-9692-449e-877a-ff0b560736be-protected-network    11m
  network.k8s.plugin.opnfv.org/567cecc3-9692-449e-877a-ff0b560736be-unprotected-network  11m

  NAME  DATA  AGE
  configmap/vfw-17f6f7d3-8424-4550-a188-cd777f0ab48f-configmap  6  22s
  configmap/vfw-19571429-4af4-49b3-af65-2eb1f97bba43-configmap  6  11m
  configmap/vpg-5ea0d3b0-9a0c-4e88-a2e2-ceb84810259e-configmap  6  11m
  configmap/vpg-8581bc79-8eef-487e-8ed1-a18c0d638b26-configmap  6  33s
  configmap/vsn-8e7ac4fc-2c31-4cf8-90c8-5074c5891c14-configmap  2  11m
  configmap/vsn-fdc9b4ba-c0e9-4efc-8009-f9414ae7dd7b-configmap  2  30s

  NAME  AGE
  networkattachmentdefinition.k8s.cni.cncf.io/55118b80-8470-4c99-bfdf-d122cd412739-ovn-nat  40s
  networkattachmentdefinition.k8s.cni.cncf.io/567cecc3-9692-449e-877a-ff0b560736be-ovn-nat  11m

  NAME  READY  UP-TO-DATE  AVAILABLE  AGE
  deployment.extensions/vfw-17f6f7d3-8424-4550-a188-cd777f0ab48f  0/1  1  0  22s
  deployment.extensions/vfw-19571429-4af4-49b3-af65-2eb1f97bba43  1/1  1  1  11m
  deployment.extensions/vpg-5ea0d3b0-9a0c-4e88-a2e2-ceb84810259e  1/1  1  1  11m
  deployment.extensions/vpg-8581bc79-8eef-487e-8ed1-a18c0d638b26  1/1  1  1  33s
  deployment.extensions/vsn-8e7ac4fc-2c31-4cf8-90c8-5074c5891c14  1/1  1  1  11m
  deployment.extensions/vsn-fdc9b4ba-c0e9-4efc-8009-f9414ae7dd7b  1/1  1  1  30s


Component Logs From The Execution
.................................

All logs from the use case execution can be downloaded here: `logs.zip`_

- `so-bpmn-infra_so-bpmn-infra_debug.log`
- SO openstack adapter
- `sdnc_sdnc_karaf.log`

  From karaf.log, all requests (payloads) to CDS can be found by searching for the following string:

  ``'Sending request below to url http://cds-blueprints-processor-http:8080/api/v1/execution-service/process'``

- `cds-blueprints-processor_cds-blueprints-processor_POD_LOG.log`
- `multicloud-k8s_multicloud-k8s_POD_LOG.log`
- network naming
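
The karaf.log search described above can be sketched as follows; the log excerpt here is hypothetical, only the marker string comes from the real log:

```python
# Hypothetical excerpt of sdnc karaf.log content; in the real log, the
# marker string below precedes each request payload sent to CDS.
log_excerpt = """\
2020-02-21T12:57:43 INFO Sending request below to url http://cds-blueprints-processor-http:8080/api/v1/execution-service/process
{"actionIdentifiers": {"blueprintName": "vFW_CNF_CDS"}}
2020-02-21T12:58:01 INFO some unrelated line
"""

marker = ("Sending request below to url "
          "http://cds-blueprints-processor-http:8080/api/v1/execution-service/process")

# Collect the lines that mark a CDS request, equivalent to grepping the log.
cds_requests = [line for line in log_excerpt.splitlines() if marker in line]
print(len(cds_requests))
```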

Debug log
+++++++++

In case more detailed logging is needed, here are instructions on how to set up DEBUG logging for a few components.

- SDNC

  ::

    kubectl -n onap exec -it onap-sdnc-sdnc-0 -c sdnc /opt/opendaylight/bin/client log:set DEBUG

- CDS Blueprint Processor

  ::

    # Edit the configmap
    kubectl -n onap edit configmap onap-cds-cds-blueprints-processor-configmap

    # In the logback.xml content, change the root logger level from info to debug
    <root level="debug">
      <appender-ref ref="STDOUT"/>
    </root>

    # Delete the pod to make the changes effective
    kubectl -n onap delete pod $(kubectl -n onap get pod -l app=cds-blueprints-processor --no-headers | cut -d" " -f1)

PART 4 - Summary and Future improvements needed
-----------------------------------------------

This use case made CNF onboarding and instantiation a little easier and brought it closer to the "normal" VNF way. CDS resource resolution capabilities were also taken into use (compared to earlier demos), together with SO's **MACRO** workflow.

The vFW CNF application was divided into multiple Helm charts to comply with the vf-module structure of a Heat based VNF.

Future development areas for this use case, and for CNF support in general, could be:

- Automate the manual initialization steps into Robot init. Now everything was done with Postman or manual steps on the command line.
- Automate the use case in ONAP daily CI.
- Include the Closed Loop part of the vFW demo.
- Use multicloud/k8s API v2. Also consider the future of the profile concept.
- Sync the CDS model with the `vFW_CNF_CDS Model`_ use case, i.e. try to keep only a single model regardless of whether the xNF is Openstack or Kubernetes based.
- TOSCA based service and xNF models instead of the dummy Heat wrapper. This won't work directly with the current vf-module oriented SO workflows.
- vFW service with an Openstack VNF and a Kubernetes CNF.


Multiple lower level bugs/issues were also found during use case development:

- Distribution of the Helm package directly from the CSAR package `SDC-2776`_

.. _ONAP Deployment Guide: https://docs.onap.org/en/frankfurt/submodules/oom.git/docs/oom_quickstart_guide.html#quick-start-label
.. _vFW_CNF_CDS Model: https://git.onap.org/demo/tree/heat/vFW_CNF_CDS?h=frankfurt
.. _vFW CDS Dublin: https://wiki.onap.org/display/DW/vFW+CDS+Dublin
.. _vFW CBA Model: https://git.onap.org/ccsdk/cds/tree/components/model-catalog/blueprint-model/service-blueprint/vFW?h=frankfurt
.. _vFW_Helm Model: https://git.onap.org/multicloud/k8s/tree/kud/demo/firewall?h=elalto
.. _vFW_NextGen: https://git.onap.org/demo/tree/heat/vFW_NextGen?h=elalto
.. _vFW EDGEX K8S: https://onap.readthedocs.io/en/elalto/submodules/integration.git/docs/docs_vfw_edgex_k8s.html
.. _vFW EDGEX K8S In ONAP Wiki: https://wiki.onap.org/display/DW/Deploying+vFw+and+EdgeXFoundry+Services+on+Kubernets+Cluster+with+ONAP
.. _KUD readthedocs: https://docs.onap.org/en/frankfurt/submodules/multicloud/k8s.git/docs
.. _KUD in Wiki: https://wiki.onap.org/display/DW/Kubernetes+Baremetal+deployment+setup+instructions
.. _Multicloud k8s gerrit: https://gerrit.onap.org/r/#/q/status:open+project:+multicloud/k8s
.. _KUD subproject in github: https://github.com/onap/multicloud-k8s/tree/master/kud
.. _KUD Jenkins ci/cd verification: https://jenkins.onap.org/job/multicloud-k8s-master-kud-deployment-verify-shell/
.. _SO Cloud Region Selection: https://git.onap.org/so/tree/adapters/mso-openstack-adapters/src/main/java/org/onap/so/adapters/vnf/MsoVnfPluginAdapterImpl.java?h=elalto#n1149
.. _SO Monitoring: http://so-monitoring:30224
.. _Jira Epic: https://jira.onap.org/browse/INT-1184
.. _Data Dictionary: https://git.onap.org/demo/tree/heat/vFW_CNF_CDS/templates/cba-dd.json?h=frankfurt
.. _Helm Healer: https://git.onap.org/oom/offline-installer/tree/tools/helm-healer.sh
.. _CDS UAT Testing: https://wiki.onap.org/display/DW/Modeling+Concepts#Concepts-2603186
.. _postman.zip: files/vFW_CNF_CDS/postman.zip
.. _logs.zip: files/vFW_CNF_CDS/logs.zip
.. _SDC-2776: https://jira.onap.org/browse/SDC-2776
.. _MULTICLOUD-941: https://jira.onap.org/browse/MULTICLOUD-941
.. _CCSDK-2155: https://jira.onap.org/browse/CCSDK-2155