.. This work is a derivative of https://wiki.onap.org/display/DW/PNF+Simulator+Day-N+config-assign+and+config-deploy+use+case
.. This work is licensed under a Creative Commons Attribution 4.0
.. International License. http://creativecommons.org/licenses/by/4.0
.. Copyright (C) 2020 Deutsche Telekom AG.

PNF Simulator Day-N config-assign/deploy
========================================


Overview
~~~~~~~~

This use case shows in a simple way how a blueprint model of a PNF is created in CDS and how the day-N configuration is
assigned and deployed through CDS. A Netconf server (docker image `sysrepo/sysrepo-netopeer2`) is used to simulate the PNF.

This use case (POC) solely requires a running CDS and the PNF simulator running on a VM (Ubuntu is used by the author).
No other ONAP module is needed.

There are different ways to run CDS, to run the PNF simulator and to do the configuration deployment. This guide shows
several possible options to allow the greatest possible flexibility.

Run CDS (Blueprint Processor)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CDS can be run in Kubernetes (Minikube, Microk8s) or in an IDE. You can choose your favorite option.
Only the blueprint processor of CDS is needed. If you have desktop access it is recommended to run CDS in an IDE, since
this is easy and enables debugging.

* CDS in Microk8s: https://wiki.onap.org/display/DW/Running+CDS+on+Microk8s (RDT link to be added)
* CDS in Minikube: https://wiki.onap.org/display/DW/Running+CDS+in+minikube (RDT link to be added)
* CDS in an IDE: https://docs.onap.org/projects/onap-ccsdk-cds/en/latest/userguide/running-bp-processor-in-ide.html

After CDS is running, note the port of the blueprint processor; you will need it later on.
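
Before continuing, you can confirm that the blueprint processor answers on its port. The snippet below is only a sketch: the host and port values are assumptions that depend on your installation (localhost and port 8081 for the IDE setup, port 8000 for Kubernetes, as described in this guide), and the commented probe is likewise an assumption.

```shell
# Assumed values; adjust to your CDS installation (IDE: 8081, k8s: 8000).
CDS_HOST=localhost
CDS_PORT=8081
CDS_URL="http://${CDS_HOST}:${CDS_PORT}"
echo "Blueprint processor base URL: ${CDS_URL}"
# A simple reachability probe could then be:
# curl -sf "${CDS_URL}" >/dev/null && echo "blueprint processor reachable"
```

Keep this base URL at hand; the scripts used later in this guide target the same host and port.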

Run PNF Simulator and install module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are many different ways to run a Netconf server to simulate the PNF; in this guide the `sysrepo/sysrepo-netopeer2`
docker image is used. The easiest way is to run the out-of-the-box docker container without any
other configuration, modules or scripts. In the ONAP community there are other existing workflows for running the
PNF simulator, also based on the `sysrepo/sysrepo-netopeer2` docker image. These workflows are linked
here as well, but they have not been tested by the author of this guide.

.. tabs::

   .. tab:: sysrepo/sysrepo-netopeer2 (latest)

      .. warning::
         Currently there is an issue with the SSH connection between CDS and the netconf server because of mismatching
         key exchange algorithms. Use the legacy version until the issue is resolved.

      Download and run the docker container with ``docker run -d --name netopeer2 -p 830:830 -p 6513:6513 sysrepo/sysrepo-netopeer2:latest``

      Enter the container with ``docker exec -it netopeer2 bin/bash``

      Browse to the target location where all YANG modules exist: ``cd /etc/sysrepo/yang``

      Create a simple mock YANG model for a packet generator (pg.yang).

      .. code-block:: sh
         :caption: **pg.yang**

         module sample-plugin {

            yang-version 1;
            namespace "urn:opendaylight:params:xml:ns:yang:sample-plugin";
            prefix "sample-plugin";

            description
               "This YANG module defines the generic configuration and
               operational data for sample-plugin in VPP";

            revision "2016-09-18" {
               description "Initial revision of sample-plugin model";
            }

            container sample-plugin {

               uses sample-plugin-params;
               description "Configuration data of sample-plugin in Honeycomb";

               // READ
               // curl -u admin:admin http://localhost:8181/restconf/config/sample-plugin:sample-plugin

               // WRITE
               // curl http://localhost:8181/restconf/operational/sample-plugin:sample-plugin

            }

            grouping sample-plugin-params {
               container pg-streams {
                  list pg-stream {

                     key id;
                     leaf id {
                        type string;
                     }

                     leaf is-enabled {
                        type boolean;
                     }
                  }
               }
            }
         }

      Create the following sample XML data definition for the above model (pg-data.xml).
      Later on this will initialise one single PG stream.

      .. code-block:: sh
         :caption: **pg-data.xml**

         <sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
            <pg-streams>
               <pg-stream>
                  <id>1</id>
                  <is-enabled>true</is-enabled>
               </pg-stream>
            </pg-streams>
         </sample-plugin>

      Execute the following command within the netopeer docker container to install the pg.yang model:

      .. code-block:: sh

         sysrepoctl -v3 -i pg.yang

      .. note::
         This command will just schedule the installation; it will be applied once the server is restarted.

      Stop the container from outside with ``docker stop netopeer2`` and start it again with ``docker start netopeer2``

      Enter the container again as mentioned above with ``docker exec -it netopeer2 bin/bash``.

      You can check all installed modules with ``sysrepoctl -l``. The `sample-plugin` module should appear with the ``I`` flag.

      Execute the following commands to initialise the YANG model with one pg-stream record.
      We will be using CDS to perform the day-1 configuration and day-2 configuration changes.

      .. code-block:: sh

         netopeer2-cli
         > connect --host localhost --login root
         # password is root
         > get --filter-xpath /sample-plugin:*
         # shows existing pg-stream records (empty)
         > edit-config --target running --config=/etc/sysrepo/yang/pg-data.xml
         # initialises YANG model with one pg-stream record
         > get --filter-xpath /sample-plugin:*
         # shows initialised pg-stream

      If the output of the last command looks like this, everything went well:

      .. code-block:: sh

         DATA
         <sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
            <pg-streams>
               <pg-stream>
                  <id>1</id>
                  <is-enabled>true</is-enabled>
               </pg-stream>
            </pg-streams>
         </sample-plugin>

   .. tab:: sysrepo/sysrepo-netopeer2 (legacy)

      Download and run the docker container with ``docker run -d --name netopeer2 -p 830:830 -p 6513:6513 sysrepo/sysrepo-netopeer2:legacy``

      Enter the container with ``docker exec -it netopeer2 bin/bash``

      Browse to the target location where all YANG modules exist: ``cd /opt/dev/sysrepo/yang``

      Create a simple mock YANG model for a packet generator (pg.yang).

      .. code-block:: sh
         :caption: **pg.yang**

         module sample-plugin {

            yang-version 1;
            namespace "urn:opendaylight:params:xml:ns:yang:sample-plugin";
            prefix "sample-plugin";

            description
               "This YANG module defines the generic configuration and
               operational data for sample-plugin in VPP";

            revision "2016-09-18" {
               description "Initial revision of sample-plugin model";
            }

            container sample-plugin {

               uses sample-plugin-params;
               description "Configuration data of sample-plugin in Honeycomb";

               // READ
               // curl -u admin:admin http://localhost:8181/restconf/config/sample-plugin:sample-plugin

               // WRITE
               // curl http://localhost:8181/restconf/operational/sample-plugin:sample-plugin

            }

            grouping sample-plugin-params {
               container pg-streams {
                  list pg-stream {

                     key id;
                     leaf id {
                        type string;
                     }

                     leaf is-enabled {
                        type boolean;
                     }
                  }
               }
            }
         }

      Create the following sample XML data definition for the above model (pg-data.xml).
      Later on this will initialise one single PG (packet generator) stream.

      .. code-block:: sh
         :caption: **pg-data.xml**

         <sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
            <pg-streams>
               <pg-stream>
                  <id>1</id>
                  <is-enabled>true</is-enabled>
               </pg-stream>
            </pg-streams>
         </sample-plugin>

      Execute the following command within the netopeer docker container to install the pg.yang model:

      .. code-block:: sh

         sysrepoctl -i -g pg.yang

      You can check all installed modules with ``sysrepoctl -l``. The `sample-plugin` module should appear with the ``I`` flag.

      In the legacy version of `sysrepo/sysrepo-netopeer2` a subscriber for a module is required; otherwise the module is
      not running and configuration changes are not accepted, see https://github.com/sysrepo/sysrepo/issues/1395. There is
      a predefined application mock-up which can be used for that. Its usage is shown in
      `this recording <https://asciinema.org/a/160247>`_. You need to run the following
      commands to start the example application subscribing to the sample-plugin YANG module.

      .. code-block:: sh

         cd /opt/dev/sysrepo/build/examples
         ./application_example sample-plugin

      The following output should appear:

      .. code-block:: sh

         ========== STARTUP CONFIG sample-plugin APPLIED AS RUNNING ==========

         ========== CONFIG HAS CHANGED, CURRENT RUNNING CONFIG sample-plugin: ==========

         /sample-plugin:sample-plugin (container)
         /sample-plugin:sample-plugin/pg-streams (container)
         /sample-plugin:sample-plugin/pg-streams/pg-stream[id='1'] (list instance)
         /sample-plugin:sample-plugin/pg-streams/pg-stream[id='1']/id = 1
         /sample-plugin:sample-plugin/pg-streams/pg-stream[id='1']/is-enabled = true

      The terminal session needs to be kept open after the application has started.

      Open a new terminal and enter the container with ``docker exec -it netopeer2 bin/bash``.
      Execute the following commands in the container to initialise the YANG model with one pg-stream record.
      We will be using CDS to perform the day-1 configuration and day-2 configuration changes.

      .. code-block:: sh

         netopeer2-cli
         > connect --host localhost --login netconf
         # password is netconf
         > get --filter-xpath /sample-plugin:*
         # shows existing pg-stream records (empty)
         > edit-config --target running --config=/opt/dev/sysrepo/yang/pg-data.xml
         # initialises YANG model with one pg-stream record
         > get --filter-xpath /sample-plugin:*
         # shows initialised pg-stream

      If the output of the last command looks like this, everything went well:

      .. code-block:: sh

         DATA
         <sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
            <pg-streams>
               <pg-stream>
                  <id>1</id>
                  <is-enabled>true</is-enabled>
               </pg-stream>
            </pg-streams>
         </sample-plugin>

   .. tab:: PNF simulator integration project

      .. warning::
         This method of setting up the PNF simulator has not been tested by the author of this guide.

      You can refer to the `PnP PNF Simulator wiki page <https://wiki.onap.org/display/DW/PnP+PNF+Simulator>`_
      to clone the GIT repo and start the required docker containers. We are interested in the
      `sysrepo/sysrepo-netopeer2` docker container to load a simple YANG model similar to the vFW packet generator.

      Start the PNF simulator docker containers. You can consider changing the netopeer image version to
      `sysrepo/sysrepo-netopeer2:iop` in the docker-compose.yml file if you find any issues with the default image.

      .. code-block:: sh

         cd $HOME

         git clone https://github.com/onap/integration.git

         # Start PNF simulator
         cd ~/integration/test/mocks/pnfsimulator
         ./simulator.sh start

      Verify that the netopeer docker container is up and running. It will be mapped to host port 830.

      .. code-block:: sh

         docker ps -a | grep netopeer
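
      If you decide to switch to the ``:iop`` tag mentioned above, the image swap in docker-compose.yml can be scripted. This is only a hedged sketch: it operates on a stand-in file because the exact layout of the real compose file is an assumption here; review the change before starting the simulator.

      ```shell
      # Stand-in compose fragment; the real file lives in ~/integration/test/mocks/pnfsimulator.
      printf '    image: sysrepo/sysrepo-netopeer2:latest\n' > /tmp/docker-compose-demo.yml
      # Swap whatever tag is set for the :iop tag suggested in this guide.
      sed -i 's|sysrepo/sysrepo-netopeer2:[A-Za-z0-9._-]*|sysrepo/sysrepo-netopeer2:iop|' /tmp/docker-compose-demo.yml
      grep 'image:' /tmp/docker-compose-demo.yml
      ```

      The same ``sed`` expression, pointed at the real docker-compose.yml, performs the swap in place.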


Config-assign and config-deploy in CDS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the following steps the CBA is published in CDS, config-assignment is done and finally the configuration is deployed to the
Netconf server through CDS. We will use this CBA: :download:`zip <pnf-simulator-demo-cba.zip>`.
If you want to use scripts instead of Postman, the CBA also contains all necessary scripts.

.. tabs::

   .. tab:: Scripts

      **There will be different scripts depending on your CDS installation. For running it in an IDE always use the scripts with**
      **the -ide.sh suffix. For running in Kubernetes use the scripts with the -k8s.sh suffix. For the IDE scripts the host will be localhost**
      **and the port will be 8081. For K8s the host IP address gets automatically detected, the port is 8000.**
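
      The suffix convention above can be captured once in a variable so that later commands only need the base script name. A small sketch (the ``ENV`` variable is a name chosen here for illustration; the script names follow this guide):

      ```shell
      # Choose your environment once: "ide" or "k8s" (suffix convention of this guide).
      ENV=ide
      # Build the concrete script name from the base name plus suffix.
      script="bootstrap-cds-${ENV}.sh"
      echo "${script}"   # -> bootstrap-cds-ide.sh
      # You would then run it as: bash -x "./${script}"
      ```

      Switching the whole walkthrough to Kubernetes then only requires setting ``ENV=k8s``.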

      **Set up CDS:**

      Unzip the downloaded CBA and go to the ``/Scripts/`` directory.

      The script below will call the bootstrap API of CDS, which loads the CDS default model artifacts into the CDS database.
      You should get HTTP status 200 for the command.

      .. code-block:: sh

         bash -x ./bootstrap-cds-ide.sh
         # bash -x ./bootstrap-cds-k8s.sh

      Call ``bash -x ./get-cds-blueprint-models-ide.sh`` / ``bash -x ./get-cds-blueprint-models-k8s.sh`` to get all blueprint models in the CDS database.
      You will see a default model "artifactName": "vFW-CDS" which was loaded by calling bootstrap.

      Push the PNF CDS blueprint model data dictionary to CDS by calling ``bash -x ./dd-microk8s-ide.sh ./dd.json`` /
      ``bash -x ./dd-microk8s-k8s.sh ./dd.json``.
      This will call the data dictionary endpoint of CDS.

      Check the CDS database for the PNF data dictionaries by entering the DB. You should see 6 rows as shown below.

      For IDE:

      .. code-block:: sh

         sudo docker exec -it mariadb_container_id mysql -uroot -psdnctl
         > USE sdnctl;
         > select name, data_type from RESOURCE_DICTIONARY where updated_by='Aarna service <vmuthukrishnan@aarnanetworks.com>';

         +---------------------+-----------+
         | name                | data_type |
         +---------------------+-----------+
         | netconf-password    | string    |
         | netconf-server-port | string    |
         | netconf-username    | string    |
         | pnf-id              | string    |
         | pnf-ipv4-address    | string    |
         | stream-count        | integer   |
         +---------------------+-----------+

      For K8s:

      .. code-block:: sh

         ./connect-cds-mariadb-k8s.sh

         select name, data_type from RESOURCE_DICTIONARY where updated_by='Aarna service <vmuthukrishnan@aarnanetworks.com>';

         +---------------------+-----------+
         | name                | data_type |
         +---------------------+-----------+
         | netconf-password    | string    |
         | netconf-server-port | string    |
         | netconf-username    | string    |
         | pnf-id              | string    |
         | pnf-ipv4-address    | string    |
         | stream-count        | integer   |
         +---------------------+-----------+

         quit

         exit

      **Enrichment:**

      Move to the main folder of the CBA with ``cd ..`` and archive all folders with ``zip -r pnf-demo.zip *``.

      .. warning::
         The provided CBA is already enriched; the following steps will nevertheless enrich the CBA again to show the full workflow.
         For the Frankfurt release this causes an issue when the configuration is deployed later on. It happens because some parameters
         get deleted when enrichment is done a second time. Skip the next steps until Deploy/Save Blueprint if you use the
         Frankfurt release and use the CBA as it is. In the future this step should be fixed and executed based on an unenriched CBA.

      Enrich the blueprint by calling the following script. Take care to provide the zip file you created earlier.

      .. code-block:: sh

         cd Scripts
         bash -x ./enrich-and-download-cds-blueprint-ide.sh ../pnf-demo.zip
         # bash -x ./enrich-and-download-cds-blueprint-k8s.sh ../pnf-demo.zip

      Go to the enriched CBA folder with ``cd /tmp/CBA/`` and unzip it with ``unzip pnf-demo.zip``.

      **Deploy/Save the Blueprint into CDS database**

      Go to the Scripts folder with ``cd Scripts``.

      Run the following script to save/deploy the blueprint into the CDS database.

      .. code-block:: sh

         bash -x ./save-enriched-blueprint-ide.sh ../pnf-demo.zip
         # bash -x ./save-enriched-blueprint-k8s.sh ../pnf-demo.zip

      Now you should see the new model "artifactName": "pnf_netconf" by calling ``bash -x ./get-cds-blueprint-models-ide.sh`` /
      ``bash -x ./get-cds-blueprint-models-k8s.sh``.

      **Config-Assign:**

      The assumption is that we are using the same host to run the PNF NETCONF simulator as well as CDS. You will need the
      IP address of the Netconf server container, which can be found with
      ``docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_id_or_name``. In the
      following examples we will use 172.17.0.2.

      Day-1 configuration:

      .. code-block:: sh

         bash -x ./create-config-assing-data-ide.sh day-1 172.17.0.2 5
         # bash -x ./create-config-assing-data-k8s.sh day-1 172.17.0.2 5

      You can verify the day-1 NETCONF RPC payload by looking into the CDS DB. You should see the NETCONF RPC with 5
      streams (fw_udp_1 to fw_udp_5). Connect to the DB as mentioned above and run the following statement.

      .. code-block:: sh

         MariaDB [sdnctl]> select * from TEMPLATE_RESOLUTION where resolution_key='day-1' AND artifact_name='netconfrpc';

         <rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
           <edit-config>
             <target>
               <running/>
             </target>
             <config>
               <sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
                 <pg-streams>
                   <pg-stream>
                     <id>fw_udp_1</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_2</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_3</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_4</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_5</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                 </pg-streams>
               </sample-plugin>
             </config>
           </edit-config>
         </rpc>
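
      The pg-stream entries in the payload above follow a simple naming pattern. Purely as an illustration (this is not part of the CBA; ``gen_streams`` is a function name invented here), such a fragment could be generated for any stream count like this:

      ```shell
      # Emit <pg-stream> entries fw_udp_1 .. fw_udp_N, matching the pattern in the RPC above.
      gen_streams() {
        count=$1
        for i in $(seq 1 "$count"); do
          printf '<pg-stream><id>fw_udp_%s</id><is-enabled>true</is-enabled></pg-stream>\n' "$i"
        done
      }

      gen_streams 5   # prints five entries, fw_udp_1 through fw_udp_5
      ```

      This is exactly what the stream-count argument (5 for day-1, 10 for day-2) controls in the resolved template.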

      Create the PNF configuration for resolution-key = day-2 (stream-count = 10).
      You can verify the JSON payload of the cURL command in the file /tmp/day-n-pnf-config.json.

      .. code-block:: sh

         bash -x ./create-config-assing-data-ide.sh day-2 172.17.0.2 10
         # bash -x ./create-config-assing-data-k8s.sh day-2 172.17.0.2 10

      You can verify the day-2 NETCONF RPC payload by looking into the CDS DB. You should see the NETCONF RPC with 10
      streams (fw_udp_1 to fw_udp_10). Connect to the DB as mentioned above and run the following statement.

      .. code-block:: sh

         MariaDB [sdnctl]> select * from TEMPLATE_RESOLUTION where resolution_key='day-2' AND artifact_name='netconfrpc';

         <rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
           <edit-config>
             <target>
               <running/>
             </target>
             <config>
               <sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
                 <pg-streams>
                   <pg-stream>
                     <id>fw_udp_1</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_2</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_3</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_4</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_5</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_6</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_7</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_8</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_9</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                   <pg-stream>
                     <id>fw_udp_10</id>
                     <is-enabled>true</is-enabled>
                   </pg-stream>
                 </pg-streams>
               </sample-plugin>
             </config>
           </edit-config>
         </rpc>

      .. note::
         Up to this point CDS did not interact with the PNF simulator or device. We just created the day-1 and day-2
         configurations and stored them in the CDS database.

      **Config-Deploy:**

      Now we will make the CDS REST API calls to push the day-1 and day-2 configuration changes to the PNF simulator.

      If you run CDS in Kubernetes, open a new terminal and keep it running with ``bash -x ./tail-cds-bp-log.sh``;
      we can use it to review the config-deploy actions. If you run CDS in an IDE you can have a look at the IDE terminal.

      The following command will deploy the day-1 configuration.
      The syntax is ``bash -x ./process-config-deploy-ide.sh RESOLUTION_KEY PNF_IP_ADDRESS``

      .. code-block:: sh

         bash -x ./process-config-deploy-ide.sh day-1 172.17.0.2
         # bash -x ./process-config-deploy-k8s.sh day-1 172.17.0.2

      Go back to the PNF netopeer CLI console and verify that you can see the 5 streams fw_udp_1 to fw_udp_5 enabled.

      .. code-block:: sh

         > get --filter-xpath /sample-plugin:*
         DATA
         <sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
           <pg-streams>
             <pg-stream>
               <id>1</id>
               <is-enabled>true</is-enabled>
             </pg-stream>
             <pg-stream>
               <id>fw_udp_1</id>
               <is-enabled>true</is-enabled>
             </pg-stream>
             <pg-stream>
               <id>fw_udp_2</id>
               <is-enabled>true</is-enabled>
             </pg-stream>
             <pg-stream>
               <id>fw_udp_3</id>
               <is-enabled>true</is-enabled>
             </pg-stream>
             <pg-stream>
               <id>fw_udp_4</id>
               <is-enabled>true</is-enabled>
             </pg-stream>
             <pg-stream>
               <id>fw_udp_5</id>
               <is-enabled>true</is-enabled>
             </pg-stream>
           </pg-streams>
         </sample-plugin>
         >
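
      To check the result non-interactively, you could save the get reply to a file and count the pg-stream entries. A hedged sketch, with generated sample data standing in for a saved reply:

      ```shell
      # Build a sample reply file standing in for saved netopeer2-cli get output:
      # the initial stream plus fw_udp_1..fw_udp_5, as expected after the day-1 deploy.
      printf '<pg-stream><id>1</id><is-enabled>true</is-enabled></pg-stream>\n' > /tmp/reply.xml
      for i in 1 2 3 4 5; do
        printf '<pg-stream><id>fw_udp_%s</id><is-enabled>true</is-enabled></pg-stream>\n' "$i" >> /tmp/reply.xml
      done
      grep -c '<pg-stream>' /tmp/reply.xml   # -> 6
      ```

      Against a real saved reply, the same ``grep -c`` gives a quick pass/fail check on the expected stream count.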

      The same can be done for the day-2 config (follow the same steps, just with the day-2 configuration).

      .. note::
         Through the deployment we did not deploy the PNF itself; we just modified its configuration. The PNF could also be
         installed by CDS, but this is not covered in this guide.

   .. tab:: Postman