**************************************
vFWCL on El Alto ONAP offline platform
**************************************

|image0|

This document collects the notes we have from running the vFirewall closed loop (vFWCL)
demo on the offline El Alto platform installed by the ONAP offline installer tool.

Overall it is slightly more complicated than in Dublin, mainly due to the POLICY-2191 issue.

Some of the most relevant materials are available at the following links:

* `oom_quickstart_guide.html <https://docs.onap.org/en/elalto/submodules/oom.git/docs/oom_quickstart_guide.html>`_
* `docs_vfw.html <https://docs.onap.org/en/elalto/submodules/integration.git/docs/docs_vfw.html>`_


.. contents:: Table of Contents
   :depth: 2


Step 1. Preconditions - before ONAP deployment
==============================================

An understanding of the underlying OpenStack deployment is required from anyone applying these instructions.

In addition, the installation-specific location of the helm charts on the infra node must be known.
In this document it is referred to as <helm_charts_dir>.

The snippets below describe the areas that need to be configured for a successful vFWCL demo.

Pay attention to them and configure them accordingly, ideally before deployment.

.. note:: We are using the standard OOM kubernetes/onap/resources/overrides/onap-all.yaml override to enable all components, although a better tailored override, onap-vfw.yaml, appears to exist in the same folder. The following description focuses only on the additional override values specific to our lab environment.

**1) Override to update the APPC, Robot and SO parts**::

  appc:
    enabled: true
    config:
      openStackType: "OpenStackProvider"
      openStackName: "OpenStack"
      openStackKeyStoneUrl: "http://10.20.30.40:5000/v2.0"
      openStackServiceTenantName: "service"
      openStackDomain: "default"
      openStackUserName: "onap-tieto"
      openStackEncryptedPassword: "31ECA9F2BA98EF34C9EC3412D071E31185F6D9522808867894FF566E6118983AD5E6F794B8034558"
  robot:
    enabled: true
    appcUsername: "appc@appc.onap.org"
    appcPassword: "demo123456!"
    openStackKeyStoneUrl: "http://10.20.30.40:5000"
    openStackPublicNetId: "9403ceea-0738-4908-a826-316c8541e4bb"
    openStackTenantId: "b1ce7742d956463999923ceaed71786e"
    openStackUserName: "onap-tieto"
    ubuntu14Image: "trusty"
    openStackPrivateNetId: "3c7aa2bd-ba14-40ce-8070-6a0d6a617175"
    openStackPrivateSubnetId: "2bcb9938-9c94-4049-b580-550a44dc63b3"
    openStackPrivateNetCidr: "10.0.0.0/16"
    openStackSecurityGroup: "onap_sg"
    openStackOamNetworkCidrPrefix: "10.0"
    openStackPublicNetworkName: "rc3-offline-network"
    vnfPrivateKey: '/var/opt/ONAP/onap-dev.pem'
    vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPwF2bYm2QuqZpjuAcZDJTcFdUkKv4Hbd/3qqbxf6g5ZgfQarCi+mYnKe9G9Px3CgFLPdgkBBnMSYaAzMjdIYOEdPKFTMQ9lIF0+i5KsrXvszWraGKwHjAflECfpTAWkPq2UJUvwkV/g7NS5lJN3fKa9LaqlXdtdQyeSBZAUJ6QeCE5vFUplk3X6QFbMXOHbZh2ziqu8mMtP+cWjHNBB47zHQ3RmNl81Rjv+QemD5zpdbK/h6AahDncOY3cfN88/HPWrENiSSxLC020sgZNYgERqfw+1YhHrclhf3jrSwCpZikjl7rqKroua2LBI/yeWEta3amTVvUnR2Y7gM8kHyh Generated-by-Nova"
    demoArtifactsVersion: "1.4.0"
    demoArtifactsRepoUrl: "https://nexus.onap.org/content/repositories/releases"
    scriptVersion: "1.4.0"
    config:
      # openStackEncryptedPasswordHere should match the encrypted string used in SO and APPC and overridden per environment
      openStackEncryptedPasswordHere: "f7920677e15e2678b0f33736189e8965"
  so:
    enabled: true
    config:
      openStackUserName: "onap-tieto"
      openStackRegion: "RegionOne"
      openStackKeyStoneUrl: "http://10.20.30.40:5000"
      openStackServiceTenantName: "services"
      openStackEncryptedPasswordHere: "31ECA9F2BA98EF34C9EC3412D071E31185F6D9522808867894FF566E6118983AD5E6F794B8034558"
    so-catalog-db-adapter:
      config:
        openStackUserName: "onap-tieto"
        openStackKeyStoneUrl: "http://10.20.30.40:5000/v2.0"
        openStackEncryptedPasswordHere: "31ECA9F2BA98EF34C9EC3412D071E31185F6D9522808867894FF566E6118983AD5E6F794B8034558"
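
Before deploying, it can help to sanity-check that the override file actually carries the values the demo depends on. A minimal sketch, assuming a miniature example file (the file name, path and the key list below are illustrative, not the full override):

```shell
# A hypothetical miniature override (the real file carries the full content
# shown above); /tmp/onap-lab.yaml is an example path.
cat > /tmp/onap-lab.yaml <<'EOF'
robot:
  enabled: true
  openStackPublicNetId: "9403ceea-0738-4908-a826-316c8541e4bb"
  vnfPrivateKey: '/var/opt/ONAP/onap-dev.pem'
EOF

# Fail loudly if any key the demo depends on is absent.
missing=0
for key in openStackPublicNetId vnfPrivateKey; do
  grep -q "${key}:" /tmp/onap-lab.yaml || { echo "missing: ${key}"; missing=1; }
done
echo "missing=${missing}"   # prints missing=0 for the fragment above
```

The same loop, pointed at the real override with the full key list, catches typos before a lengthy deployment rather than after.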


Step 2. Preconditions - after ONAP deployment
=============================================

Run the healthchecks after a successful deployment; all of them must pass.

The relevant robot scripts are under <helm_charts_dir>/oom/kubernetes/robot.

::

  [root@tomas-infra robot]# ./ete-k8s.sh onap health

  61 critical tests, 61 passed, 0 failed
  61 tests total, 61 passed, 0 failed

A very useful page describes the commands for `manual checking of healthchecks <https://wiki.onap.org/display/DW/Robot+Healthcheck+Tests+on+ONAP+Components#RobotHealthcheckTestsonONAPComponents-ApplicationController(APPC)Healthcheck>`_.
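
When some checks fail, it is handy to pull just the failing tests out of the robot summary instead of scrolling through all of it. A sketch over a canned excerpt (the log format and file path below are illustrative):

```shell
# Canned excerpt in the robot console format (illustrative); real runs keep
# their logs under the robot pod's shared log directory.
cat > /tmp/health_summary.txt <<'EOF'
Basic SDC Health Check    | PASS |
Basic APPC Health Check   | FAIL |
Basic SO Health Check     | PASS |
EOF

# First pipe-separated field of every FAIL line, trailing spaces stripped.
failed=$(awk -F'|' '/FAIL/ {gsub(/ +$/, "", $1); print $1}' /tmp/health_summary.txt)
echo "failed: ${failed}"   # prints failed: Basic APPC Health Check
```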

Unfortunately some patching is still required to get vFWCL working on the ONAP platform.
We have therefore provided a set of files in the ./patches folder within this repo.

After the installation is finished and all healthchecks are green, a few things still need to be patched.
These are described in the following part.


Step 3. Patching
================

In order to get vFWCL working in our lab on the offline platform, we need to ensure three things (in addition to passing healthchecks) before proceeding
with the official instructions.

**robot**

a) the private key for robot has to be configured properly and contain a key present on the robot pod

::

  # open the configmap for robot and check the GLOBAL_INJECTED_PRIVATE_KEY param
  kubectl edit configmap onap-robot-robot-eteshare-configmap
  # it should contain something like
  # GLOBAL_INJECTED_PRIVATE_KEY = '/var/opt/ONAP/onap-dev.pem'

We need to supply a private key there, and that key must match the public key distributed to the vFWCL VMs, which
comes from the *vnfPubKey* parameter in robot.
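
One way to confirm that the private key on the robot pod really matches *vnfPubKey* is to derive the public part with ``ssh-keygen -y`` and compare. A local sketch with a throwaway key (the paths are examples; run the same comparison against the real ``onap-dev.pem`` and the value from the override):

```shell
# Throwaway keypair standing in for onap-dev.pem (example paths only).
rm -f /tmp/onap-dev-example /tmp/onap-dev-example.pub
ssh-keygen -q -t rsa -b 2048 -N '' -f /tmp/onap-dev-example </dev/null

# Derive the public part from the private key and compare it with the
# distributed public key, ignoring the trailing comment field.
derived=$(ssh-keygen -y -f /tmp/onap-dev-example | awk '{print $1, $2}')
stored=$(awk '{print $1, $2}' /tmp/onap-dev-example.pub)

if [ "$derived" = "$stored" ]; then
  echo "keypair matches"
else
  echo "MISMATCH - fix vnfPubKey or the pem on the robot pod"
fi
```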

b) in our lab there is an issue with cloud-init, and the vFW VMs get their default route set quite randomly,
   which is a problem because our lab uses the following dedicated network for the vFW VMs' public connectivity.

.. note:: the same network has to be reachable from the k8s host where the robot container runs

+--------------------------------------+---------------------+----------------------------------+--------------------------------------------------+
| id                                   | name                | tenant_id                        | subnets                                          |
+--------------------------------------+---------------------+----------------------------------+--------------------------------------------------+
| 9403ceea-0738-4908-a826-316c8541e4bb | rc3-offline-network | b1ce7742d956463999923ceaed71786e | 1782c82c-cd92-4fb6-a292-5e396afe63ec 10.8.8.0/24 |
+--------------------------------------+---------------------+----------------------------------+--------------------------------------------------+

For this reason we patch *base_vfw.yaml* for all vFW VMs with the following code.

::

  # nasty hack to bypass cloud-init issues
  sed -i '1i nameserver 8.8.8.8' /etc/resolv.conf
  iface_correct=$(ip a | grep <network_prefix> | awk '{print $7}')
  route add default gw <network_prefix>.1 ${iface_correct}

The network prefix variable is in our case "10.8.8".
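
To illustrate what the hack does: for a statically configured address, the device name is the 7th whitespace-separated field of the matching ``ip a`` line, which is what the ``awk`` extracts. A self-contained sketch over a canned line (the address and device name are illustrative; on lines that also carry a ``dynamic`` flag the field position shifts, so verify against your images):

```shell
# Canned address line as `ip a` prints it for a statically configured
# interface on the 10.8.8.0/24 network (device name is illustrative).
line='    inet 10.8.8.12/24 brd 10.8.8.255 scope global ens3'

# Same extraction as in the patched base_vfw.yaml, with <network_prefix>=10.8.8
iface_correct=$(echo "$line" | grep '10.8.8' | awk '{print $7}')
echo "default route device: ${iface_correct}"   # prints default route device: ens3
```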

Let's treat this as an example of how these two problems can be fixed. Feel free to adjust the private/public key, and skip the cloud-init fix if you don't have that problem.
Our helper script with the above settings fixes both issues (a) and (b) for us.

::

  # copy the offline-installer repo onto the infra node and run the following script from the patches folder
  ./update_robot.sh <namespace> <network_prefix>


**drools**

c) the usecases controller is not working - POLICY-2191

A couple of pom files are required in order to get the usecases controller in the drools pod instantiated properly.
One can fix it by running the following script.

::

  # copy the offline-installer repo onto the infra node and run the following script from the patches folder
  ./update_policy.sh <namespace>

.. note:: This script also restarts policy. There is a small chance that drools will be marked as sick during the interval in which it is restarted and redeployed. If that happens, just try again.

At this moment one can check that the usecases controller is built properly via:

::

  # on infra node
  kubectl exec -it onap-policy-drools-0 bash
  bash-4.4$ telemetry
  Version: 1.0.0
  https://localhost:9696/policy/pdp/engine> get controllers
  HTTP/1.1 200 OK
  Content-Length: 24
  Content-Type: application/json
  Date: Mon, 04 Nov 2019 06:31:09 GMT
  Server: Jetty(9.4.20.v20190813)

  [
      "amsterdam",
      "usecases"
  ]

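
The same check can be scripted rather than done through the interactive telemetry shell, for instance by grepping the controllers listing. A sketch over a canned response mirroring the output above (against a live pod one would capture the listing from the telemetry session instead):

```shell
# Canned response in the shape the `get controllers` call above returns.
controllers='[
    "amsterdam",
    "usecases"
]'

if echo "$controllers" | grep -q '"usecases"'; then
  echo "usecases controller present"
else
  echo "usecases controller MISSING - re-run update_policy.sh"
fi
```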

Now we can proceed with the same steps as on the online platform.


Step 4. robot init - demo services distribution
===============================================

Run the following robot script to execute both init_customer and distribute:

::

  # demo-k8s.sh <namespace> init

  [root@tomas-infra robot]# ./demo-k8s.sh onap init



Step 5. robot instantiateVFW
============================

The following tag is used for the whole vFWCL testcase. It will deploy a single heat stack with 3 VMs and set the policies and the APPC mount point for vFWCL to happen.

::

  # demo-k8s.sh <namespace> instantiateVFW

  [root@tomas-infra robot]# ./demo-k8s.sh onap instantiateVFW


Step 6. verify vFW
==================

Verify vFWCL. This step just verifies the closed-loop functionality, which can also be verified by checking the DarkStat GUI on the vSINK VM at <sink_ip:667>.

::

  # demo-k8s.sh <namespace> vfwclosedloop <pgn-ip-address>
  # e.g. where 10.8.8.5 is the IP from the public network dedicated to the vPKG VM
  [root@tomas-infra robot]# ./demo-k8s.sh onap vfwclosedloop 10.8.8.5

.. |image0| image:: images/vFWCL.jpg
   :width: 387px
   :height: 393px