.. This work is licensed under a Creative Commons Attribution 4.0
   International License. http://creativecommons.org/licenses/by/4.0
   Copyright 2018 Huawei Technologies Co., Ltd. All rights reserved.

.. _docs_vcpe:

vCPE Use Case
----------------------------

Description
~~~~~~~~~~~
The vCPE use case is based on the Network Enhanced Residential Gateway architecture specified in Technical Report 317 (TR-317), which defines how service providers deploy residential broadband services such as High Speed Internet Access. The implementation consists of a set of common infrastructure services and a per-customer service: the infrastructure services are deployed first and shared by all customers, and the customer service is then instantiated on top of them. The use case demonstrates ONAP's capability to design, deploy, configure and control sophisticated services.

More details on the vCPE use case can be found on the wiki page: https://wiki.onap.org/pages/viewpage.action?pageId=3246168

Source Code
~~~~~~~~~~~
vcpe test scripts: https://gerrit.onap.org/r/gitweb?p=integration.git;a=tree;f=test/vcpe;h=76572f4912e7b375e1e4d0177a0e50a61691dc4a;hb=refs/heads/casablanca
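
The scripts can also be checked out locally with a standard clone of the integration repository on the casablanca branch (the branch referenced by the link above):

::

    git clone -b casablanca https://gerrit.onap.org/r/integration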

How to Use
~~~~~~~~~~
Most of the use case has been automated by the vcpe scripts. For details on how to run the scripts, please refer to the use case tutorial at https://wiki.onap.org/display/DW/vCPE+Use+Case+Tutorial%3A+Design+and+Deploy+based+on+ONAP.

The main steps to run the use case in the Integration lab environment, where the vCPE scripts are pre-installed on the Rancher node under /root/integration/test/vcpe, are:

1. Run the Robot script from the Rancher node to onboard the VNFs and to create and distribute the models for the four vCPE infrastructure services, i.e. infrastructure, brg, bng and gmux:

::

    demo-k8s.sh onap init

2. Run Robot to create and distribute the model for the vCPE customer service. This step assumes that step 1 successfully distributed the four models:

::

    ete-k8s.sh onap distributevCPEResCust

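Distribution can be spot-checked in the SO catalog database. The query below is a sketch only: it reuses the MariaDB pod name and credentials shown in the catalog update step later in this guide, and assumes the default layout of the catalogdb.service table:

::

    kubectl exec dev-mariadb-galera-mariadb-galera-0 -- mysql -uroot -psecretpassword -e \
        "SELECT MODEL_NAME, MODEL_UUID FROM catalogdb.service ORDER BY CREATION_TIMESTAMP DESC LIMIT 10"
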
3. Add the customer SDN-ETHERNET-INTERNET (see the use case tutorial wiki page for details).

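The recommended procedure is described on the tutorial wiki page. As an alternative sketch, the customer can also be created directly through the A&AI business API, in the same style as the RegionOne update in the next step; the subscriber-name and subscriber-type values below are illustrative assumptions:

::

    PUT https://{{aai}}:{{port}}/aai/v14/business/customers/customer/SDN-ETHERNET-INTERNET
    {
        "global-customer-id": "SDN-ETHERNET-INTERNET",
        "subscriber-name": "SDN-ETHERNET-INTERNET",
        "subscriber-type": "INFRA"
    }
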
4. Add the identity-url to the RegionOne data in A&AI. First use POSTMAN to GET the cloud-region RegionOne data, then add the identity-url and PUT the data back to A&AI:

::

    GET https://{{aai}}:{{port}}/aai/v14/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne

::

    PUT https://{{aai}}:{{port}}/aai/v14/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne
    {
        "cloud-owner": "CloudOwner",
        "cloud-region-id": "RegionOne",
        "cloud-type": "SharedNode",
        "owner-defined-type": "OwnerType",
        "cloud-region-version": "v1",
        "identity-url": "http://10.12.25.2:5000/v2.0",
        "cloud-zone": "CloudZone",
        "resource-version": "1559336510793",
        "relationship-list": {
            ... ...

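If POSTMAN is not available, the same calls can be made with curl. The snippet below is a sketch for the GET (the PUT is analogous, with -X PUT, a Content-Type header and -d @<file> carrying the modified JSON); it assumes the default A&AI basic-auth credentials (AAI/AAI) and the mandatory X-FromAppId/X-TransactionId headers:

::

    curl -sk -u AAI:AAI \
         -H "X-FromAppId: vcpe-docs" -H "X-TransactionId: get-region-1" \
         -H "Accept: application/json" \
         https://{{aai}}:{{port}}/aai/v14/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne
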
5. Add a route on the SDNC cluster VM node, i.e. the cluster VM node on which the pod sdnc-sdnc-0 is running. This allows ONAP SDNC to configure the BRG later on:

::

    ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3

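The interface name (ens3) and the next-hop address above are specific to the Integration lab; adjust them to your deployment. To confirm the route is in place:

::

    ip route show 10.3.0.0/24
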
6. Initialize the SDNC IP pool by running the following command from the Rancher node:

::

    kubectl -n onap exec -it dev-sdnc-sdnc-0 -- /opt/sdnc/bin/addIpAddresses.sh VGW 10.5.0 22 250

7. Install Python and the other Python libraries required by the vcpe scripts:

::

    integration/test/vcpe/bin/setup.sh

8. Change the OpenStack environment parameters and one customer-service-related parameter in vcpecommon.py:

::

    cloud = {
        '--os-auth-url': 'http://10.12.25.2:5000',
        '--os-username': 'xxxxxxxxxx',
        '--os-user-domain-id': 'default',
        '--os-project-domain-id': 'default',
        '--os-tenant-id': 'xxxxxxxxxxxxxxxx' if oom_mode else '1e097c6713e74fd7ac8e4295e605ee1e',
        '--os-region-name': 'RegionOne',
        '--os-password': 'xxxxxxxxxxx',
        '--os-project-domain-name': 'xxxxxxxxx' if oom_mode else 'Integration-SB-07',
        '--os-identity-api-version': '3'
    }

    common_preload_config = {
        'oam_onap_net': 'xxxxxxxx' if oom_mode else 'oam_onap_lAky',
        'oam_onap_subnet': 'xxxxxxxxxx' if oom_mode else 'oam_onap_lAky',
        'public_net': 'xxxxxxxxx',
        'public_net_id': 'xxxxxxxxxxxxx'
    }

::

    # CHANGEME: vgw_VfModuleModelInvariantUuid is in the rescust service csar;
    # open the service template with a filename like service-VcpesvcRescust1118-template.yml
    # and look for vfModuleModelInvariantUUID under the vgw module metadata in groups.
    self.vgw_VfModuleModelInvariantUuid = 'xxxxxxxxxxxxxxx'

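As a shortcut for finding the value, assuming the customer service csar has been downloaded and unzipped locally (the archive name and path below are examples, not fixed names), the UUID can be located with grep:

::

    unzip -o vcperescust.csar -d /tmp/rescust
    grep -C3 'vfModuleModelInvariantUUID' /tmp/rescust/Definitions/service-VcpesvcRescust*-template.yml
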
9. Initialize vcpe:

::

    vcpe.py init

10. Run the following command from the Rancher node to insert the vcpe customer service workflow entry in the SO catalogdb. The exact command, with the correct SERVICE_MODEL_UUID, is printed in the output of the previous step:

::

    kubectl exec dev-mariadb-galera-mariadb-galera-0 -- mysql -uroot -psecretpassword -e "INSERT INTO catalogdb.service_recipe (ACTION, VERSION_STR, DESCRIPTION, ORCHESTRATION_URI, SERVICE_PARAM_XSD, RECIPE_TIMEOUT, SERVICE_TIMEOUT_INTERIM, CREATION_TIMESTAMP, SERVICE_MODEL_UUID) VALUES ('createInstance','1','vCPEResCust 2019-06-03 _04ba','/mso/async/services/CreateVcpeResCustService',NULL,181,NULL, NOW(),'6c4a469d-ca2c-4b02-8cf1-bd02e9c5a7ce')"

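To confirm that the recipe row exists, the same pod and credentials can be reused for a query:

::

    kubectl exec dev-mariadb-galera-mariadb-galera-0 -- mysql -uroot -psecretpassword -e \
        "SELECT ACTION, ORCHESTRATION_URI, SERVICE_MODEL_UUID FROM catalogdb.service_recipe WHERE ORCHESTRATION_URI LIKE '%VcpeResCust%'"
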
11. Instantiate the vCPE infrastructure services:

::

    vcpe.py infra

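Optionally, the resulting Heat stacks can be inspected with the OpenStack CLI, assuming it is installed and an openrc file matching the cloud dictionary from step 8 has been sourced (both are assumptions about your environment):

::

    openstack stack list     # the vCPE infrastructure stacks should appear here
    openstack server list    # along with the corresponding VNF VMs
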
12. Install the curl command inside the sdnc-sdnc-0 container, as sketched below.

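A minimal sketch, assuming the SDNC container image is Debian/Ubuntu based (if it is not, use the image's own package manager):

::

    kubectl -n onap exec -it dev-sdnc-sdnc-0 -- bash -c "apt-get update && apt-get install -y curl"
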
13. From the Rancher node, run the following command to check connectivity from SDNC to the BRG and the GMUX, and to verify the configuration of the BRG and the GMUX:

::

    healthcheck-k8s.py onap

14. Update libevel.so in the vGMUX VM. See the tutorial wiki page for details.

15. Run heatbridge. The usage is demo-k8s.sh <namespace> heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>, for example:

::

    ~/integration/test/vcpe# ~/oom/kubernetes/robot/demo-k8s.sh onap heatbridge vcpe_vfmodule_e2744f48729e4072b20b_201811262136 d8914ef3-3fdb-4401-adfe-823ee75dc604 vCPEvGMUX 10.0.101.21

16. Push the new policies. Download the policy files and follow the steps in JIRA INT-1089 (Create vCPE closed loop policy and push to policy engine):

::

    curl -k --silent --user 'healthcheck:zb!XztG34' -X POST "https://policy-api:6969/policy/api/v1/policytypes/onap.policies.controlloop.Operational/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" -d @operational.vcpe.json.txt
    curl --silent -k --user 'healthcheck:zb!XztG34' -X POST "https://policy-pap:6969/policy/pap/v1/pdps/policies" -H "Accept: application/json" -H "Content-Type: application/json" -d @operational.vcpe.pap.json.txt

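The policy-api and policy-pap hostnames resolve only inside the Kubernetes cluster, so run the two curl commands from a pod in the onap namespace, or forward the ports to the local node first. The sketch below assumes the service names of the default OOM deployment:

::

    kubectl -n onap port-forward svc/policy-api 6969:6969 &
    kubectl -n onap port-forward svc/policy-pap 6970:6969 &
    # then replace policy-api:6969 with localhost:6969 and policy-pap:6969 with localhost:6970 in the curl commands above
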
17. Start the closed loop by triggering a packet-drop VES event. You may need to run the command twice if the first run fails:

::

    vcpe.py loop

Test Status
~~~~~~~~~~~~~~~~~~~~~
The use case has been tested for the Dublin release; the test report can be found at https://wiki.onap.org/display/DW/vCPE+%28Heat%29+-+Dublin+Test+Status

Known Issues and Workarounds
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1) NAT is installed on both the BRG and the vBNG. To allow SDNC to send BRG configuration messages through the vBNG, the SDNC host VM IP address is preloaded on the BRG and vBNG and provisioned into their firewalls. If SDNC moves to a different host VM, its IP address changes and must be updated manually in /opt/config/sdnc_ip.txt. Then run:

::

    root>vppctl tap delete tap-0
    root>vppctl tap delete tap-1
    root>/opt/nat_service.sh
    root>vppctl restart
