Insert SDNC ip pool automatically only if oom_mode is enabled
Change-Id: Ia1fe923f877cf76e51bcc1f23c9bf562a877b112
Signed-off-by: Bartek Grzybowski <b.grzybowski@partner.samsung.com>
Issue-ID: TEST-222
diff --git a/docs/docs_vCPE.rst b/docs/docs_vCPE.rst
index 12320d9..25ea35e 100644
--- a/docs/docs_vCPE.rst
+++ b/docs/docs_vCPE.rst
@@ -9,7 +9,7 @@
Description
~~~~~~~~~~~
-vCPE use case is based on Network Enhanced Residential Gateway architecture specified in Technical Report 317 (TR-317), which defines how service providers deploy residential broadband services like High Speed Internet Access. The use case implementation has infrastructure services and customer service. The common infrastructure services are deployed first and shared by all customers. The use case demonstrates ONAP capabilities to design, deploy, configure and control sophisticated services.
+The vCPE use case is based on the Network Enhanced Residential Gateway architecture specified in Technical Report 317 (TR-317), which defines how service providers deploy residential broadband services like High Speed Internet Access. The use case implementation consists of common infrastructure services and a per-customer service; the infrastructure services are deployed first and shared by all customers. The use case demonstrates ONAP's capabilities to design, deploy, configure and control sophisticated services.
More details on the vCPE use case can be found on the wiki page https://wiki.onap.org/pages/viewpage.action?pageId=3246168
@@ -25,12 +25,12 @@
1. Run the Robot script from the Rancher node to onboard VNFs and to create and distribute models for the four vCPE infrastructure services, i.e. infrastructure, brg, bng and gmux
-::
+::
demo-k8s.sh onap init
-
-2. Add route on sdnc cluster VM node, which is the cluster VM node where pod sdnc-sdnc-0 is running on. This will allow ONAP SDNC to configure BRG later on.
-
+
+2. Add a route on the sdnc cluster VM node, i.e. the cluster VM node where the sdnc-sdnc-0 pod is running (the kubectl sketch below shows how to find it). This will allow ONAP SDNC to configure the BRG later on.
+
::
ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3
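A minimal sketch for locating that cluster VM node (the -o wide output includes the hosting node; pod and interface names may differ per deployment):

::

    kubectl -n onap get pods -o wide | grep sdnc-sdnc-0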
@@ -39,15 +39,15 @@
3. Install Python and other Python libraries
::
-
+
integration/test/vcpe/bin/setup.sh
4. Change the OpenStack env parameters and one customer-service-related parameter in vcpecommon.py
-::
+::
- cloud = {
+ cloud = {
'--os-auth-url': 'http://10.12.25.2:5000',
'--os-username': 'xxxxxxxxxx',
'--os-user-domain-id': 'default',
@@ -56,64 +56,70 @@
'--os-region-name': 'RegionOne',
'--os-password': 'xxxxxxxxxxx',
'--os-project-domain-name': 'xxxxxxxxx' if oom_mode else 'Integration-SB-07',
- '--os-identity-api-version': '3'
- }
+ '--os-identity-api-version': '3'
+ }
- common_preload_config = {
+ common_preload_config = {
'oam_onap_net': 'xxxxxxxx' if oom_mode else 'oam_onap_lAky',
'oam_onap_subnet': 'xxxxxxxxxx' if oom_mode else 'oam_onap_lAky',
'public_net': 'xxxxxxxxx',
'public_net_id': 'xxxxxxxxxxxxx'
- }
+ }
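Once filled in, the OpenStack values can be sanity-checked by requesting a token (an illustrative sketch, not part of the original flow; substitute the same credentials as in the cloud dictionary):

::

    openstack --os-auth-url http://10.12.25.2:5000 --os-identity-api-version 3 \
        --os-username <user> --os-password <password> --os-project-name <project> token issue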
::
- # CHANGEME: vgw_VfModuleModelInvariantUuid is in rescust service csar, open service template with filename like service-VcpesvcRescust1118-template.yml and look for vfModuleModelInvariantUUID under groups vgw module metadata.
+    # CHANGEME: vgw_VfModuleModelInvariantUuid is in the rescust service csar; open the service template with a filename like service-VcpesvcRescust1118-template.yml and look for vfModuleModelInvariantUUID under the vgw module metadata in groups.
self.vgw_VfModuleModelInvariantUuid = 'xxxxxxxxxxxxxxx'
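One way to pull the UUID out of the service template named in the comment above (a sketch; the csar must be extracted first and the template filename varies per distribution):

::

    grep vfModuleModelInvariantUUID service-VcpesvcRescust1118-template.yml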
-5. Initialize vcpe
+5. If running with oom_mode=False, initialize the SDNC IP pool by running the command below from the k8s control node. Otherwise this is done automatically.
::
-
+
+ kubectl -n onap exec -it dev-sdnc-sdnc-0 -- /opt/sdnc/bin/addIpAddresses.sh VGW 10.5.0 22 250
+
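Note that the pod name above carries a deployment-specific prefix (dev- here, presumably the helm release name), so list the actual name first if unsure:

::

    kubectl -n onap get pods | grep sdnc-sdnc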
+6. Initialize vcpe
+
+::
+
vcpe.py init
-6. If running with oom_mode=False run a command printed at the end of the above step from k8s control node to insert vcpe customer service workflow entry in SO catalogdb. It will be done automatically otherwise.
+7. If running with oom_mode=False, run the command printed at the end of the above step from the k8s control node to insert the vcpe customer service workflow entry into the SO catalogdb. Otherwise this is done automatically.
-7. Run Robot to create and distribute for vCPE customer service. This step assumes step 1 has successfully distributed all vcpe models except customer service model
+8. Run Robot to create and distribute the vCPE customer service. This step assumes step 1 has successfully distributed all vcpe models except the customer service model
::
ete-k8s.sh onap distributevCPEResCust
-8. Instantiate vCPE infra services
+9. Instantiate vCPE infra services
::
vcpe.py infra
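Before proceeding, it can help to confirm the infra VMs actually booted; one illustrative check, assuming the openstack CLI is configured for the same tenant (server names are deployment-specific):

::

    openstack server list | grep -E 'brg|bng|mux'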
-9. From Rancher node run vcpe healthcheck command to check connectivity from sdnc to brg and gmux, and vpp configuration of brg and gmux.
+10. From the Rancher node, run the vcpe healthcheck command to check connectivity from sdnc to brg and gmux, and the vpp configuration of brg and gmux.
::
healthcheck-k8s.py --namespace <namespace name> --environment <env name>
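For example, with the deployment names used elsewhere in this guide (onap namespace, dev release prefix) the invocation might look like this (illustrative values only):

::

    healthcheck-k8s.py --namespace onap --environment dev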
-10. Instantiate vCPE customer service.
+11. Instantiate vCPE customer service.
::
vcpe.py customer
-11. Update libevel.so in vGMUX VM and restart the VM. This allows vGMUX to send events to VES collector in close loop test. See tutorial wiki for details
+12. Update libevel.so in the vGMUX VM and restart the VM. This allows vGMUX to send events to the VES collector in the closed loop test. See the tutorial wiki for details
-12. Run heatbridge. The heatbridge command usage: demo-k8s.sh <namespace> heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>, please refer to vCPE tutorial page on how to fill in those paraemters. See an example as following:
+13. Run heatbridge. The heatbridge command usage is: demo-k8s.sh <namespace> heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>; please refer to the vCPE tutorial page for how to fill in those parameters. An example follows:
::
~/integration/test/vcpe# ~/oom/kubernetes/robot/demo-k8s.sh onap heatbridge vcpe_vfmodule_e2744f48729e4072b20b_201811262136 d8914ef3-3fdb-4401-adfe-823ee75dc604 vCPEvGMUX 10.0.101.21
-13. Start closed loop test by triggering packet drop VES event, and monitor if vGMUX is restarting. You may need to run the command twice if the first run fails
+14. Start the closed loop test by triggering a packet drop VES event, and monitor whether vGMUX is restarting. You may need to run the command twice if the first run fails
-::
+::
vcpe.py loop
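Whether the loop actually restarted vGMUX can be watched from the OpenStack side; a minimal sketch, assuming the openstack CLI is available and the vGMUX server name contains "gmux" (names are deployment-specific):

::

    watch -n 10 'openstack server list -c Name -c Status | grep -i gmux'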