Update release notes for Frankfurt Maintenance release

Update testsuite to 1.6.4
Fix doc links (submodules lead to broken links)

Issue-ID: INT-1652

Signed-off-by: mrichomme <morgan.richomme@orange.com>
Change-Id: Id83b1b589216317cd755f9d2eb844c6dfb1029c9
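
Note on the link fixes: every broken link below follows the same pattern, a
readthedocs submodule URL rewritten to a per-project docs.onap.org URL. A
minimal sketch of the rewrite, assuming the repo-to-project mapping visible in
the hunks below (the target branch varies between frankfurt and latest, so it
is a parameter here):

    import re

    # Illustrative only: repo -> docs.onap.org project names, taken from
    # the hunks below; not a complete registry of ONAP doc projects.
    PROJECT_MAP = {
        "dcaegen2": "onap-dcaegen2",
        "integration": "onap-integration",
        "multicloud/framework": "onap-multicloud-framework",
    }

    SUBMODULE_URL = re.compile(
        r"https://(?:onap\.readthedocs\.io|docs\.onap\.org)"
        r"/en/latest/submodules/(?P<repo>.+?)\.git/docs/(?P<page>[^#]+)")

    def rewrite(url, branch="frankfurt"):
        """Map a submodule-style doc URL to the per-project scheme."""
        m = SUBMODULE_URL.match(url)
        if not m or m.group("repo") not in PROJECT_MAP:
            return url  # leave unknown links untouched
        return "https://docs.onap.org/projects/{}/en/{}/{}".format(
            PROJECT_MAP[m.group("repo")], branch, m.group("page"))

    # rewrite("https://onap.readthedocs.io/en/latest/submodules/"
    #         "integration.git/docs/docs_scaleout.html")
    # -> https://docs.onap.org/projects/onap-integration/en/frankfurt/docs_scaleout.html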
diff --git a/docs/docs_5G_oof_pci.rst b/docs/docs_5G_oof_pci.rst
index 6c0a260..8edabf4 100644
--- a/docs/docs_5G_oof_pci.rst
+++ b/docs/docs_5G_oof_pci.rst
@@ -41,7 +41,7 @@
 - In addition, the first step towards O-RAN alignment is being taken with SDN-C (R) being able to receive a DMaaP
   message containing configuration updates (which would be triggered when a neighbor-list-change occurs in the RAN
  and is communicated to ONAP over VES). Details of this implementation are available at:
-    https://wiki.onap.org/display/DW/CM+Notification+Support+in+ONAP
+  https://wiki.onap.org/display/DW/CM+Notification+Support+in+ONAP
 
 
  The end-to-end setup for the use case requires a Config DB which stores the cell-related details of the RAN.
@@ -95,7 +95,7 @@
 
 Son-Handler installation:
 
-https://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/services/son-handler/installation.html
+https://docs.onap.org/projects/onap-dcaegen2/en/frankfurt/sections/services/son-handler/installation.html?highlight=dcaegen2
 
 
 Test Status and Plans
diff --git a/docs/docs_5g_rtpm.rst b/docs/docs_5g_rtpm.rst
index eaed678..5ecab4b 100644
--- a/docs/docs_5g_rtpm.rst
+++ b/docs/docs_5g_rtpm.rst
@@ -18,8 +18,8 @@
 
 Component and API descriptions can be found under:
 
-- `High Volume VNF Event Streaming (HV-VES) Collector <https://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/services/ves-hv/index.html>`_
-- `HV-VES (High Volume VES) <https://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/apis/ves-hv/index.html#hv-ves-high-volume-ves>`_
+- `High Volume VNF Event Streaming (HV-VES) Collector <https://docs.onap.org/projects/onap-dcaegen2/en/frankfurt/sections/services/ves-hv/index.html>`_
+- `HV-VES (High Volume VES) <https://docs.onap.org/projects/onap-dcaegen2/en/frankfurt/sections/apis/ves-hv/index.html#hv-ves-high-volume-ves>`_
 
 How to verify
 ~~~~~~~~~~~~~
diff --git a/docs/docs_CM_flexible_designer_orchestrator.rst b/docs/docs_CM_flexible_designer_orchestrator.rst
index 3a9dd7b..0cfd703 100644
--- a/docs/docs_CM_flexible_designer_orchestrator.rst
+++ b/docs/docs_CM_flexible_designer_orchestrator.rst
@@ -287,4 +287,4 @@
 are available to test with your vNF. Please refer to the Scale out
 release notes for further information.
 
-https://onap.readthedocs.io/en/latest/submodules/integration.git/docs/docs_scaleout.html#docs-scaleout
+https://docs.onap.org/projects/onap-integration/en/frankfurt/docs_scaleout.html
diff --git a/docs/docs_CM_schedule_optimizer.rst b/docs/docs_CM_schedule_optimizer.rst
index 9da2e53..28946b5 100644
--- a/docs/docs_CM_schedule_optimizer.rst
+++ b/docs/docs_CM_schedule_optimizer.rst
@@ -1,15 +1,15 @@
 .. This work is licensed under a Creative Commons Attribution 4.0
    International License. http://creativecommons.org/licenses/by/4.0
-   
-.. _docs_CM_schedule_optimizer: 
 
-Change Management Schedule Optimization 
+.. _docs_CM_schedule_optimizer:
+
+Change Management Schedule Optimization
 -------------------------------------------------------------
 
-Description 
+Description
 ~~~~~~~~~~~~~~
 
-The change management schedule optimizer automatically identifies a conflict-free schedule for executing changes across multiple network function instances. It takes into account constraints such as concurrency limits (how many instances can be executed simultaneously), time preferences (e.g., night time maintenance windows with low traffic volumes) and applies optimization techniques to generate schedules. 
+The change management schedule optimizer automatically identifies a conflict-free schedule for executing changes across multiple network function instances. It takes into account constraints such as concurrency limits (how many instances can be executed simultaneously), time preferences (e.g., night time maintenance windows with low traffic volumes) and applies optimization techniques to generate schedules.
 
-More details can be found here: 
-https://onap.readthedocs.io/en/latest/submodules/optf/cmso.git/docs/index.html
\ No newline at end of file
+More details can be found here:
+https://docs.onap.org/projects/onap-optf-cmso/en/latest/index.html#master-index
diff --git a/docs/docs_vFW_CNF_CDS.rst b/docs/docs_vFW_CNF_CDS.rst
index 77b618e..26bfe08 100644
--- a/docs/docs_vFW_CNF_CDS.rst
+++ b/docs/docs_vFW_CNF_CDS.rst
@@ -190,21 +190,21 @@
 - Instantiation broker
 
     The broker implements the `infra_workload`_ API used to handle vf-module instantiation requests coming from the SO. User directives were replaced by SDNC directives, which also changes how the a'la carte instantiation method works from the VID. There is no need to specify the user directives delivered from a separate file; instead, SDNC directives are delivered through SDNC preloading (a'la carte instantiation) or through the resource assignment performed by the CDS (Macro flow instantiation).
-    
-    
+
+
     For helm package instantiation, the following parameters have to be delivered in the SDNC directives:
-    
-    
+
+
     ======================== ==============================================
-    
+
     Variable                 Description
-    
+
     ------------------------ ----------------------------------------------
-    
-    k8s-rb-profile-name      Name of the override profile 
-    
+
+    k8s-rb-profile-name      Name of the override profile
+
     k8s-rb-profile-namespace Name of the namespace for created helm package
-    
+
     ======================== ==============================================
 
 - Default profile support was added to the plugin
@@ -293,7 +293,7 @@
         chartpath: templates/deployment.yaml
 
 
-Above we have exemplary manifest file of the RB profile. Since Frankfurt *override_values.yaml* file does not need to be used as instantiation values are passed to the plugin over Instance API of k8s plugin. In the example profile contains additional k8s helm template which will be added on demand 
+Above is an exemplary manifest file of the RB profile. Since Frankfurt, the *override_values.yaml* file does not need to be used, as instantiation values are passed to the plugin over the Instance API of the k8s plugin. In this example the profile contains an additional k8s helm template which will be added on demand
 to the helm package during its installation. In our case, depending on the SO instantiation request input parameters, the vPGN helm package can be enriched with an additional ssh service. Such a service will be dynamically added to the profile by CDS, and later on CDS will upload the whole custom RB profile to the multicloud/k8s plugin.
 
 In order to support generation and upload of the profile, our vFW CBA model has an enhanced **resource-assignment** workflow which contains additional steps, **profile-modification** and **profile-upload**. For the last step a custom Kotlin script included in the CBA is used to upload the K8S profile into the multicloud/k8s plugin.
@@ -337,7 +337,7 @@
             }
         },
 
-Profile generation step uses embedded into CDS functionality of templates processing and on its basis ssh port number (specified in the SO request as vpg-management-port) is included in the ssh service helm template. 
+The profile generation step uses the template processing functionality embedded in CDS; on that basis the ssh port number (specified in the SO request as vpg-management-port) is included in the ssh service helm template.
 
 ::
 
@@ -361,7 +361,7 @@
       chart: {{ .Chart.Name }}
 
 The upload of the profile is conducted with the CDS capability to execute Kotlin scripts, which allows defining any required controller logic. In our case we use it to implement the decision point and the mechanisms of profile generation and upload.
-During the generation CDS extracts the RB profile template included in the CBA, includes there generated ssh service helm template, modifies the manifest of RB template by adding there ssh service and after its archivisation sends the profile to 
+During generation, CDS extracts the RB profile template included in the CBA, inserts the generated ssh service helm template, modifies the manifest of the RB template by adding the ssh service there and, after archiving it, sends the profile to
 the k8s plugin.
 
 ::
@@ -2489,7 +2489,7 @@
 .. _SDC-2776: https://jira.onap.org/browse/SDC-2776
 .. _MULTICLOUD-941: https://jira.onap.org/browse/MULTICLOUD-941
 .. _CCSDK-2155: https://jira.onap.org/browse/CCSDK-2155
-.. _infra_workload: https://docs.onap.org/en/latest/submodules/multicloud/framework.git/docs/specs/multicloud_infra_workload.html
+.. _infra_workload: https://docs.onap.org/projects/onap-multicloud-framework/en/latest/specs/multicloud_infra_workload.html?highlight=multicloud
 .. _SDNC-1116: https://jira.onap.org/browse/SDNC-1116
 .. _SO-2727: https://jira.onap.org/browse/SO-2727
 .. _SDNC-1109: https://jira.onap.org/browse/SDNC-1109
diff --git a/docs/docs_vfwHPA.rst b/docs/docs_vfwHPA.rst
index 015b725..ed64e5e 100644
--- a/docs/docs_vfwHPA.rst
+++ b/docs/docs_vfwHPA.rst
@@ -219,7 +219,7 @@
 
         }'
 
-9. Register new cloud regions. This can be done using instructions (Step 1 to Step 3) on this `page <https://onap.readthedocs.io/en/latest/submodules/multicloud/framework.git/docs/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html#tutorial-onboard-instance-of-wind-river-titanium-cloud>`_. The already existing CloudOwner and cloud complex can be used. If step 3 does not work using the k8s ip and external port. It can be done using the internal ip address and port. Exec into any pod and run the command from the pod.
+9. Register new cloud regions. This can be done using the instructions (Step 1 to Step 3) on this `page <https://docs.onap.org/projects/onap-multicloud-framework/en/latest/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html?highlight=multicloud>`_. The already existing CloudOwner and cloud complex can be used. If step 3 does not work using the k8s ip and external port, it can be done using the internal ip address and port. Exec into any pod and run the command from the pod.
 
 - Get msb-iag internal ip address and port
 
diff --git a/docs/docs_vfw_edgex_k8s.rst b/docs/docs_vfw_edgex_k8s.rst
index a25b349..e860fee 100644
--- a/docs/docs_vfw_edgex_k8s.rst
+++ b/docs/docs_vfw_edgex_k8s.rst
@@ -280,7 +280,7 @@
 
 An example is shown below for a K8s cloud, following steps 1, 2 and 3
 from
-`here <https://onap.readthedocs.io/en/latest/submodules/multicloud/framework.git/docs/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html#tutorial-onboard-instance-of-wind-river-titanium-cloud>`__.
+`here <https://docs.onap.org/projects/onap-multicloud-framework/en/latest/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html?highlight=multicloud>`__.
 The sample input below is for k8s cloud type.
 
 **Step 1**: Cloud Registration/ Create a cloud region to represent the instance
diff --git a/docs/docs_vipsec.rst b/docs/docs_vipsec.rst
index 755d4c0..4ec8c6f 100644
--- a/docs/docs_vipsec.rst
+++ b/docs/docs_vipsec.rst
@@ -28,7 +28,7 @@
 
 
 1. Check that all the required components were deployed;
-   
+
  ``oom-rancher# helm list``
 
 2. Check the state of the pods;
@@ -37,14 +37,14 @@
 
 3. Run robot health check
 
-   ``oom-rancher# cd oom/kubernetes/robot``   
+   ``oom-rancher# cd oom/kubernetes/robot``
 
    ``oom-rancher# ./ete-k8s.sh onap health``
 
    Ensure all the required components pass the health tests
 4. Modify the SO bpmn configmap to change the SO vnf adapter endpoint to v2
-  
-   ``oom-rancher#    kubectl -n onap edit configmap dev-so-so-bpmn-infra-app-configmap`` 
+
+   ``oom-rancher#    kubectl -n onap edit configmap dev-so-so-bpmn-infra-app-configmap``
 
 			``- vnf:``
 
@@ -73,7 +73,7 @@
 
   ``oom-rancher# ./demo-k8s.sh onap init``
 
-7. Create HPA flavors in cloud regions to be registered with ONAP. All HPA flavor names must start with onap. During our tests, 3 cloud regions were registered and we created flavors in each cloud. The flavors match the flavors described in the test plan `here <https://wiki.onap.org/pages/viewpage.action?pageId=41421112>`_. 
+7. Create HPA flavors in the cloud regions to be registered with ONAP. All HPA flavor names must start with onap. During our tests, 3 cloud regions were registered and we created flavors in each cloud. The flavors match those described in the test plan `here <https://wiki.onap.org/pages/viewpage.action?pageId=41421112>`_.
 
 - **Cloud Region One**
 
@@ -81,7 +81,7 @@
      ``#nova flavor-create onap.hpa.flavor11 111 8 20 2``
 
      ``#nova flavor-key onap.hpa.flavor11 set hw:mem_page_size=2048``
-    
+
     **Flavor12**
      ``#nova flavor-create onap.hpa.flavor12 112 12 20 2``
 
@@ -90,9 +90,9 @@
      ``#openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:3 aggr121``
 
      ``#openstack flavor set onap.hpa.flavor12 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:3``
-    
+
     **Flavor13**
-     ``#nova flavor-create onap.hpa.flavor13 113 12 20 2``  
+     ``#nova flavor-create onap.hpa.flavor13 113 12 20 2``
 
      ``#nova flavor-key onap.hpa.flavor13 set hw:mem_page_size=2048``
 
@@ -110,7 +110,7 @@
      ``#nova flavor-key onap.hpa.flavor21 set hw:cpu_policy=dedicated``
 
      ``#nova flavor-key onap.hpa.flavor21 set hw:cpu_thread_policy=isolate``
-    
+
     **Flavor22**
      ``#nova flavor-create onap.hpa.flavor22 222 12 20 2``
 
@@ -119,9 +119,9 @@
      ``#openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:2 aggr221``
 
      ``#openstack flavor set onap.hpa.flavor22 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:2``
-    
+
     **Flavor23**
-     ``#nova flavor-create onap.hpa.flavor23 223 12 20 2``  
+     ``#nova flavor-create onap.hpa.flavor23 223 12 20 2``
 
      ``#nova flavor-key onap.hpa.flavor23 set hw:mem_page_size=2048``
 
@@ -139,20 +139,20 @@
      ``#nova flavor-key onap.hpa.flavor31 set hw:cpu_policy=dedicated``
 
      ``#nova flavor-key onap.hpa.flavor31 set hw:cpu_thread_policy=isolate``
-    
+
     **Flavor32**
      ``#nova flavor-create onap.hpa.flavor32 332 8192 20 2``
 
      ``#nova flavor-key onap.hpa.flavor32 set hw:mem_page_size=1048576``
- 
+
     **Flavor33**
-     ``#nova flavor-create onap.hpa.flavor33 333 12 20 2``  
+     ``#nova flavor-create onap.hpa.flavor33 333 12 20 2``
 
      ``#nova flavor-key onap.hpa.flavor33 set hw:mem_page_size=2048``
 
      ``#openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:1 aggr331``
 
-     ``#openstack flavor set onap.hpa.flavor33 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:1`` 
+     ``#openstack flavor set onap.hpa.flavor33 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:1``
 
 
 8. Check that the cloud complex has the right values and update if it does not. Required values are;
@@ -205,7 +205,7 @@
 
         }'
 
-9. Register new cloud regions. This can be done using instructions (Step 1 to Step 3) on this `page <https://onap.readthedocs.io/en/latest/submodules/multicloud/framework.git/docs/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html#tutorial-onboard-instance-of-wind-river-titanium-cloud>`_. The already existing CloudOwner and cloud complex can be used. If step 3 does not work using the k8s ip and external port. It can be done using the internal ip address and port. Exec into any pod and run the command from the pod.
+9. Register new cloud regions. This can be done using the instructions (Step 1 to Step 3) on this `page <https://docs.onap.org/projects/onap-multicloud-framework/en/latest/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html?highlight=multicloud>`_. The already existing CloudOwner and cloud complex can be used. If step 3 does not work using the k8s ip and external port, it can be done using the internal ip address and port. Exec into any pod and run the command from the pod.
 
 - Get msb-iag internal ip address and port
 
@@ -215,7 +215,7 @@
 
   ``oom-rancher#  kubectl exec dev-oof-oof-6c848594c5-5khps -it -- bash``
 
-10. Put required subscription list into tenant for all the newly added cloud regions. An easy way to do this is to do a get on the default cloud region, copy the tenant information with the subscription. Then paste it in your put command and modify the region id, tenant-id, tenant-name and resource-version. 
+10. Put the required subscription list into the tenant for all the newly added cloud regions. An easy way to do this is to do a GET on the default cloud region and copy the tenant information with the subscription; then paste it into your PUT command and modify the region id, tenant-id, tenant-name and resource-version.
 
 **GET COMMAND**
 
@@ -360,14 +360,14 @@
                 }
             }'
 
-   
+
 11.  Onboard the vFW HPA template. The templates can be obtained from the `demo <https://github.com/onap/demo>`_ repo. The heat and env files used are located in demo/heat/vFW_HPA/vFW/. Create a zip file using the files. For onboarding instructions see steps 4 to 9 of `vFWCL instantiation, testing and debugging <https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging>`_. Note that in step 5, only one VSP is created. For the VSP the option to submit for testing in step 5cii was not shown, so you can check in and certify the VSP and proceed to step 6.
 
 12. Get the parameters (model info, model invariant id, etc.) required to create a service instance via rest. This can be done by creating a service instance via VID as in step 10 of `vFWCL instantiation, testing and debugging <https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging>`_. After creating the service instance, exec into the SO bpmn pod and look into the /app/logs/bpmn/debug.log file. Search for the service instance and look for its request details. Then populate the parameters required to create a service instance via rest in step 13 below.
 
 13. Create a service instance rest request but do not create the service instance yet. Specify OOF as the homing solution and multicloud as the orchestrator. Be sure to use a service instance name that does not exist and populate the parameters with values obtained in step 12.
 
-:: 
+::
 
     curl -k -X POST \
     http://{{k8s}}:30277/onap/so/infra/serviceInstances/v6 \
@@ -448,14 +448,14 @@
         "onapName": "SampleDemo",
         "policyScope": "OSDF_DUBLIN"
     }' 'https://pdp:8081/pdp/api/updatePolicy'
-    
+
 
 To delete a policy, use two commands below to delete from PDP and PAP
 
 **DELETE POLICY INSIDE PDP**
 
 ::
- 
+
     curl -k -v -H 'Content-Type: application/json' \
      -H 'Accept: application/json' \
      -H 'ClientAuth: cHl0aG9uOnRlc3Q=' \
@@ -468,7 +468,7 @@
 **DELETE POLICY INSIDE PAP**
 
 ::
-    
+
     curl -k -v -H 'Content-Type: application/json' \
     -H 'Accept: application/json' \
     -H 'ClientAuth: cHl0aG9uOnRlc3Q=' \
@@ -495,7 +495,7 @@
 
 
 
-Push Policy    
+Push Policy
 
 ::
 
@@ -506,7 +506,7 @@
         }' 'https://pdp:8081/pdp/api/pushPolicy'
 
 
-    
+
 17. Create Service Instance using step 13 above
 
 18. Check bpmn logs to ensure that OOF sent homing response and flavor directives.
@@ -538,7 +538,7 @@
                     "vnf-vms": []
                 },
 
-    
+
                 "vnf-parameters": [
                     {
                      "vnf-parameter-name":"vf_module_id",
@@ -787,13 +787,13 @@
                     "service-type": "8c071bd1-c361-4157-8282-3fef7689d32e",
                     "vnf-name": "ipsec-test",
                     "vnf-type": "Ipsec..base_vipsec..module-0"
-                    				
+
                 }
             }
         }}
-    
 
-Change parameters based on your environment. 
+
+Change parameters based on your environment.
 
 **Note**
 
@@ -804,5 +804,5 @@
     "service-type": "8c071bd1-c361-4157-8282-3fef7689d32e",  <-- same as Service Instance ID
     "vnf-name": "ipsec-test",  <-- name to be given to the vf module
     "vnf-type": "Ipsec..base_vipsec..module-0" <-- can be found on the VID - VF Module dialog screen - Model Name
-        
+
 21. Create vf module (11g of `vFWCL instantiation, testing and debugging <https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging>`_). If everything worked properly, you should see the stack created in your VIM (WR Titanium Cloud OpenStack in this case).
diff --git a/docs/functional-requirements.csv b/docs/functional-requirements.csv
index 5e75fb5..ad90917 100644
--- a/docs/functional-requirements.csv
+++ b/docs/functional-requirements.csv
@@ -6,6 +6,6 @@
 Enable PNF software version at onboarding,`wiki page <https://jira.onap.org/browse/REQ-88?src=confmacro>`__,A.Schmid
 xNF communication security enhancements, `wiki page <https://wiki.onap.org/display/DW/xNF+communication+security+enhancements+-+Tests+Description+and+Status>`__,M.Przybysz
 ETSI Alignment SO plugin to support SOL003 to connect to an external VNFM,`wiki page <https://wiki.onap.org/display/DW/ETSI+Alignment+Support>`__,F.Oliveira Byung-Woo Jun
-Integration of CDS as an Actor, `wiki page <https://docs.onap.org/en/latest/submodules/policy/parent.git/docs/development/actors/cds/cds.html>`__, B.Sakoto R.K.Verma Y.Malakov
+Integration of CDS as an Actor, `wiki page <https://docs.onap.org/projects/onap-ccsdk-cds/en/latest/CDS_Designer_Guide.html?highlight=actors%2Fcds>`__, B.Sakoto R.K.Verma Y.Malakov
 3rd Party Operational Domain Manager, `wiki page <https://wiki.onap.org/display/DW/Third-party+Operational+Domain+Manager>`__, D.Patel
 Configuration & persistency, `wiki page <https://wiki.onap.org/pages/viewpage.action?pageId=64003184>`__,Reshmasree c Swaminathan S
diff --git a/docs/release-notes.rst b/docs/release-notes.rst
index 4f38d58..80170dd 100644
--- a/docs/release-notes.rst
+++ b/docs/release-notes.rst
@@ -97,12 +97,14 @@
 Robot Test Suites
 -----------------
 
-Version: 1.6.3
+Version: 1.6.4
+..............
 
-:Release Date: 2020-06-03
-:sha1: 8f4f6f64eb4626433e6f32eeb146a71d3c840935
+:Release Date: 2020-07-07
+:sha1: f863e0060b9e0b13822074d0180cab11aed87ad5
+
 
 **New Features**
 
-- bug Fixes(Teardown, control loop, alotteed properties)
-- CI support for hvves, 5GBulkPm and pnf-registrate
+- Some corrections for vLB CDS
+- Change owning-entity-id from hard-coded to variable
diff --git a/test/security/check_certificates/check_certificates/check_certificates_validity.py b/test/security/check_certificates/check_certificates/check_certificates_validity.py
index a6fd9cd..c38a7dc 100644
--- a/test/security/check_certificates/check_certificates/check_certificates_validity.py
+++ b/test/security/check_certificates/check_certificates/check_certificates_validity.py
@@ -186,8 +186,15 @@
                     if test_port in nodeports_xfail_list:
                         error_waiver = True
                 else:  # internal mode
-                    test_url = service.spec.selector.app
                     test_port = port.port
+                    # in internal mode the service selector carries the app
+                    # name under either the legacy 'app' key or the newer
+                    # 'app.kubernetes.io/name' key
+                    try:
+                        test_url = service.spec.selector['app']
+                    except KeyError:
+                        test_url = service.spec.selector['app.kubernetes.io/name']
 
                 if test_port is not None:
                     LOGGER.info(
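
The try/except added above covers both selector label conventions. An
equivalent lookup without the exception flow, assuming service.spec.selector
is the plain dict the kubernetes client returns here (a sketch, not part of
the patch):

    def selector_app(selector):
        # internal-mode services carry the app name under either the legacy
        # 'app' key or the newer 'app.kubernetes.io/name' key
        name = selector.get('app') or selector.get('app.kubernetes.io/name')
        if name is None:
            raise KeyError("no app selector in {}".format(selector))
        return name

    # usage in the loop above: test_url = selector_app(service.spec.selector)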
@@ -259,6 +266,15 @@
             node_ports_type_error_list=node_ports_type_error_list,
             node_ports_reset_error_list=node_ports_reset_error_list).dump(
             '{}/certificates.html'.format(args.dir))
+    else:
+        jinja_env.get_template('cert-internal.html.j2').stream(
+            node_ports_list=node_ports_list,
+            node_ports_ssl_error_list=node_ports_ssl_error_list,
+            node_ports_connection_error_list=node_ports_connection_error_list,
+            node_ports_type_error_list=node_ports_type_error_list,
+            node_ports_reset_error_list=node_ports_reset_error_list).dump(
+            '{}/certificates.html'.format(args.dir))
+
     return success_criteria
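
The new else branch repeats the if branch except for the template name. If a
third report mode ever appears, the rendering could be factored out; a sketch,
assuming jinja_env, the node_ports_* lists and args are in scope as in the
function above (the nodeport template name is not shown in this hunk):

    def dump_report(template_name):
        # render the chosen jinja template with the shared result lists
        jinja_env.get_template(template_name).stream(
            node_ports_list=node_ports_list,
            node_ports_ssl_error_list=node_ports_ssl_error_list,
            node_ports_connection_error_list=node_ports_connection_error_list,
            node_ports_type_error_list=node_ports_type_error_list,
            node_ports_reset_error_list=node_ports_reset_error_list).dump(
                '{}/certificates.html'.format(args.dir))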