Integrated gateway and updated kube support
Restructured the test env to decouple the test engine from the components
Issue-ID: NONRTRIC-441
Signed-off-by: BjornMagnussonXA <bjorn.magnusson@est.tech>
Change-Id: I07c746741b1c5c964679545f0a12861e5e9f6292
diff --git a/test/common/README.md b/test/common/README.md
index 2422eb9..120a826 100644
--- a/test/common/README.md
+++ b/test/common/README.md
@@ -16,7 +16,7 @@
`compare_json.py` \
A python script to compare two json objects for equality. Note that the comparison always sorts json-arrays before comparing (that is, it does not care about the order of items within the array). In addition, the target json object may specify individual parameter values where equality is 'dont care'.
-`consult_cbs_function.sh` \
+`consul_cbs_function.sh` \
Contains functions for managing Consul and CBS as well as create the configuration for the PMS.
`control_panel_api_function.sh` \
@@ -49,9 +49,15 @@
`extract_sdnc_reply.py` \
A python script to extract the information from an sdnc (A1 Controller) reply json. Helper for the test environment.
+`gateway_api_functions.sh` \
+Contains functions for managing the Non-RT RIC Gateway
+
`http_proxy_api_functions.sh` \
Contains functions for managing the Http Proxy
+`kube_proxy_api_functions.sh` \
+Contains functions for managing the Kube Proxy - used to gain access to all services and pods inside a kube cluster.
+
`mr_api_functions.sh` \
Contains functions for managing the MR Stub and the Dmaap Message Router
@@ -66,7 +72,7 @@
`test_env*.sh` \
Common env variables for test in the auto-test dir. All configuration of port numbers, image names and version etc shall be made in this file.
-Used by the auto test scripts/suites but could be used for other test script as well. The test cases shall be started with the file for the intended target using command line argument '--env-file'. There are preconfigured env files, pattern 'test_env*.sh', in ../common.
+Used by the auto test scripts/suites but could be used for other test scripts as well. The test cases shall be started with the env file for the intended target using the command line argument '--env-file'.
`testcase_common.sh` \
Common functions for auto test cases in the auto-test dir. This script is the foundation of the auto test environment which sets up images and environment variables needed by this script as well as the scripts adapting to the APIs.
@@ -101,6 +107,13 @@
| `--use-release-image` | The script will use images from the nexus release repo for the supplied apps, space separated list of app short names |
| `help` | Print this info along with the test script description and the list of app short names supported |
+## Function: setup_testenvironment
+Main function to set up the test environment before any tests are started.
+Must be called right after sourcing all component scripts.
+| arg list |
+|--|
+| None |
+
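For illustration, a minimal auto-test script skeleton using this function could look as follows; the sourced component scripts, app names and test steps are examples only, not a prescription:

```bash
#!/bin/bash
# Illustrative skeleton of an auto-test script (names and steps are examples only)
TC_ONELINE_DESCR="Sample test"

. ../common/testcase_common.sh $@

# Source the function script for every component the test uses
. ../common/agent_api_functions.sh
. ../common/gateway_api_functions.sh
. ../common/kube_proxy_api_functions.sh

# Must be called right after sourcing all component scripts
setup_testenvironment

# ... test steps, e.g. clean_environment, start_gateway <config-file>, store_logs END ...
```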
## Function: indent1 ##
Indent every line of a command output with one space char.
| arg list |
@@ -155,16 +168,6 @@
| --------- | ----------- |
| `<deviation-message-to-print>` | Any text message describing the deviation. The text will also be printed in the test report. The intention is to mark known deviations, compared to required functionality |
-## Function: get_kube_sim_host ##
-Translate ric name to kube host name.
-| arg list |
-|--|
-| `<ric-name>` |
-
-| parameter | description |
-| --------- | ----------- |
-| `<ric-name>` | The name of the ric to translate into a host name (ip) |
-
## Function: clean_environment ##
Stop and remove all containers (docker) or resources (kubernetes). Containers not part of the test are not affected (docker only). Removes all resources started by previous kube tests (kube only).
| arg list |
@@ -188,37 +191,12 @@
| `<sleep-time-in-sec> ` | Number of seconds to sleep |
| `<any-text-in-quotes-to-be-printed>` | Optional. The text will be printed, if present |
-## Function: generate_uuid ##
-Geneate a UUID prefix to use along with the policy instance number when creating/deleting policies. Sets the env var UUID.
-UUID is then automatically added to the policy id in GET/PUT/DELETE.
-| arg list |
-|--|
-| None |
-
-## Function: check_policy_agent_logs ##
-Check the Policy Agent log for any warnings and errors and print the count of each.
-| arg list |
-|--|
-| None |
-
-## Function: check_ecs_logs ##
-Check the ECS log for any warnings and errors and print the count of each.
-| arg list |
-|--|
-| None |
-
## Function: check_control_panel_logs ##
Check the Control Panel log for any warnings and errors and print the count of each.
| arg list |
|--|
| None |
-## Function: check_sdnc_logs ##
-Check the SDNC log for any warnings and errors and print the count of each.
-| arg list |
-|--|
-| None |
-
## Function: store_logs ##
Take a snap-shot of all logs for all running containers and stores them in `./logs/<ATC-id>`. All logs will get the specified prefix in the file name. In general, one of the last steps in an auto-test script shall be to call this function. If logs shall be taken several times during a test script, different prefixes shall be used each time.
| arg list |
@@ -313,6 +291,13 @@
|--|
| `[<response-code>]*` |
+
+## Function: check_policy_agent_logs ##
+Check the Policy Agent log for any warnings and errors and print the count of each.
+| arg list |
+|--|
+| None |
+
## Function: api_equal() ##
Tests if the array length of a json array in the Policy Agent simulator is equal to a target value.
@@ -727,7 +712,7 @@
| `[<response-code>]*` | A space separated list of http response codes, may be empty to reset to 'no codes'. |
-# Description of functions in consult_cbs_function.sh #
+# Description of functions in consul_cbs_function.sh #
## Function: consul_config_app ##
@@ -799,6 +784,12 @@
|--|
| None |
+## Function: check_sdnc_logs ##
+Check the SDNC log for any warnings and errors and print the count of each.
+| arg list |
+|--|
+| None |
+
## Function: controller_api_get_A1_policy_ids ##
Test of GET policy ids towards OSC or STD type simulator.
To test response code only, provide the response code, 'OSC' + policy type or 'STD'
@@ -988,7 +979,11 @@
|--|
| None |
-# Description of functions in ecs_api_function.sh #
+## Function: check_ecs_logs ##
+Check the ECS log for any warnings and errors and print the count of each.
+| arg list |
+|--|
+| None |
## Function: ecs_equal ##
Tests if a variable value in the ECS is equal to a target value.
@@ -1275,6 +1270,59 @@
| `<response-code>` | Expected http response code |
| `<type>` | Type id, if the interface supports type in url |
+# Description of functions in gateway_api_functions.sh #
+
+
+## Function: use_gateway_http ##
+Use http for all calls to the gateway. This is set by default.
+| arg list |
+|--|
+| None |
+
+## Function: use_gateway_https ##
+Use https for all calls to the gateway.
+| arg list |
+|--|
+| None |
+
+## Function: set_gateway_debug ##
+Set debug level logging in the gateway
+| arg list |
+|--|
+| None |
+
+## Function: set_gateway_trace ##
+Set trace level logging in the gateway
+| arg list |
+|--|
+| None |
+
+## Function: start_gateway ##
+Start the gateway container in docker or kube depending on the start mode
+| arg list |
+|--|
+| None |
+
+## Function: gateway_pms_get_status ##
+Sample test of pms api (status)
+| arg list |
+|--|
+| `<response-code> ` |
+
+| parameter | description |
+| --------- | ----------- |
+| `<response-code>` | Expected http response code |
+
+## Function: gateway_ecs_get_types ##
+Sample test of ecs api (get ei type)
+Only response code tested - not payload
+| arg list |
+|--|
+| `<response-code> ` |
+
+| parameter | description |
+| --------- | ----------- |
+| `<response-code>` | Expected http response code |
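As a sketch of how these gateway functions are meant to be combined in an auto-test script (the config file path and expected response codes are illustrative only):

```bash
# Illustrative usage of the gateway functions in a test script
use_gateway_http                                  # http towards the gateway (default)
start_gateway $SIM_GROUP/ngw/application.yaml     # example config file path
set_gateway_debug                                 # optional: raise gateway log level
gateway_pms_get_status 200                        # expect 200 from the PMS status route
gateway_ecs_get_types 200                         # expect 200 from the ECS EI type route
```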
# Description of functions in http_proxy_api_functions.sh #
@@ -1284,6 +1332,15 @@
|--|
| None |
+# Description of functions in kube_proxy_api_functions.sh #
+
+## Function: start_kube_proxy ##
+Start the kube proxy container in kube. This proxy enables the test env to access all services and pods in a kube cluster.
+No proxy is started if the function is called in docker mode.
+| arg list |
+|--|
+| None |
+
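To sketch what the proxy provides: once started, the test scripts can reach cluster-internal services from the test host with plain curl via the proxy's node port (the variable name is the one used elsewhere in this change; the service name and port are illustrative):

```bash
# Illustrative: reach a cluster-internal service through the kube proxy.
# CLUSTER_KUBE_PROXY_NODEPORT is set by the test env when the proxy is started;
# the service name and port below are examples only.
curl --proxy http://localhost:$CLUSTER_KUBE_PROXY_NODEPORT \
     http://policymanagementservice.nonrtric:9080/status
```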
# Description of functions in mr_api_functions.sh #
## Function: use_mr_http ##
@@ -1612,6 +1669,23 @@
|`<count>`| An integer, 1 or greater. Specifies the number of simulators to start|
|`<interface-id>`| Shall be the interface id of the simulator. See the repo 'a1-interface' for the available ids. |
+## Function: get_kube_sim_host ##
+Translate ric name to kube host name.
+| arg list |
+|--|
+| `<ric-name>` |
+
+| parameter | description |
+| --------- | ----------- |
+| `<ric-name>` | The name of the ric to translate into a host name (ip) |
+
+## Function: generate_policy_uuid ##
+Generate a UUID prefix to use along with the policy instance number when creating/deleting policies. Sets the env var UUID.
+UUID is then automatically added to the policy id in GET/PUT/DELETE.
+| arg list |
+|--|
+| None |
+
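A sketch of how the UUID prefix and the numeric instance ids relate (the concrete UUID value is random; the format shown is only an example):

```bash
# Illustrative only - generate_policy_uuid sets UUID, and the create/delete functions
# then build each policy id as <UUID><instance-number>
generate_policy_uuid
echo $UUID                       # e.g. 2d6c1b7a-...   (random prefix)
# A batch of three policies starting at instance id 100 would then use the ids
#   ${UUID}100  ${UUID}101  ${UUID}102
```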
## Function: sim_equal ##
Tests if a variable value in the RIC simulator is equal to a target value.
Without the timeout, the test sets pass or fail immediately depending on if the variable is equal to the target or not.
diff --git a/test/common/agent_api_functions.sh b/test/common/agent_api_functions.sh
index 2f1f543..11cc145 100644
--- a/test/common/agent_api_functions.sh
+++ b/test/common/agent_api_functions.sh
@@ -19,6 +19,64 @@
# This is a script that contains management and test functions for Policy Agent
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__PA_imagesetup() {
+ __check_and_create_image_var PA "POLICY_AGENT_IMAGE" "POLICY_AGENT_IMAGE_BASE" "POLICY_AGENT_IMAGE_TAG" $1 "$POLICY_AGENT_DISPLAY_NAME"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that does not allow overriding
+# Both var may contain: 'remote', 'remote-remove' or 'local'
+__PA_imagepull() {
+ __check_and_pull_image $1 "$POLICY_AGENT_DISPLAY_NAME" $POLICY_AGENT_APP_NAME $POLICY_AGENT_IMAGE
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__PA_imagebuild() {
+ echo -e $RED" Image for app PA shall never be built"$ERED
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__PA_image_data() {
+ echo -e "$POLICY_AGENT_DISPLAY_NAME\t$(docker images --format $1 $POLICY_AGENT_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then do no action.
+# This function is called for apps fully managed by the test script
+__PA_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest PA
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then do no action.
+# This function is called for prestarted apps not managed by the test script.
+__PA_kube_scale_zero_and_wait() {
+ __kube_scale_and_wait_all_resources $KUBE_NONRTRIC_NAMESPACE app nonrtric-policymanagementservice
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__PA_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest PA
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__PA_store_docker_logs() {
+ docker logs $POLICY_AGENT_APP_NAME > $1$2_policy-agent.log 2>&1
+}
+
+#######################################################
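These `__PA_*` functions are the per-component hooks that the restructured test engine calls; the engine itself only relies on the naming convention `__<app-short-name>_<hook>`. A hypothetical sketch of that dispatch (not the actual testcase_common.sh code; the variable names are illustrative):

```bash
# Hypothetical sketch of hook dispatch by naming convention
for app in $APP_SHORT_NAMES; do              # e.g. "PA CR CP SDNC NGW"
    hook="__${app}_imagesetup"
    if [ "$(type -t $hook)" == "function" ]; then
        $hook "$IMAGE_TAG_SUFFIX"            # suffix selecting staging/snapshot/release
    fi
done
```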
## Access to Policy agent
# Host name may be changed if app started by kube
@@ -228,6 +286,7 @@
export POLICY_AGENT_CONFIG_MOUNT_PATH
export POLICY_AGENT_CONFIG_FILE
export POLICY_AGENT_PKG_NAME
+ export POLICY_AGENT_DISPLAY_NAME
if [ $1 == "PROXY" ]; then
AGENT_HTTP_PROXY_CONFIG_PORT=$HTTP_PROXY_CONFIG_PORT #Set if proxy is started
@@ -249,7 +308,7 @@
envsubst < $2 > $dest_file
- __start_container $POLICY_AGENT_COMPOSE_DIR NODOCKERARGS 1 $POLICY_AGENT_APP_NAME
+ __start_container $POLICY_AGENT_COMPOSE_DIR "" NODOCKERARGS 1 $POLICY_AGENT_APP_NAME
__check_service_start $POLICY_AGENT_APP_NAME $PA_PATH$POLICY_AGENT_ALIVE_URL
fi
@@ -264,6 +323,7 @@
cp $1 $data_json
output_yaml=$PWD/tmp/pa_cfd.yaml
__kube_create_configmap $POLICY_AGENT_APP_NAME"-data" $KUBE_NONRTRIC_NAMESPACE autotest PA $data_json $output_yaml
+ echo ""
}
@@ -309,6 +369,13 @@
return
}
+# Check the agent logs for WARNINGs and ERRORs
+# args: -
+# (Function for test scripts)
+check_policy_agent_logs() {
+ __check_container_logs "Policy Agent" $POLICY_AGENT_APP_NAME $POLICY_AGENT_LOGPATH WARN ERR
+}
+
#########################################################
#### Test case functions A1 Policy management service
#########################################################
@@ -859,6 +926,13 @@
urlbase=${PA_ADAPTER}${query}
+ httpproxy="NOPROXY"
+ if [ $RUNMODE == "KUBE" ]; then
+ if [ ! -z "$CLUSTER_KUBE_PROXY_NODEPORT" ]; then
+ httpproxy="http://localhost:$CLUSTER_KUBE_PROXY_NODEPORT"
+ fi
+ fi
+
for ((i=1; i<=$pids; i++))
do
uuid=$UUID
@@ -867,9 +941,9 @@
fi
echo "" > "./tmp/.pid${i}.res.txt"
if [ "$PMS_VERSION" == "V2" ]; then
- echo $resp_code $urlbase $ric_base $num_rics $uuid $start_id $serv $type $transient $noti $template $count $pids $i > "./tmp/.pid${i}.txt"
+ echo $resp_code $urlbase $ric_base $num_rics $uuid $start_id $serv $type $transient $noti $template $count $pids $i $httpproxy > "./tmp/.pid${i}.txt"
else
- echo $resp_code $urlbase $ric_base $num_rics $uuid $start_id $template $count $pids $i > "./tmp/.pid${i}.txt"
+ echo $resp_code $urlbase $ric_base $num_rics $uuid $start_id $template $count $pids $i $httpproxy > "./tmp/.pid${i}.txt"
fi
echo $i
done | xargs -n 1 -I{} -P $pids bash -c '{
@@ -1047,6 +1121,13 @@
urlbase=${PA_ADAPTER}${query}
+ httpproxy="NOPROXY"
+ if [ $RUNMODE == "KUBE" ]; then
+ if [ ! -z "$CLUSTER_KUBE_PROXY_NODEPORT" ]; then
+ httpproxy="http://localhost:$CLUSTER_KUBE_PROXY_NODEPORT"
+ fi
+ fi
+
for ((i=1; i<=$pids; i++))
do
uuid=$UUID
@@ -1054,7 +1135,7 @@
uuid="NOUUID"
fi
echo "" > "./tmp/.pid${i}.del.res.txt"
- echo $resp_code $urlbase $num_rics $uuid $start_id $count $pids $i > "./tmp/.pid${i}.del.txt"
+ echo $resp_code $urlbase $num_rics $uuid $start_id $count $pids $i $httpproxy> "./tmp/.pid${i}.del.txt"
echo $i
done | xargs -n 1 -I{} -P $pids bash -c '{
arg=$(echo {})
diff --git a/test/common/api_curl.sh b/test/common/api_curl.sh
index 1ea47dd..daa2c29 100644
--- a/test/common/api_curl.sh
+++ b/test/common/api_curl.sh
@@ -28,6 +28,12 @@
__do_curl_to_api() {
TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")
echo " (${BASH_LINENO[0]}) - ${TIMESTAMP}: ${FUNCNAME[0]}" $@ >> $HTTPLOG
+ proxyflag=""
+ if [ $RUNMODE == "KUBE" ]; then
+ if [ ! -z "$CLUSTER_KUBE_PROXY_NODEPORT" ]; then
+ proxyflag=" --proxy http://localhost:$CLUSTER_KUBE_PROXY_NODEPORT"
+ fi
+ fi
paramError=0
input_url=$3
if [ $# -gt 0 ]; then
@@ -50,6 +56,10 @@
__ADAPTER=$RC_ADAPTER
__ADAPTER_TYPE=$RC_ADAPTER_TYPE
__RETRY_CODES=""
+ elif [ $1 == "NGW" ]; then
+ __ADAPTER=$NGW_ADAPTER
+ __ADAPTER_TYPE=$NGW_ADAPTER_TYPE
+ __RETRY_CODES=""
else
paramError=1
fi
@@ -125,7 +135,7 @@
if [ $__ADAPTER_TYPE == "REST" ]; then
url=" "${__ADAPTER}${input_url}
oper=" -X "$oper
- curlString="curl -k "${oper}${timeout}${httpcode}${accept}${content}${url}${file}
+ curlString="curl -k $proxyflag "${oper}${timeout}${httpcode}${accept}${content}${url}${file}
echo " CMD: "$curlString >> $HTTPLOG
if [ $# -eq 4 ]; then
echo " FILE: $(<$4)" >> $HTTPLOG
@@ -174,7 +184,7 @@
#urlencode the request url since it will be carried by send-request url
requestUrl=$(python3 -c "from __future__ import print_function; import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))" "$input_url")
url=" "${__ADAPTER}"/send-request?url="${requestUrl}"&operation="${oper}
- curlString="curl -k -X POST${timeout}${httpcode}${content}${url}${file}"
+ curlString="curl -k $proxyflag -X POST${timeout}${httpcode}${content}${url}${file}"
echo " CMD: "$curlString >> $HTTPLOG
res=$($curlString)
retcode=$?
@@ -200,7 +210,7 @@
cid=$3
fi
url=" "${__ADAPTER}"/receive-response?correlationid="${cid}
- curlString="curl -k -X GET"${timeout}${httpcode}${url}
+ curlString="curl -k $proxyflag -X GET"${timeout}${httpcode}${url}
echo " CMD: "$curlString >> $HTTPLOG
res=$($curlString)
retcode=$?
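The net effect of the new `proxyflag` is that, in KUBE mode, every curl issued by `__do_curl_to_api` is tunneled through the kube proxy. A minimal, runnable sketch of the same pattern (the target URL and port are illustrative; the real function also adds timeout and response-code flags):

```bash
#!/bin/bash
# Minimal sketch of the proxy-flag pattern used in __do_curl_to_api
RUNMODE=${RUNMODE:-DOCKER}
proxyflag=""
if [ "$RUNMODE" == "KUBE" ]; then
    if [ ! -z "$CLUSTER_KUBE_PROXY_NODEPORT" ]; then
        proxyflag=" --proxy http://localhost:$CLUSTER_KUBE_PROXY_NODEPORT"
    fi
fi
# Example target URL - illustrative only
curl -k $proxyflag -sw "%{http_code}" "http://localhost:8081/status"
```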
diff --git a/test/common/clean_kube.sh b/test/common/clean_kube.sh
index ea17eee..8e2f676 100755
--- a/test/common/clean_kube.sh
+++ b/test/common/clean_kube.sh
@@ -31,7 +31,7 @@
__kube_delete_all_resources() {
echo "Deleting all from namespace: "$1
namespace=$1
- resources="deployments replicaset statefulset services pods configmaps pvc"
+ resources="deployments replicaset statefulset services pods configmaps pvc pv"
deleted_resourcetypes=""
for restype in $resources; do
result=$(kubectl get $restype -n $namespace -o jsonpath='{.items[?(@.metadata.labels.autotest)].metadata.name}')
diff --git a/test/common/consul_cbs_functions.sh b/test/common/consul_cbs_functions.sh
index a3c08fa..c783a61 100644
--- a/test/common/consul_cbs_functions.sh
+++ b/test/common/consul_cbs_functions.sh
@@ -19,6 +19,124 @@
# This is a script that contains container/service management functions and test functions for Consul/CBS
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__CONSUL_imagesetup() {
+ __check_and_create_image_var CONSUL "CONSUL_IMAGE" "CONSUL_IMAGE_BASE" "CONSUL_IMAGE_TAG" REMOTE_PROXY "$CONSUL_DISPLAY_NAME"
+
+}
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__CBS_imagesetup() {
+ __check_and_create_image_var CBS "CBS_IMAGE" "CBS_IMAGE_BASE" "CBS_IMAGE_TAG" REMOTE_RELEASE_ONAP "$CBS_DISPLAY_NAME"
+
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that does not allow overriding
+# Both var may contain: 'remote', 'remote-remove' or 'local'
+__CONSUL_imagepull() {
+ __check_and_pull_image $2 "$CONSUL_DISPLAY_NAME" $CONSUL_APP_NAME $CONSUL_IMAGE
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that does not allow overriding
+# Both var may contain: 'remote', 'remote-remove' or 'local'
+__CBS_imagepull() {
+ __check_and_pull_image $2 "$CBS_DISPLAY_NAME" $CBS_APP_NAME $CBS_IMAGE
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__CONSUL_imagebuild() {
+ echo -e $RED" Image for app CONSUL shall never be built"$ERED
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__CBS_imagebuild() {
+ echo -e $RED" Image for app CBS shall never be built"$ERED
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__CONSUL_image_data() {
+ echo -e "$CONSUL_DISPLAY_NAME\t$(docker images --format $1 $CONSUL_IMAGE)" >> $2
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__CBS_image_data() {
+ echo -e "$CBS_DISPLAY_NAME\t$(docker images --format $1 $CBS_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then do no action.
+# This function is called for apps fully managed by the test script
+__CONSUL_kube_scale_zero() {
+ echo -e $RED" Image for app CONSUL is not used in kube"$ERED
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then do no action.
+# This function is called for apps fully managed by the test script
+__CBS_kube_scale_zero() {
+ echo -e $RED" Image for app CBS is not used in kube"$ERED
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then do no action.
+# This function is called for prestarted apps not managed by the test script.
+__CONSUL_kube_scale_zero_and_wait() {
+ echo -e $RED" CONSUL app is not used in kube"$ERED
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then do no action.
+# This function is called for prestarted apps not managed by the test script.
+__CBS_kube_scale_zero_and_wait() {
+ echo -e $RED" CBS app is not used in kube"$ERED
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__CONSUL_kube_delete_all() {
+ echo -e $RED" CONSUL app is not used in kube"$ERED
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__CBS_kube_delete_all() {
+ echo -e $RED" CBS app is not used in kube"$ERED
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__CONSUL_store_docker_logs() {
+ docker logs $CONSUL_APP_NAME > $1/$2_consul.log 2>&1
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__CBS_store_docker_logs() {
+ docker logs $CBS_APP_NAME > $1$2_cbs.log 2>&1
+ body="$(__do_curl $LOCALHOST_HTTP:$CBS_EXTERNAL_PORT/service_component_all/$POLICY_AGENT_APP_NAME)"
+ echo "$body" > $1$2_consul_config.json 2>&1
+}
+
+#######################################################
+
CONSUL_PATH="http://$LOCALHOST:$CONSUL_EXTERNAL_PORT"
####################
@@ -144,8 +262,7 @@
echo $YELLOW"Warning: No rics found for the configuration"$EYELLOW
fi
else
- rics=$(docker ps | grep $RIC_SIM_PREFIX | awk '{print $NF}')
-
+ rics=$(docker ps --filter "name=$RIC_SIM_PREFIX" --filter "network=$DOCKER_SIM_NWNAME" --filter "status=running" --format {{.Names}})
if [ $? -ne 0 ] || [ -z "$rics" ]; then
echo -e $RED" FAIL - the names of the running RIC Simulator cannot be retrieved." $ERED
((RES_CONF_FAIL++))
@@ -164,6 +281,7 @@
else
ric_id=$ric
fi
+ echo " Found a1 sim: "$ric_id
config_json=$config_json"\n \"name\": \"$ric_id\","
config_json=$config_json"\n \"baseUrl\": \"$RIC_SIM_HTTPX://$ric:$RIC_SIM_PORT\","
if [ $1 == "SDNC" ]; then
@@ -208,8 +326,10 @@
export CBS_INTERNAL_PORT
export CBS_EXTERNAL_PORT
export CONSUL_HOST
+ export CONSUL_DISPLAY_NAME
+ export CBS_DISPLAY_NAME
- __start_container $CONSUL_CBS_COMPOSE_DIR NODOCKERARGS 2 $CONSUL_APP_NAME $CBS_APP_NAME
+ __start_container $CONSUL_CBS_COMPOSE_DIR "" NODOCKERARGS 2 $CONSUL_APP_NAME $CBS_APP_NAME
__check_service_start $CONSUL_APP_NAME "http://"$LOCALHOST_NAME":"$CONSUL_EXTERNAL_PORT$CONSUL_ALIVE_URL
__check_service_start $CBS_APP_NAME "http://"$LOCALHOST_NAME":"$CBS_EXTERNAL_PORT$CBS_ALIVE_URL
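The switch from `docker ps | grep` to explicit filters makes the simulator discovery immune to unrelated containers whose names merely contain the prefix. For illustration (prefix and network name are example values):

```bash
# Illustrative: discover running A1 simulator containers the same way
# consul_config_app now does
RIC_SIM_PREFIX=ricsim
DOCKER_SIM_NWNAME=nonrtric-docker-net
docker ps --filter "name=$RIC_SIM_PREFIX" --filter "network=$DOCKER_SIM_NWNAME" \
          --filter "status=running" --format '{{.Names}}'
# Expected output: one container name per line, e.g. ricsim_g1_1, ricsim_g2_1, ...
```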
diff --git a/test/common/control_panel_api_functions.sh b/test/common/control_panel_api_functions.sh
index 9f179a1..7620667 100644
--- a/test/common/control_panel_api_functions.sh
+++ b/test/common/control_panel_api_functions.sh
@@ -20,6 +20,66 @@
# This is a script that contains container/service management function
# and test functions for Control Panel
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__CP_imagesetup() {
+ __check_and_create_image_var CP "CONTROL_PANEL_IMAGE" "CONTROL_PANEL_IMAGE_BASE" "CONTROL_PANEL_IMAGE_TAG" $1 "$CONTROL_PANEL_DISPLAY_NAME"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that does not allow overriding
+# Both var may contain: 'remote', 'remote-remove' or 'local'
+__CP_imagepull() {
+ __check_and_pull_image $1 "$CONTROL_PANEL_DISPLAY_NAME" $CONTROL_PANEL_APP_NAME $CONTROL_PANEL_IMAGE
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__CP_imagebuild() {
+ echo -e $RED" Image for app CP shall never be built"$ERED
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__CP_image_data() {
+ echo -e "$CONTROL_PANEL_DISPLAY_NAME\t$(docker images --format $1 $CONTROL_PANEL_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then do no action.
+# This function is called for apps fully managed by the test script
+__CP_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest CP
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then do no action.
+# This function is called for prestarted apps not managed by the test script.
+__CP_kube_scale_zero_and_wait() {
+ echo -e " CP replicas kept as is"
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__CP_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest CP
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__CP_store_docker_logs() {
+ docker logs $CONTROL_PANEL_APP_NAME > $1$2_control-panel.log 2>&1
+}
+
+#######################################################
+
+
## Access to control panel
# Host name may be changed if app started by kube
# Direct access from script
@@ -105,10 +165,8 @@
export CONTROL_PANEL_CONFIG_FILE
export CP_CONFIG_CONFIGMAP_NAME=$CONTROL_PANEL_APP_NAME"-config"
- export POLICY_AGENT_EXTERNAL_SECURE_PORT
- export ECS_EXTERNAL_SECURE_PORT
- export POLICY_AGENT_DOMAIN_NAME=$POLICY_AGENT_APP_NAME.$KUBE_NONRTRIC_NAMESPACE
- export ECS_DOMAIN_NAME=$ECS_APP_NAME.$KUBE_NONRTRIC_NAMESPACE
+ export NGW_DOMAIN_NAME=$NRT_GATEWAY_APP_NAME.$KUBE_NONRTRIC_NAMESPACE
+ export NRT_GATEWAY_EXTERNAL_PORT
#Check if nonrtric namespace exists, if not create it
__kube_create_namespace $KUBE_NONRTRIC_NAMESPACE
@@ -116,7 +174,13 @@
# Create config map for config
datafile=$PWD/tmp/$CONTROL_PANEL_CONFIG_FILE
#Add config to properties file
+
+ #Trick to prevent these two vars from being replaced with empty strings in the config file by the envsubst cmd
+ export upstream='$upstream'
+ export uri='$uri'
+
envsubst < $1 > $datafile
+
output_yaml=$PWD/tmp/cp_cfc.yaml
__kube_create_configmap $CP_CONFIG_CONFIGMAP_NAME $KUBE_NONRTRIC_NAMESPACE autotest CP $datafile $output_yaml
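The `$upstream`/`$uri` export is needed because envsubst replaces every referenced-but-unset variable with an empty string, which would destroy those placeholders (nginx variables in the control panel config). Exporting each variable with its own literal text keeps it intact, as the small sketch below shows; the docker branch further down solves the same problem by passing envsubst an explicit list of variables to substitute:

```bash
# Sketch of the envsubst trick - keep $upstream/$uri literal while still
# substituting the real test-env variables
export upstream='$upstream'
export uri='$uri'
echo 'proxy_pass $upstream$uri; listen ${NRT_GATEWAY_EXTERNAL_PORT};' \
    | NRT_GATEWAY_EXTERNAL_PORT=9090 envsubst
# -> proxy_pass $upstream$uri; listen 9090;
```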
@@ -163,10 +227,32 @@
export CONTROL_PANEL_EXTERNAL_SECURE_PORT
export DOCKER_SIM_NWNAME
- __start_container $CONTROL_PANEL_COMPOSE_DIR NODOCKERARGS 1 $CONTROL_PANEL_APP_NAME
+ export CONTROL_PANEL_HOST_MNT_DIR
+ export CONTROL_PANEL_CONFIG_FILE
+ export CONTROL_PANEL_CONFIG_MOUNT_PATH
+
+ export NRT_GATEWAY_APP_NAME
+ export NRT_GATEWAY_EXTERNAL_PORT
+
+ export POLICY_AGENT_EXTERNAL_SECURE_PORT
+ export ECS_EXTERNAL_SECURE_PORT
+ export POLICY_AGENT_DOMAIN_NAME=$POLICY_AGENT_APP_NAME
+ export ECS_DOMAIN_NAME=$ECS_APP_NAME
+
+ export CONTROL_PANEL_HOST_MNT_DIR
+ export CONTROL_PANEL_CONFIG_MOUNT_PATH
+ export CONTROL_PANEL_CONFIG_FILE
+ export CONTROL_PANEL_DISPLAY_NAME
+ export NGW_DOMAIN_NAME=$NRT_GATEWAY_APP_NAME
+
+ dest_file=$SIM_GROUP/$CONTROL_PANEL_COMPOSE_DIR/$CONTROL_PANEL_HOST_MNT_DIR/$CONTROL_PANEL_CONFIG_FILE
+
+ envsubst '${NGW_DOMAIN_NAME},${NRT_GATEWAY_EXTERNAL_PORT},${POLICY_AGENT_EXTERNAL_SECURE_PORT},${ECS_EXTERNAL_SECURE_PORT},${POLICY_AGENT_DOMAIN_NAME},${ECS_DOMAIN_NAME}' < $1 > $dest_file
+ #envsubst < $1 > $dest_file
+
+ __start_container $CONTROL_PANEL_COMPOSE_DIR "" NODOCKERARGS 1 $CONTROL_PANEL_APP_NAME
__check_service_start $CONTROL_PANEL_APP_NAME $CP_PATH$CONTROL_PANEL_ALIVE_URL
fi
echo ""
}
-
diff --git a/test/common/controller_api_functions.sh b/test/common/controller_api_functions.sh
index 9e60175..d405501 100644
--- a/test/common/controller_api_functions.sh
+++ b/test/common/controller_api_functions.sh
@@ -19,6 +19,78 @@
# This is a script that contains container/service management functions and test functions for A1 Controller API
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__SDNC_imagesetup() {
+
+ sdnc_suffix_tag=$1
+
+ for oia_name in $ONAP_IMAGES_APP_NAMES; do
+ if [ "$oia_name" == "SDNC" ]; then
+ sdnc_suffix_tag="REMOTE_RELEASE_ONAP"
+ fi
+ done
+ __check_and_create_image_var SDNC "SDNC_A1_CONTROLLER_IMAGE" "SDNC_A1_CONTROLLER_IMAGE_BASE" "SDNC_A1_CONTROLLER_IMAGE_TAG" $sdnc_suffix_tag "$SDNC_DISPLAY_NAME"
+ __check_and_create_image_var SDNC "SDNC_DB_IMAGE" "SDNC_DB_IMAGE_BASE" "SDNC_DB_IMAGE_TAG" REMOTE_PROXY "SDNC DB"
+
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that does not allow overriding
+# Both var may contain: 'remote', 'remote-remove' or 'local'
+__SDNC_imagepull() {
+ __check_and_pull_image $1 "$SDNC_DISPLAY_NAME" $SDNC_APP_NAME $SDNC_A1_CONTROLLER_IMAGE
+ __check_and_pull_image $2 "SDNC DB" $SDNC_APP_NAME $SDNC_DB_IMAGE
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__SDNC_imagebuild() {
+ echo -e $RED" Image for app SDNC shall never be built"$ERED
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__SDNC_image_data() {
+ echo -e "$SDNC_DISPLAY_NAME\t$(docker images --format $1 $SDNC_A1_CONTROLLER_IMAGE)" >> $2
+ echo -e "SDNC DB\t$(docker images --format $1 $SDNC_DB_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then do no action.
+# This function is called for apps fully managed by the test script
+__SDNC_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest SDNC
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then do no action.
+# This function is called for prestarted apps not managed by the test script.
+__SDNC_kube_scale_zero_and_wait() {
+ echo -e " SDNC replicas kept as is"
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__SDNC_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest SDNC
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__SDNC_store_docker_logs() {
+ docker exec -t $SDNC_APP_NAME cat $SDNC_KARAF_LOG> $1$2_SDNC_karaf.log 2>&1
+}
+
+#######################################################
+
+
SDNC_HTTPX="http"
SDNC_HOST_NAME=$LOCALHOST_NAME
SDNC_PATH=$SDNC_HTTPX"://"$SDNC_HOST_NAME":"$SDNC_EXTERNAL_PORT
@@ -125,6 +197,8 @@
export SDNC_A1_TRUSTSTORE_PASSWORD
export SDNC_DB_APP_NAME
export SDNC_DB_IMAGE
+ export SDNC_USER
+ export SDNC_PWD
# Create service
input_yaml=$SIM_GROUP"/"$SDNC_COMPOSE_DIR"/"svc.yaml
@@ -132,7 +206,7 @@
__kube_create_instance service $SDNC_APP_NAME $input_yaml $output_yaml
# Create app
- input_yaml=$SIM_GROUP"/"$SDNC_COMPOSE_DIR"/"app.yaml
+ input_yaml=$SIM_GROUP"/"$SDNC_COMPOSE_DIR"/"$SDNC_KUBE_APP_FILE
output_yaml=$PWD/tmp/sdnc_app.yaml
__kube_create_instance app $SDNC_APP_NAME $input_yaml $output_yaml
@@ -179,8 +253,11 @@
export SDNC_EXTERNAL_SECURE_PORT
export SDNC_A1_TRUSTSTORE_PASSWORD
export DOCKER_SIM_NWNAME
+ export SDNC_DISPLAY_NAME
+ export SDNC_USER
+ export SDNC_PWD
- __start_container $SDNC_COMPOSE_DIR NODOCKERARGS 1 $SDNC_APP_NAME
+ __start_container $SDNC_COMPOSE_DIR $SDNC_COMPOSE_FILE NODOCKERARGS 1 $SDNC_APP_NAME
__check_service_start $SDNC_APP_NAME $SDNC_PATH$SDNC_ALIVE_URL
fi
@@ -188,7 +265,12 @@
return 0
}
-
+# Check the SDNC logs for WARNINGs and ERRORs
+# args: -
+# (Function for test scripts)
+check_sdnc_logs() {
+ __check_container_logs "SDNC A1 Controller" $SDNC_APP_NAME $SDNC_KARAF_LOG WARN ERROR
+}
# Generic function to query the RICs via the A1-controller API.
# args: <operation> <url> [<body>]
@@ -215,7 +297,13 @@
payload="./tmp/.sdnc.payload.json"
echo "$json" > $payload
echo " FILE ($payload) : $json" >> $HTTPLOG
- curlString="curl -skw %{http_code} -X POST $SDNC_API_PATH$1 -H accept:application/json -H Content-Type:application/json --data-binary @$payload"
+ proxyflag=""
+ if [ $RUNMODE == "KUBE" ]; then
+ if [ ! -z "$CLUSTER_KUBE_PROXY_NODEPORT" ]; then
+ proxyflag=" --proxy http://localhost:$CLUSTER_KUBE_PROXY_NODEPORT"
+ fi
+ fi
+ curlString="curl -skw %{http_code} $proxyflag -X POST $SDNC_API_PATH$1 -H accept:application/json -H Content-Type:application/json --data-binary @$payload"
echo " CMD: "$curlString >> $HTTPLOG
res=$($curlString)
retcode=$?
@@ -236,7 +324,7 @@
echo " JSON: "$body >> $HTTPLOG
reply="./tmp/.sdnc-reply.json"
echo "$body" > $reply
- res=$(python3 ../common/extract_sdnc_reply.py $reply)
+ res=$(python3 ../common/extract_sdnc_reply.py $SDNC_RESPONSE_JSON_KEY $reply)
echo " EXTRACED BODY+CODE: "$res >> $HTTPLOG
echo "$res"
return 0
diff --git a/test/common/cr_api_functions.sh b/test/common/cr_api_functions.sh
index bf490fc..d587c0c 100644
--- a/test/common/cr_api_functions.sh
+++ b/test/common/cr_api_functions.sh
@@ -19,6 +19,78 @@
# This is a script that contains container/service management functions and test functions for the Callback Receiver
+
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__CR_imagesetup() {
+ __check_and_create_image_var CR "CR_IMAGE" "CR_IMAGE_BASE" "CR_IMAGE_TAG" LOCAL "$CR_DISPLAY_NAME"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that does not allow overriding
+# Both var may contain: 'remote', 'remote-remove' or 'local'
+__CR_imagepull() {
+ echo -e $RED" Image for app CR shall never be pulled from remove repo"$ERED
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__CR_imagebuild() {
+ cd ../cr
+ echo " Building CR - $CR_DISPLAY_NAME - image: $CR_IMAGE"
+ docker build --build-arg NEXUS_PROXY_REPO=$NEXUS_PROXY_REPO -t $CR_IMAGE . &> .dockererr
+ if [ $? -eq 0 ]; then
+ echo -e $GREEN" Build Ok"$EGREEN
+ else
+ echo -e $RED" Build Failed"$ERED
+ ((RES_CONF_FAIL++))
+ cat .dockererr
+ echo -e $RED"Exiting...."$ERED
+ exit 1
+ fi
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__CR_image_data() {
+ echo -e "$CR_DISPLAY_NAME\t$(docker images --format $1 $CR_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then do no action.
+# This function is called for apps fully managed by the test script
+__CR_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_SIM_NAMESPACE autotest CR
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then do no action.
+# This function is called for prestarted apps not managed by the test script.
+__CR_kube_scale_zero_and_wait() {
+ echo -e $RED" CR app is not scaled in this state"$ERED
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__CR_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_SIM_NAMESPACE autotest CR
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__CR_store_docker_logs() {
+ docker logs $CR_APP_NAME > $1$2_cr.log 2>&1
+}
+
+#######################################################
+
+
## Access to Callback Receiver
# Host name may be changed if app started by kube
# Direct access from script
@@ -182,8 +254,9 @@
export CR_INTERNAL_SECURE_PORT
export CR_EXTERNAL_SECURE_PORT
export DOCKER_SIM_NWNAME
+ export CR_DISPLAY_NAME
- __start_container $CR_COMPOSE_DIR NODOCKERARGS 1 $CR_APP_NAME
+ __start_container $CR_COMPOSE_DIR "" NODOCKERARGS 1 $CR_APP_NAME
__check_service_start $CR_APP_NAME $CR_PATH$CR_ALIVE_URL
fi
diff --git a/test/common/create_policies_process.py b/test/common/create_policies_process.py
index 89bfde8..b97904b 100644
--- a/test/common/create_policies_process.py
+++ b/test/common/create_policies_process.py
@@ -31,13 +31,13 @@
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-#arg responsecode baseurl ric_base num_rics uuid startid templatepath count pids pid_id
+#arg responsecode baseurl ric_base num_rics uuid startid templatepath count pids pid_id proxy
data_out=""
url_out=""
try:
- if len(sys.argv) < 11:
- print("1Expected 11/14 args, got "+str(len(sys.argv)-1))
+ if len(sys.argv) < 12:
+ print("1Expected 12/15 args, got "+str(len(sys.argv)-1))
print (sys.argv[1:])
sys.exit()
responsecode=int(sys.argv[1])
@@ -46,9 +46,10 @@
num_rics=int(sys.argv[4])
uuid=str(sys.argv[5])
start=int(sys.argv[6])
+ httpproxy="NOPROXY"
if ("/v2/" in baseurl):
- if len(sys.argv) != 15:
- print("1Expected 14 args, got "+str(len(sys.argv)-1)+ ". Args: responsecode baseurl ric_base num_rics uuid startid service type transient notification-url templatepath count pids pid_id")
+ if len(sys.argv) != 16:
+ print("1Expected 15 args, got "+str(len(sys.argv)-1)+ ". Args: responsecode baseurl ric_base num_rics uuid startid service type transient notification-url templatepath count pids pid_id proxy")
print (sys.argv[1:])
sys.exit()
@@ -60,9 +61,10 @@
count=int(sys.argv[12])
pids=int(sys.argv[13])
pid_id=int(sys.argv[14])
+ httpproxy=str(sys.argv[15])
else:
- if len(sys.argv) != 11:
- print("1Expected 10 args, got "+str(len(sys.argv)-1)+ ". Args: responsecode baseurl ric_base num_rics uuid startid templatepath count pids pid_id")
+ if len(sys.argv) != 12:
+ print("1Expected 11 args, got "+str(len(sys.argv)-1)+ ". Args: responsecode baseurl ric_base num_rics uuid startid templatepath count pids pid_id proxy")
print (sys.argv[1:])
sys.exit()
@@ -70,7 +72,14 @@
count=int(sys.argv[8])
pids=int(sys.argv[9])
pid_id=int(sys.argv[10])
+ httpproxy=str(sys.argv[11])
+ proxydict=None
+ if httpproxy != "NOPROXY":
+ proxydict = {
+ "http" : httpproxy,
+ "https" : httpproxy
+ }
if uuid == "NOUUID":
uuid=""
@@ -115,8 +124,10 @@
url=baseurl+"&id="+uuid+str(i)+"&ric="+str(ric)
url_out=url
data_out=json.dumps(json.loads(payload))
-
- resp=requests.put(url, data_out, headers=headers, verify=False, timeout=90)
+ if proxydict is None:
+ resp=requests.put(url, data_out, headers=headers, verify=False, timeout=90)
+ else:
+ resp=requests.put(url, data_out, headers=headers, verify=False, timeout=90, proxies=proxydict)
except Exception as e1:
print("1Put failed for id:"+uuid+str(i)+ ", "+str(e1) + " "+traceback.format_exc())
sys.exit()
diff --git a/test/common/delete_policies_process.py b/test/common/delete_policies_process.py
index febb3cc..4ce8bc4 100644
--- a/test/common/delete_policies_process.py
+++ b/test/common/delete_policies_process.py
@@ -30,11 +30,11 @@
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-#arg responsecode baseurl num_rics uuid startid count pids pid_id
+#arg responsecode baseurl num_rics uuid startid count pids pid_id proxy
try:
- if len(sys.argv) != 9:
- print("1Expected 8 args, got "+str(len(sys.argv)-1)+ ". Args: responsecode baseurl num_rics uuid startid count pids pid_id")
+ if len(sys.argv) != 10:
+ print("1Expected 9 args, got "+str(len(sys.argv)-1)+ ". Args: responsecode baseurl num_rics uuid startid count pids pid_id proxy")
sys.exit()
responsecode=int(sys.argv[1])
@@ -45,7 +45,14 @@
count=int(sys.argv[6])
pids=int(sys.argv[7])
pid_id=int(sys.argv[8])
+ httpproxy=str(sys.argv[9])
+ proxydict=None
+ if httpproxy != "NOPROXY":
+ proxydict = {
+ "http" : httpproxy,
+ "https" : httpproxy
+ }
if uuid == "NOUUID":
uuid=""
@@ -61,7 +68,10 @@
else:
url=str(baseurl+"?id="+uuid+str(i))
try:
- resp=requests.delete(url, verify=False, timeout=90)
+ if proxydict is None:
+ resp=requests.delete(url, verify=False, timeout=90)
+ else:
+ resp=requests.delete(url, verify=False, timeout=90, proxies=proxydict)
except Exception as e1:
print("1Delete failed for id:"+uuid+str(i)+ ", "+str(e1) + " "+traceback.format_exc())
sys.exit()
diff --git a/test/common/ecs_api_functions.sh b/test/common/ecs_api_functions.sh
index 1a6f9ac..d55f439 100644
--- a/test/common/ecs_api_functions.sh
+++ b/test/common/ecs_api_functions.sh
@@ -19,6 +19,65 @@
# This is a script that contains container/service management functions and test functions for ECS
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__ECS_imagesetup() {
+ __check_and_create_image_var ECS "ECS_IMAGE" "ECS_IMAGE_BASE" "ECS_IMAGE_TAG" $1 "$ECS_DISPLAY_NAME"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that does not allow overriding
+# Both var may contain: 'remote', 'remote-remove' or 'local'
+__ECS_imagepull() {
+ __check_and_pull_image $1 "$ECS_DISPLAY_NAME" $ECS_APP_NAME $ECS_IMAGE
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__ECS_imagebuild() {
+ echo -e $RED" Image for app ECS shall never be built"$ERED
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__ECS_image_data() {
+ echo -e "$ECS_DISPLAY_NAME\t$(docker images --format $1 $ECS_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then do no action.
+# This function is called for apps fully managed by the test script
+__ECS_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest ECS
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then do no action.
+# This function is called for prestarted apps not managed by the test script.
+__ECS_kube_scale_zero_and_wait() {
+ __kube_scale_and_wait_all_resources $KUBE_NONRTRIC_NAMESPACE app nonrtric-enrichmentservice
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__ECS_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest ECS
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__ECS_store_docker_logs() {
+ docker logs $ECS_APP_NAME > $1$2_ecs.log 2>&1
+}
+#######################################################
+
+
## Access to ECS
# Host name may be changed if app started by kube
# Direct access
@@ -141,6 +200,10 @@
export ECS_DATA_CONFIGMAP_NAME=$ECS_APP_NAME"-data"
export ECS_CONTAINER_MNT_DIR
+ export ECS_DATA_PV_NAME=$ECS_APP_NAME"-pv"
+ #Create a unique path for the pv each time to prevent a previous volume to be reused
+ export ECS_PV_PATH="ecsdata-"$(date +%s)
+
if [ $1 == "PROXY" ]; then
ECS_HTTP_PROXY_CONFIG_PORT=$HTTP_PROXY_CONFIG_PORT #Set if proxy is started
ECS_HTTP_PROXY_CONFIG_HOST_NAME=$HTTP_PROXY_CONFIG_HOST_NAME #Set if proxy is started
@@ -163,6 +226,11 @@
output_yaml=$PWD/tmp/ecs_cfc.yaml
__kube_create_configmap $ECS_CONFIG_CONFIGMAP_NAME $KUBE_NONRTRIC_NAMESPACE autotest ECS $datafile $output_yaml
+ # Create pv
+ input_yaml=$SIM_GROUP"/"$ECS_COMPOSE_DIR"/"pv.yaml
+ output_yaml=$PWD/tmp/ecs_pv.yaml
+ __kube_create_instance pv $ECS_APP_NAME $input_yaml $output_yaml
+
# Create pvc
input_yaml=$SIM_GROUP"/"$ECS_COMPOSE_DIR"/"pvc.yaml
output_yaml=$PWD/tmp/ecs_pvc.yaml
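The timestamp suffix is what guarantees a clean volume per run; a quick sketch of the value it produces:

```bash
# Illustrative: the host path for the ECS persistent volume gets an epoch-second
# suffix so every kube run writes to a fresh directory instead of reusing old data
ECS_PV_PATH="ecsdata-"$(date +%s)
echo $ECS_PV_PATH        # e.g. ecsdata-1617871234
```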
@@ -235,6 +303,7 @@
export ECS_INTERNAL_SECURE_PORT
export ECS_EXTERNAL_SECURE_PORT
export DOCKER_SIM_NWNAME
+ export ECS_DISPLAY_NAME
if [ $1 == "PROXY" ]; then
ECS_HTTP_PROXY_CONFIG_PORT=$HTTP_PROXY_CONFIG_PORT #Set if proxy is started
@@ -256,7 +325,7 @@
envsubst < $2 > $dest_file
- __start_container $ECS_COMPOSE_DIR NODOCKERARGS 1 $ECS_APP_NAME
+ __start_container $ECS_COMPOSE_DIR "" NODOCKERARGS 1 $ECS_APP_NAME
__check_service_start $ECS_APP_NAME $ECS_PATH$ECS_ALIVE_URL
fi
@@ -324,6 +393,13 @@
return 0
}
+# Check the ecs logs for WARNINGs and ERRORs
+# args: -
+# (Function for test scripts)
+check_ecs_logs() {
+ __check_container_logs "ECS" $ECS_APP_NAME $ECS_LOGPATH WARN ERR
+}
+
# Tests if a variable value in the ECS is equal to a target value and and optional timeout.
# Arg: <variable-name> <target-value> - This test set pass or fail depending on if the variable is
@@ -360,7 +436,7 @@
return 1
fi
else
- echo -e $YELLOW"USING NOT CONFIRMED INTERFACE - FLAT URI STRUCTURE"$EYELLOW
+ echo -e $YELLOW"INTERFACE - FLAT URI STRUCTURE"$EYELLOW
# Valid number of parameters 4,5,6 etc
if [ $# -lt 3 ]; then
__print_err "<response-code> <type-id>|NOTYPE <owner-id>|NOOWNER [ EMPTY | <job-id>+ ]" $@
@@ -543,7 +619,7 @@
fi
fi
else
- echo -e $YELLOW"USING NOT CONFIRMED INTERFACE - FLAT URI STRUCTURE"$EYELLOW
+ echo -e $YELLOW"INTERFACE - FLAT URI STRUCTURE"$EYELLOW
if [ $# -lt 2 ] && [ $# -gt 4 ]; then
__print_err "<response-code> <job-id> [<status> [<timeout>]]" $@
return 1
@@ -618,7 +694,7 @@
fi
query="/A1-EI/v1/eitypes/$2/eijobs/$3"
else
- echo -e $YELLOW"USING NOT CONFIRMED INTERFACE - FLAT URI STRUCTURE"$EYELLOW
+ echo -e $YELLOW"INTERFACE - FLAT URI STRUCTURE"$EYELLOW
if [ $# -ne 2 ] && [ $# -ne 7 ]; then
__print_err "<response-code> <job-id> [<type-id> <target-url> <owner-id> <notification-url> <template-job-file>]" $@
return 1
@@ -694,7 +770,7 @@
query="/A1-EI/v1/eitypes/$2/eijobs/$3"
else
- echo -e $YELLOW"USING NOT CONFIRMED INTERFACE - FLAT URI STRUCTURE"$EYELLOW
+ echo -e $YELLOW"INTERFACE - FLAT URI STRUCTURE"$EYELLOW
if [ $# -ne 2 ]; then
__print_err "<response-code> <job-id>" $@
return 1
@@ -739,7 +815,7 @@
query="/A1-EI/v1/eitypes/$2/eijobs/$3"
else
- echo -e $YELLOW"USING NOT CONFIRMED INTERFACE - FLAT URI STRUCTURE"$EYELLOW
+ echo -e $YELLOW"INTERFACE - FLAT URI STRUCTURE"$EYELLOW
if [ $# -lt 7 ]; then
__print_err "<response-code> <job-id> <type-id> <target-url> <owner-id> <notification-url> <template-job-file>" $@
return 1
diff --git a/test/common/extract_sdnc_reply.py b/test/common/extract_sdnc_reply.py
index 9158edc..5e37aa2 100644
--- a/test/common/extract_sdnc_reply.py
+++ b/test/common/extract_sdnc_reply.py
@@ -20,12 +20,12 @@
import sys
# Extract the response code and optional response message body from an SDNC A1 Controller API reply
-
+# Args: <json-output-key> <file-containing-the-response>
try:
- with open(sys.argv[1]) as json_file:
+ with open(sys.argv[2]) as json_file:
reply = json.load(json_file)
- output=reply['output']
+ output=reply[sys.argv[1]]
status=str(output['http-status'])
while(len(status) < 3):
status="0"+status
diff --git a/test/common/gateway_api_functions.sh b/test/common/gateway_api_functions.sh
new file mode 100644
index 0000000..d8f2cf1
--- /dev/null
+++ b/test/common/gateway_api_functions.sh
@@ -0,0 +1,335 @@
+#!/bin/bash
+
+# ============LICENSE_START===============================================
+# Copyright (C) 2020 Nordix Foundation. All rights reserved.
+# ========================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END=================================================
+#
+
+# This is a script that contains container/service management functions
+# for NonRTRIC Gateway
+
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__NGW_imagesetup() {
+ __check_and_create_image_var NGW "NRT_GATEWAY_IMAGE" "NRT_GATEWAY_IMAGE_BASE" "NRT_GATEWAY_IMAGE_TAG" $1 "$NRT_GATEWAY_DISPLAY_NAME"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that does not allow overriding
+# Both var may contain: 'remote', 'remote-remove' or 'local'
+__NGW_imagepull() {
+ __check_and_pull_image $1 "$NRT_GATEWAY_DISPLAY_NAME" $NRT_GATEWAY_APP_NAME $NRT_GATEWAY_IMAGE
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot,release tags
+__NGW_imagebuild() {
+ echo -e $RED"Image for app NGW shall never be built"$ERED
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__NGW_image_data() {
+ echo -e "$NRT_GATEWAY_DISPLAY_NAME\t$(docker images --format $1 $NRT_GATEWAY_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then do no action.
+# This function is called for apps fully managed by the test script
+__NGW_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest NGW
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then do no action.
+# This function is called for prestarted apps not managed by the test script.
+__NGW_kube_scale_zero_and_wait() {
+ echo -e " NGW replicas kept as is"
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__NGW_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest NGW
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__NGW_store_docker_logs() {
+ docker logs $NRT_GATEWAY_APP_NAME > $1$2_gateway.log 2>&1
+}
+
+#######################################################
+
+## Access to Gateway
+# Host name may be changed if app started by kube
+# Direct access from script
+NGW_HTTPX="http"
+NGW_HOST_NAME=$LOCALHOST_NAME
+NGW_PATH=$NGW_HTTPX"://"$NGW_HOST_NAME":"$NRT_GATEWAY_EXTERNAL_PORT
+# NGW_ADAPTER used to switch between REST and DMAAP (only REST supported currently)
+NGW_ADAPTER_TYPE="REST"
+NGW_ADAPTER=$NGW_PATH
+###########################
+### Gateway functions
+###########################
+
+# Set http as the protocol to use for all communication to the Gateway
+# args: -
+# (Function for test scripts)
+use_gateway_http() {
+ echo -e $BOLD"Gateway, NGW, protocol setting"$EBOLD
+ echo -e " Using $BOLD http $EBOLD towards NGW"
+ NGW_HTTPX="http"
+ NGW_PATH=$NGW_HTTPX"://"$NGW_HOST_NAME":"$NRT_GATEWAY_EXTERNAL_PORT
+ NGW_ADAPTER_TYPE="REST"
+ NGW_ADAPTER=$NGW_PATH
+ echo ""
+}
+
+# Set https as the protocol to use for all communication to the Gateway
+# args: -
+# (Function for test scripts)
+use_gateway_https() {
+ echo -e $BOLD"Gateway, NGW, protocol setting"$EBOLD
+ echo -e " Using $BOLD https $EBOLD towards NGW"
+ NGW_HTTPX="https"
+ NGW_PATH=$NGW_HTTPX"://"$NGW_HOST_NAME":"$NRT_GATEWAY_EXTERNAL_SECURE_PORT
+ NGW_ADAPTER_TYPE="REST"
+ NGW_ADAPTER=$NGW_PATH
+ echo ""
+}
+
+# Turn on debug level tracing in the gateway
+# args: -
+# (Function for test scripts)
+set_gateway_debug() {
+ echo -e $BOLD"Setting gateway debug logging"$EBOLD
+ curlString="$NGW_PATH$NRT_GATEWAY_ACTUATOR -X POST -H Content-Type:application/json -d {\"configuredLevel\":\"debug\"}"
+ result=$(__do_curl "$curlString")
+ if [ $? -ne 0 ]; then
+ __print_err "could not set debug mode" $@
+ ((RES_CONF_FAIL++))
+ return 1
+ fi
+ echo ""
+ return 0
+}
+
+# Turn on trace level tracing in the gateway
+# args: -
+# (Function for test scripts)
+set_gateway_trace() {
+ echo -e $BOLD"Setting gateway trace logging"$EBOLD
+ curlString="$NGW_PATH$NRT_GATEWAY_ACTUATOR -X POST -H Content-Type:application/json -d {\"configuredLevel\":\"trace\"}"
+ result=$(__do_curl "$curlString")
+ if [ $? -ne 0 ]; then
+ __print_err "could not set trace mode" $@
+ ((RES_CONF_FAIL++))
+ return 1
+ fi
+ echo ""
+ return 0
+}
+
+# Start the Gateway container
+# args: -
+# (Function for test scripts)
+start_gateway() {
+
+ echo -e $BOLD"Starting $NRT_GATEWAY_DISPLAY_NAME"$EBOLD
+
+ if [ $RUNMODE == "KUBE" ]; then
+
+ # Check if app shall be fully managed by the test script
+ __check_included_image "NGW"
+ retcode_i=$?
+
+ # Check if app shall only be used by the test script
+ __check_prestarted_image "NGW"
+ retcode_p=$?
+
+ if [ $retcode_i -ne 0 ] && [ $retcode_p -ne 0 ]; then
+ echo -e $RED"The $NRT_GATEWAY_APP_NAME app is not included as managed nor prestarted in this test script"$ERED
+ echo -e $RED"The $NRT_GATEWAY_APP_NAME will not be started"$ERED
+ exit
+ fi
+ if [ $retcode_i -eq 0 ] && [ $retcode_p -eq 0 ]; then
+ echo -e $RED"The $NRT_GATEWAY_APP_NAME app is included both as managed and prestarted in this test script"$ERED
+ echo -e $RED"The $NRT_GATEWAY_APP_NAME will not be started"$ERED
+ exit
+ fi
+
+ # Check if app shall be used - not managed - by the test script
+		if [ $retcode_p -eq 0 ]; then
+ echo -e " Using existing $NRT_GATEWAY_APP_NAME deployment and service"
+ echo " Setting NGW replicas=1"
+ __kube_scale deployment $NRT_GATEWAY_APP_NAME $KUBE_NONRTRIC_NAMESPACE 1
+ fi
+
+ if [ $retcode_i -eq 0 ]; then
+
+			echo -e " Creating $NRT_GATEWAY_APP_NAME app and exposing service"
+
+ #Export all vars needed for service and deployment
+ export NRT_GATEWAY_APP_NAME
+ export KUBE_NONRTRIC_NAMESPACE
+ export NRT_GATEWAY_IMAGE
+ export NRT_GATEWAY_INTERNAL_PORT
+ export NRT_GATEWAY_INTERNAL_SECURE_PORT
+ export NRT_GATEWAY_EXTERNAL_PORT
+ export NRT_GATEWAY_EXTERNAL_SECURE_PORT
+ export NRT_GATEWAY_CONFIG_MOUNT_PATH
+ export NRT_GATEWAY_CONFIG_FILE
+ export NGW_CONFIG_CONFIGMAP_NAME=$NRT_GATEWAY_APP_NAME"-config"
+
+ export POLICY_AGENT_EXTERNAL_SECURE_PORT
+ export ECS_EXTERNAL_SECURE_PORT
+ export POLICY_AGENT_DOMAIN_NAME=$POLICY_AGENT_APP_NAME.$KUBE_NONRTRIC_NAMESPACE
+ export ECS_DOMAIN_NAME=$ECS_APP_NAME.$KUBE_NONRTRIC_NAMESPACE
+
+ #Check if nonrtric namespace exists, if not create it
+ __kube_create_namespace $KUBE_NONRTRIC_NAMESPACE
+
+ # Create config map for config
+ datafile=$PWD/tmp/$NRT_GATEWAY_CONFIG_FILE
+			#Fill in the config template with the current env var values
+ envsubst < $1 > $datafile
+ output_yaml=$PWD/tmp/ngw_cfc.yaml
+ __kube_create_configmap $NGW_CONFIG_CONFIGMAP_NAME $KUBE_NONRTRIC_NAMESPACE autotest NGW $datafile $output_yaml
+
+ # Create service
+ input_yaml=$SIM_GROUP"/"$NRT_GATEWAY_COMPOSE_DIR"/"svc.yaml
+ output_yaml=$PWD/tmp/ngw_svc.yaml
+ __kube_create_instance service $NRT_GATEWAY_APP_NAME $input_yaml $output_yaml
+
+ # Create app
+ input_yaml=$SIM_GROUP"/"$NRT_GATEWAY_COMPOSE_DIR"/"app.yaml
+ output_yaml=$PWD/tmp/ngw_app.yaml
+ __kube_create_instance app $NRT_GATEWAY_APP_NAME $input_yaml $output_yaml
+
+ fi
+
+ echo " Retrieving host and ports for service..."
+ NGW_HOST_NAME=$(__kube_get_service_host $NRT_GATEWAY_APP_NAME $KUBE_NONRTRIC_NAMESPACE)
+
+ NRT_GATEWAY_EXTERNAL_PORT=$(__kube_get_service_port $NRT_GATEWAY_APP_NAME $KUBE_NONRTRIC_NAMESPACE "http")
+ NRT_GATEWAY_EXTERNAL_SECURE_PORT=$(__kube_get_service_port $NRT_GATEWAY_APP_NAME $KUBE_NONRTRIC_NAMESPACE "https")
+
+ echo " Host IP, http port, https port: $NGW_HOST_NAME $NRT_GATEWAY_EXTERNAL_PORT $NRT_GATEWAY_EXTERNAL_SECURE_PORT"
+ if [ $NGW_HTTPX == "http" ]; then
+ NGW_PATH=$NGW_HTTPX"://"$NGW_HOST_NAME":"$NRT_GATEWAY_EXTERNAL_PORT
+ else
+ NGW_PATH=$NGW_HTTPX"://"$NGW_HOST_NAME":"$NRT_GATEWAY_EXTERNAL_SECURE_PORT
+ fi
+
+ __check_service_start $NRT_GATEWAY_APP_NAME $NGW_PATH$NRT_GATEWAY_ALIVE_URL
+
+ # Update the curl adapter if set to rest, no change if type dmaap
+ if [ $NGW_ADAPTER_TYPE == "REST" ]; then
+ NGW_ADAPTER=$NGW_PATH
+ fi
+ else
+ # Check if docker app shall be fully managed by the test script
+ __check_included_image 'NGW'
+ if [ $? -eq 1 ]; then
+ echo -e $RED"The Gateway app is not included in this test script"$ERED
+ echo -e $RED"The Gateway will not be started"$ERED
+ exit
+ fi
+
+ # Export needed vars for docker compose
+ export NRT_GATEWAY_APP_NAME
+ export NRT_GATEWAY_INTERNAL_PORT
+ export NRT_GATEWAY_EXTERNAL_PORT
+ #export NRT_GATEWAY_INTERNAL_SECURE_PORT
+ #export NRT_GATEWAY_EXTERNAL_SECURE_PORT
+
+ export DOCKER_SIM_NWNAME
+ export NRT_GATEWAY_HOST_MNT_DIR
+ export NRT_GATEWAY_CONFIG_FILE
+ export NRT_GATEWAY_CONFIG_MOUNT_PATH
+ export NRT_GATEWAY_COMPOSE_DIR
+
+ export POLICY_AGENT_DOMAIN_NAME=$POLICY_AGENT_APP_NAME
+ export POLICY_AGENT_EXTERNAL_SECURE_PORT
+ export ECS_DOMAIN_NAME=$ECS_APP_NAME
+ export ECS_EXTERNAL_SECURE_PORT
+
+ export NRT_GATEWAY_DISPLAY_NAME
+
+ dest_file=$SIM_GROUP/$NRT_GATEWAY_COMPOSE_DIR/$NRT_GATEWAY_HOST_MNT_DIR/$NRT_GATEWAY_CONFIG_FILE
+
+ envsubst < $1 > $dest_file
+
+ __start_container $NRT_GATEWAY_COMPOSE_DIR "" NODOCKERARGS 1 $NRT_GATEWAY_APP_NAME
+
+ __check_service_start $NRT_GATEWAY_APP_NAME $NGW_PATH$NRT_GATEWAY_ALIVE_URL
+ fi
+ echo ""
+}
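+
+# Usage sketch: test cases call start_gateway with a gateway config template as $1,
+#   start_gateway <path-to-gateway-config-template>     # path is illustrative
+# The template is passed through envsubst, so placeholders such as
+# ${POLICY_AGENT_DOMAIN_NAME} and ${ECS_DOMAIN_NAME} are replaced with the
+# values exported above before the file is used by the container.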
+
+
+# API Test function: V2 GET /status towards PMS
+# args: <response-code>
+# (Function for test scripts)
+gateway_pms_get_status() {
+ __log_test_start $@
+ if [ $# -ne 1 ]; then
+ __print_err "<response-code>" $@
+ return 1
+ fi
+ query=$PMS_API_PREFIX"/v2/status"
+ res="$(__do_curl_to_api NGW GET $query)"
+ status=${res:${#res}-3}
+
+ if [ $status -ne $1 ]; then
+ __log_test_fail_status_code $1 $status
+ return 1
+ fi
+
+ __log_test_pass
+ return 0
+}
+
+# API Test function: GET /ei-producer/v1/eitypes towards ECS
+# Note: This is just to test service response
+# args: <response-code>
+# (Function for test scripts)
+gateway_ecs_get_types() {
+ __log_test_start $@
+ if [ $# -ne 1 ]; then
+ __print_err "<response-code>" $@
+ return 1
+ fi
+ query="/ei-producer/v1/eitypes"
+ res="$(__do_curl_to_api NGW GET $query)"
+ status=${res:${#res}-3}
+
+ if [ $status -ne $1 ]; then
+ __log_test_fail_status_code $1 $status
+ return 1
+ fi
+
+ __log_test_pass
+ return 0
+}
\ No newline at end of file
diff --git a/test/common/http_proxy_api_functions.sh b/test/common/http_proxy_api_functions.sh
index 68df929..02ccb92 100644
--- a/test/common/http_proxy_api_functions.sh
+++ b/test/common/http_proxy_api_functions.sh
@@ -19,6 +19,66 @@
 # This is a script that contains container/service management functions for Http Proxy
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__HTTPPROXY_imagesetup() {
+ __check_and_create_image_var HTTPPROXY "HTTP_PROXY_IMAGE" "HTTP_PROXY_IMAGE_BASE" "HTTP_PROXY_IMAGE_TAG" REMOTE_PROXY "$HTTP_PROXY_DISPLAY_NAME"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that do not allow overriding
+# Both args may contain: 'remote', 'remote-remove' or 'local'
+__HTTPPROXY_imagepull() {
+ __check_and_pull_image $2 "$HTTP_PROXY_DISPLAY_NAME" $HTTP_PROXY_APP_NAME $HTTP_PROXY_IMAGE
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__HTTPPROXY_imagebuild() {
+ echo -e $RED"Image for app HTTPPROXY shall never be built"$ERED
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__HTTPPROXY_image_data() {
+ echo -e "$HTTP_PROXY_DISPLAY_NAME\t$(docker images --format $1 $HTTP_PROXY_IMAGE)" >> $2
+}
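+
+# Example (sketch): the format string passed as $1 is a normal 'docker images' Go
+# template, e.g. "{{.Repository}}:{{.Tag}}" would append a line like
+#   Http Proxy<tab>mitmproxy/mitmproxy:6.0.2
+# to the file given in $2 (image name and tag shown for illustration only).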
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then take no action.
+# This function is called for apps fully managed by the test script
+__HTTPPROXY_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_SIM_NAMESPACE autotest HTTPPROXY
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then take no action.
+# This function is called for prestarted apps not managed by the test script.
+__HTTPPROXY_kube_scale_zero_and_wait() {
+	echo -e $RED" Http proxy replicas kept as is"$ERED
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__HTTPPROXY_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_SIM_NAMESPACE autotest HTTPPROXY
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__HTTPPROXY_store_docker_logs() {
+ docker logs $HTTP_PROXY_APP_NAME > $1$2_httpproxy.log 2>&1
+}
+
+#######################################################
+
+
## Access to Http Proxy Receiver
# Host name may be changed if app started by kube
# Direct access from script
@@ -116,7 +176,9 @@
export HTTP_PROXY_WEB_INTERNAL_PORT
export DOCKER_SIM_NWNAME
- __start_container $HTTP_PROXY_COMPOSE_DIR NODOCKERARGS 1 $HTTP_PROXY_APP_NAME
+ export HTTP_PROXY_DISPLAY_NAME
+
+ __start_container $HTTP_PROXY_COMPOSE_DIR "" NODOCKERARGS 1 $HTTP_PROXY_APP_NAME
__check_service_start $HTTP_PROXY_APP_NAME $HTTP_PROXY_PATH$HTTP_PROXY_ALIVE_URL
diff --git a/test/common/kube_proxy_api_functions.sh b/test/common/kube_proxy_api_functions.sh
new file mode 100644
index 0000000..fc9764d
--- /dev/null
+++ b/test/common/kube_proxy_api_functions.sh
@@ -0,0 +1,181 @@
+#!/bin/bash
+
+# ============LICENSE_START===============================================
+# Copyright (C) 2020 Nordix Foundation. All rights reserved.
+# ========================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END=================================================
+#
+
+# This is a script that contains container/service management functions for Kube Http Proxy
+# This http proxy provides the test script with full access to all addressable kube objects in a cluster
+
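+# In practice the proxy is consumed by the curl based test functions: when
+# CLUSTER_KUBE_PROXY_NODEPORT is set (see start_kube_proxy below) they add
+#   --proxy http://localhost:$CLUSTER_KUBE_PROXY_NODEPORT
+# to each curl command, so addresses that are only reachable inside the cluster
+# can be reached from the test script.
+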
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__KUBEPROXY_imagesetup() {
+ __check_and_create_image_var KUBEPROXY "KUBE_PROXY_IMAGE" "KUBE_PROXY_IMAGE_BASE" "KUBE_PROXY_IMAGE_TAG" REMOTE_PROXY "$KUBE_PROXY_DISPLAY_NAME"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that do not allow overriding
+# Both args may contain: 'remote', 'remote-remove' or 'local'
+__KUBEPROXY_imagepull() {
+ __check_and_pull_image $2 "$KUBE_PROXY_DISPLAY_NAME" $KUBE_PROXY_APP_NAME $KUBE_PROXY_IMAGE
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__KUBEPROXY_imagebuild() {
+ echo -e $RED"Image for app KUBEPROXY shall never be built"$ERED
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__KUBEPROXY_image_data() {
+ echo -e "$KUBE_PROXY_DISPLAY_NAME\t$(docker images --format $1 $KUBE_PROXY_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then take no action.
+# This function is called for apps fully managed by the test script
+__KUBEPROXY_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_SIM_NAMESPACE autotest KUBEPROXY
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then take no action.
+# This function is called for prestarted apps not managed by the test script.
+__KUBEPROXY_kube_scale_zero_and_wait() {
+	echo -e $RED" Kube proxy replicas kept as is"$ERED
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__KUBEPROXY_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_SIM_NAMESPACE autotest KUBEPROXY
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__KUBEPROXY_store_docker_logs() {
+ docker logs $KUBE_PROXY_APP_NAME > $1$2_kubeproxy.log 2>&1
+}
+
+#######################################################
+
+
+## Access to Kube Http Proxy
+# Host name may be changed if app started by kube
+# Direct access from script
+KUBE_PROXY_HTTPX="http"
+KUBE_PROXY_HOST_NAME=$LOCALHOST_NAME
+KUBE_PROXY_PATH=$KUBE_PROXY_HTTPX"://"$KUBE_PROXY_HOST_NAME":"$KUBE_PROXY_WEB_EXTERNAL_PORT
+
+#########################
+### Http Proxy functions
+#########################
+
+# Start the Kube Http Proxy in the simulator group
+# args: -
+# (Function for test scripts)
+start_kube_proxy() {
+
+ echo -e $BOLD"Starting $KUBE_PROXY_DISPLAY_NAME"$EBOLD
+
+ if [ $RUNMODE == "KUBE" ]; then
+
+ # Check if app shall be fully managed by the test script
+ __check_included_image "KUBEPROXY"
+ retcode_i=$?
+
+		# Check if app shall only be used by the test script
+ __check_prestarted_image "KUBEPROXY"
+ retcode_p=$?
+
+ if [ $retcode_i -ne 0 ] && [ $retcode_p -ne 0 ]; then
+			echo -e $RED"The $KUBE_PROXY_APP_NAME app is neither included as managed nor prestarted in this test script"$ERED
+ echo -e $RED"The $KUBE_PROXY_APP_NAME will not be started"$ERED
+ exit
+ fi
+ if [ $retcode_i -eq 0 ] && [ $retcode_p -eq 0 ]; then
+ echo -e $RED"The $KUBE_PROXY_APP_NAME app is included both as managed and prestarted in this test script"$ERED
+ echo -e $RED"The $KUBE_PROXY_APP_NAME will not be started"$ERED
+ exit
+ fi
+
+ # Check if app shall be used - not managed - by the test script
+ if [ $retcode_p -eq 0 ]; then
+ echo -e " Using existing $KUBE_PROXY_APP_NAME deployment and service"
+ echo " Setting KUBEPROXY replicas=1"
+ __kube_scale deployment $KUBE_PROXY_APP_NAME $KUBE_SIM_NAMESPACE 1
+ fi
+
+ if [ $retcode_i -eq 0 ]; then
+ echo -e " Creating $KUBE_PROXY_APP_NAME deployment and service"
+ export KUBE_PROXY_APP_NAME
+ export KUBE_PROXY_WEB_EXTERNAL_PORT
+ export KUBE_PROXY_WEB_INTERNAL_PORT
+ export KUBE_PROXY_EXTERNAL_PORT
+ export KUBE_PROXY_INTERNAL_PORT
+ export KUBE_SIM_NAMESPACE
+ export KUBE_PROXY_IMAGE
+
+ __kube_create_namespace $KUBE_SIM_NAMESPACE
+
+ # Create service
+ input_yaml=$SIM_GROUP"/"$KUBE_PROXY_COMPOSE_DIR"/"svc.yaml
+ output_yaml=$PWD/tmp/proxy_svc.yaml
+ __kube_create_instance service $KUBE_PROXY_APP_NAME $input_yaml $output_yaml
+
+ # Create app
+ input_yaml=$SIM_GROUP"/"$KUBE_PROXY_COMPOSE_DIR"/"app.yaml
+ output_yaml=$PWD/tmp/proxy_app.yaml
+ __kube_create_instance app $KUBE_PROXY_APP_NAME $input_yaml $output_yaml
+
+ fi
+
+ echo " Retrieving host and ports for service..."
+ KUBE_PROXY_HOST_NAME=$(__kube_get_service_host $KUBE_PROXY_APP_NAME $KUBE_SIM_NAMESPACE)
+ KUBE_PROXY_WEB_EXTERNAL_PORT=$(__kube_get_service_port $KUBE_PROXY_APP_NAME $KUBE_SIM_NAMESPACE "web")
+ KUBE_PROXY_EXTERNAL_PORT=$(__kube_get_service_port $KUBE_PROXY_APP_NAME $KUBE_SIM_NAMESPACE "http")
+
+
+ minikube status > /dev/null
+ if [ $? -eq 0 ]; then
+			echo -e $GREEN" Minikube detected - no proxy needed for the test script to access services"$EGREEN
+ export CLUSTER_KUBE_PROXY_NODEPORT=""
+ else
+			echo -e $YELLOW" Running outside the kubernetes cluster. The proxy is set up so the test script can access services"$EYELLOW
+ export CLUSTER_KUBE_PROXY_NODEPORT=$(__kube_get_service_nodeport $KUBE_PROXY_APP_NAME $KUBE_SIM_NAMESPACE "http")
+ fi
+
+ KUBE_PROXY_PATH=$KUBE_PROXY_HTTPX"://"$KUBE_PROXY_HOST_NAME":"$KUBE_PROXY_WEB_EXTERNAL_PORT
+ KUBE_PROXY_CONFIG_PORT=$KUBE_PROXY_EXTERNAL_PORT
+ KUBE_PROXY_CONFIG_HOST_NAME=$KUBE_PROXY_APP_NAME"."$KUBE_SIM_NAMESPACE
+
+ echo " Host IP, http port, cluster http nodeport (may be empty): $KUBE_PROXY_HOST_NAME $KUBE_PROXY_WEB_EXTERNAL_PORT $CLUSTER_KUBE_PROXY_NODEPORT"
+
+ __check_service_start $KUBE_PROXY_APP_NAME $KUBE_PROXY_PATH$KUBE_PROXY_ALIVE_URL
+
+ else
+		echo -e $YELLOW" Kube http proxy not needed in docker test. App not started"$EYELLOW
+ fi
+ echo ""
+}
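+
+# Usage sketch (illustrative): a test case running towards a kube cluster calls
+#   start_kube_proxy
+# before issuing requests to cluster-internal services. After the call,
+# KUBE_PROXY_PATH points at the proxy web port and CLUSTER_KUBE_PROXY_NODEPORT is
+# either empty (minikube detected on the host) or the nodeport that the curl
+# helpers use with '--proxy http://localhost:<nodeport>'.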
+
diff --git a/test/common/mr_api_functions.sh b/test/common/mr_api_functions.sh
index 54809b7..0d44bdf 100644
--- a/test/common/mr_api_functions.sh
+++ b/test/common/mr_api_functions.sh
@@ -20,6 +20,139 @@
# This is a script that contains container/service management function
# and test functions for Message Router - mr stub
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__MR_imagesetup() {
+ __check_and_create_image_var MR "MRSTUB_IMAGE" "MRSTUB_IMAGE_BASE" "MRSTUB_IMAGE_TAG" LOCAL "$MR_STUB_DISPLAY_NAME"
+}
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__DMAAPMR_imagesetup() {
+ __check_and_create_image_var DMAAPMR "ONAP_DMAAPMR_IMAGE" "ONAP_DMAAPMR_IMAGE_BASE" "ONAP_DMAAPMR_IMAGE_TAG" REMOTE_RELEASE_ONAP "DMAAP Message Router"
+ __check_and_create_image_var DMAAPMR "ONAP_ZOOKEEPER_IMAGE" "ONAP_ZOOKEEPER_IMAGE_BASE" "ONAP_ZOOKEEPER_IMAGE_TAG" REMOTE_RELEASE_ONAP "ZooKeeper"
+ __check_and_create_image_var DMAAPMR "ONAP_KAFKA_IMAGE" "ONAP_KAFKA_IMAGE_BASE" "ONAP_KAFKA_IMAGE_TAG" REMOTE_RELEASE_ONAP "Kafka"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that do not allow overriding
+# Both args may contain: 'remote', 'remote-remove' or 'local'
+__MR_imagepull() {
+	echo -e $RED"Image for app MR shall never be pulled from remote repo"$ERED
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released (remote) images
+# <pull-policy-original> Shall be used for images that do not allow overriding
+# Both args may contain: 'remote', 'remote-remove' or 'local'
+__DMAAPMR_imagepull() {
+ __check_and_pull_image $2 "DMAAP Message Router" $MR_DMAAP_APP_NAME $ONAP_DMAAPMR_IMAGE
+ __check_and_pull_image $2 "ZooKeeper" $MR_ZOOKEEPER_APP_NAME $ONAP_ZOOKEEPER_IMAGE
+ __check_and_pull_image $2 "Kafka" $MR_KAFKA_APP_NAME $ONAP_KAFKA_IMAGE
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__MR_imagebuild() {
+ cd ../mrstub
+ echo " Building MR - $MR_STUB_DISPLAY_NAME - image: $MRSTUB_IMAGE"
+ docker build --build-arg NEXUS_PROXY_REPO=$NEXUS_PROXY_REPO -t $MRSTUB_IMAGE . &> .dockererr
+ if [ $? -eq 0 ]; then
+ echo -e $GREEN" Build Ok"$EGREEN
+ else
+ echo -e $RED" Build Failed"$ERED
+ ((RES_CONF_FAIL++))
+ cat .dockererr
+ echo -e $RED"Exiting...."$ERED
+ exit 1
+ fi
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__DMAAPMR_imagebuild() {
+ echo -e $RED"Image for app DMAAPMR shall never be built"$ERED
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__MR_image_data() {
+ echo -e "$MR_STUB_DISPLAY_NAME\t$(docker images --format $1 $MRSTUB_IMAGE)" >> $2
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__DMAAPMR_image_data() {
+ echo -e "DMAAP Message Router\t$(docker images --format $1 $ONAP_DMAAPMR_IMAGE)" >> $2
+ echo -e "ZooKeeper\t$(docker images --format $1 $ONAP_ZOOKEEPER_IMAGE)" >> $2
+ echo -e "Kafka\t$(docker images --format $1 $ONAP_KAFKA_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then take no action.
+# This function is called for apps fully managed by the test script
+__MR_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_ONAP_NAMESPACE autotest MR
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then take no action.
+# This function is called for apps fully managed by the test script
+__DMAAPMR_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_ONAP_NAMESPACE autotest DMAAPMR
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then take no action.
+# This function is called for prestarted apps not managed by the test script.
+__MR_kube_scale_zero_and_wait() {
+ echo -e " MR replicas kept as is"
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then take no action.
+# This function is called for prestarted apps not managed by the test script.
+__DMAAPMR_kube_scale_zero_and_wait() {
+ echo -e " DMAAP replicas kept as is"
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__MR_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_ONAP_NAMESPACE autotest MR
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__DMAAPMR_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_ONAP_NAMESPACE autotest DMAAPMR
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__MR_store_docker_logs() {
+ docker logs $MR_STUB_APP_NAME > $1$2_mr_stub.log 2>&1
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__DMAAPMR_store_docker_logs() {
+	docker logs $MR_DMAAP_APP_NAME > $1$2_mr.log 2>&1
+ docker logs $MR_KAFKA_APP_NAME > $1$2_mr_kafka.log 2>&1
+ docker logs $MR_ZOOKEEPER_APP_NAME > $1$2_mr_zookeeper.log 2>&1
+}
+
+#######################################################
+
## Access to Message Router
# Host name may be changed if app started by kube
# Direct access from script
@@ -420,7 +553,7 @@
export MR_INTERNAL_SECURE_PORT
if [ $retcode_dmaapmr -eq 0 ]; then
- __start_container $MR_DMAAP_COMPOSE_DIR NODOCKERARGS 1 $MR_DMAAP_APP_NAME
+ __start_container $MR_DMAAP_COMPOSE_DIR "" NODOCKERARGS 1 $MR_DMAAP_APP_NAME
__check_service_start $MR_DMAAP_APP_NAME $MR_DMAAP_PATH$MR_DMAAP_ALIVE_URL
@@ -447,9 +580,10 @@
export MR_STUB_LOCALHOST_PORT
export MR_STUB_LOCALHOST_SECURE_PORT
export MR_STUB_CERT_MOUNT_DIR
+ export MR_STUB_DISPLAY_NAME
if [ $retcode_mr -eq 0 ]; then
- __start_container $MR_STUB_COMPOSE_DIR NODOCKERARGS 1 $MR_STUB_APP_NAME
+ __start_container $MR_STUB_COMPOSE_DIR "" NODOCKERARGS 1 $MR_STUB_APP_NAME
__check_service_start $MR_STUB_APP_NAME $MR_STUB_PATH$MR_STUB_ALIVE_URL
fi
diff --git a/test/common/prodstub_api_functions.sh b/test/common/prodstub_api_functions.sh
index 098e58f..4d88659 100644
--- a/test/common/prodstub_api_functions.sh
+++ b/test/common/prodstub_api_functions.sh
@@ -19,6 +19,77 @@
# This is a script that contains container/service management functions and test functions for Producer stub
+
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__PRODSTUB_imagesetup() {
+ __check_and_create_image_var PRODSTUB "PROD_STUB_IMAGE" "PROD_STUB_IMAGE_BASE" "PROD_STUB_IMAGE_TAG" LOCAL "$PROD_STUB_DISPLAY_NAME"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that do not allow overriding
+# Both args may contain: 'remote', 'remote-remove' or 'local'
+__PRODSTUB_imagepull() {
+	echo -e $RED"Image for app PRODSTUB shall never be pulled from remote repo"$ERED
+}
+
+# Build image (only for simulator or interfaces stubs owned by the test environment)
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__PRODSTUB_imagebuild() {
+ cd ../prodstub
+ echo " Building PRODSTUB - $PROD_STUB_DISPLAY_NAME - image: $PROD_STUB_IMAGE"
+ docker build --build-arg NEXUS_PROXY_REPO=$NEXUS_PROXY_REPO -t $PROD_STUB_IMAGE . &> .dockererr
+ if [ $? -eq 0 ]; then
+ echo -e $GREEN" Build Ok"$EGREEN
+ else
+ echo -e $RED" Build Failed"$ERED
+ ((RES_CONF_FAIL++))
+ cat .dockererr
+ echo -e $RED"Exiting...."$ERED
+ exit 1
+ fi
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__PRODSTUB_image_data() {
+ echo -e "$PROD_STUB_DISPLAY_NAME\t$(docker images --format $1 $PROD_STUB_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then take no action.
+# This function is called for apps fully managed by the test script
+__PRODSTUB_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_SIM_NAMESPACE autotest PRODSTUB
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then take no action.
+# This function is called for prestarted apps not managed by the test script.
+__PRODSTUB_kube_scale_zero_and_wait() {
+ echo -e $RED" PRODSTUB app is not scaled in this state"$ERED
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__PRODSTUB_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_SIM_NAMESPACE autotest PRODSTUB
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__PRODSTUB_store_docker_logs() {
+ docker logs $PROD_STUB_APP_NAME > $1$2_prodstub.log 2>&1
+}
+#######################################################
+
+
## Access to Prod stub sim
# Direct access
PROD_STUB_HTTPX="http"
@@ -173,7 +244,9 @@
export PROD_STUB_EXTERNAL_SECURE_PORT
export DOCKER_SIM_NWNAME
- __start_container $PROD_STUB_COMPOSE_DIR NODOCKERARGS 1 $PROD_STUB_APP_NAME
+ export PROD_STUB_DISPLAY_NAME
+
+ __start_container $PROD_STUB_COMPOSE_DIR "" NODOCKERARGS 1 $PROD_STUB_APP_NAME
__check_service_start $PROD_STUB_APP_NAME $PROD_STUB_PATH$PROD_STUB_ALIVE_URL
fi
@@ -186,8 +259,14 @@
__execute_curl_to_prodstub() {
TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")
echo "(${BASH_LINENO[0]}) - ${TIMESTAMP}: ${FUNCNAME[0]}" $@ >> $HTTPLOG
- echo " CMD: $3" >> $HTTPLOG
- res="$($3)"
+ proxyflag=""
+ if [ $RUNMODE == "KUBE" ]; then
+ if [ ! -z "$CLUSTER_KUBE_PROXY_NODEPORT" ]; then
+ proxyflag=" --proxy http://localhost:$CLUSTER_KUBE_PROXY_NODEPORT"
+ fi
+ fi
+ echo " CMD: $3 $proxyflag" >> $HTTPLOG
+ res="$($3 $proxyflag)"
echo " RESP: $res" >> $HTTPLOG
retcode=$?
if [ $retcode -ne 0 ]; then
diff --git a/test/common/rapp_catalogue_api_functions.sh b/test/common/rapp_catalogue_api_functions.sh
index 777b9d3..10f64e6 100644
--- a/test/common/rapp_catalogue_api_functions.sh
+++ b/test/common/rapp_catalogue_api_functions.sh
@@ -19,6 +19,59 @@
 # This is a script that contains container/service management functions and test functions for RAPP Catalogue API
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: [<image-tag-suffix>] (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__RC_imagesetup() {
+ __check_and_create_image_var RC "RAPP_CAT_IMAGE" "RAPP_CAT_IMAGE_BASE" "RAPP_CAT_IMAGE_TAG" $1 "$RAPP_CAT_DISPLAY_NAME"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that do not allow overriding
+# Both args may contain: 'remote', 'remote-remove' or 'local'
+__RC_imagepull() {
+	__check_and_pull_image $1 "$RAPP_CAT_DISPLAY_NAME" $RAPP_CAT_APP_NAME $RAPP_CAT_IMAGE
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__RC_image_data() {
+ echo -e "$RAPP_CAT_DISPLAY_NAME\t$(docker images --format $1 $RAPP_CAT_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then take no action.
+# This function is called for apps fully managed by the test script
+__RC_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest RC
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then take no action.
+# This function is called for prestarted apps not managed by the test script.
+__RC_kube_scale_zero_and_wait() {
+ __kube_scale_and_wait_all_resources $KUBE_NONRTRIC_NAMESPACE app nonrtric-rappcatalogueservice
+ __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest RC
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__RC_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest RC
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__RC_store_docker_logs() {
+ docker logs $RAPP_CAT_APP_NAME > $1$2_rc.log 2>&1
+}
+
+#######################################################
+
## Access to RAPP Catalogue
# Host name may be changed if app started by kube
# Direct access from script
@@ -154,7 +207,9 @@
export RAPP_CAT_EXTERNAL_SECURE_PORT
export DOCKER_SIM_NWNAME
- __start_container $RAPP_CAT_COMPOSE_DIR NODOCKERARGS 1 $RAPP_CAT_APP_NAME
+ export RAPP_CAT_DISPLAY_NAME
+
+ __start_container $RAPP_CAT_COMPOSE_DIR "" NODOCKERARGS 1 $RAPP_CAT_APP_NAME
__check_service_start $RAPP_CAT_APP_NAME $RC_PATH$RAPP_CAT_ALIVE_URL
fi
diff --git a/test/common/ricsimulator_api_functions.sh b/test/common/ricsimulator_api_functions.sh
index bb057ce..c82be76 100644
--- a/test/common/ricsimulator_api_functions.sh
+++ b/test/common/ricsimulator_api_functions.sh
@@ -19,6 +19,62 @@
# This is a script that contains container/service management functions and test functions for RICSIM A1 simulators
+################ Test engine functions ################
+
+# Create the image var used during the test
+# arg: <image-tag-suffix> (selects staging, snapshot, release etc)
+# <image-tag-suffix> is present only for images with staging, snapshot, release tags
+__RICSIM_imagesetup() {
+ __check_and_create_image_var RICSIM "RIC_SIM_IMAGE" "RIC_SIM_IMAGE_BASE" "RIC_SIM_IMAGE_TAG" $1 "$RIC_SIM_DISPLAY_NAME"
+}
+
+# Pull image from remote repo or use locally built image
+# arg: <pull-policy-override> <pull-policy-original>
+# <pull-policy-override> Shall be used for images allowing overriding. For example use a local image when test is started to use released images
+# <pull-policy-original> Shall be used for images that do not allow overriding
+# Both args may contain: 'remote', 'remote-remove' or 'local'
+__RICSIM_imagepull() {
+ __check_and_pull_image $1 "$RIC_SIM_DISPLAY_NAME" $RIC_SIM_PREFIX"_"$RIC_SIM_BASE $RIC_SIM_IMAGE
+}
+
+# Generate a string for each included image using the app display name and a docker images format string
+# arg: <docker-images-format-string> <file-to-append>
+__RICSIM_image_data() {
+ echo -e "$RIC_SIM_DISPLAY_NAME\t$(docker images --format $1 $RIC_SIM_IMAGE)" >> $2
+}
+
+# Scale kubernetes resources to zero
+# All resources shall be ordered to be scaled to 0, if relevant. If not relevant to scale, then take no action.
+# This function is called for apps fully managed by the test script
+__RICSIM_kube_scale_zero() {
+ __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest RICSIM
+}
+
+# Scale kubernetes resources to zero and wait until this has been accomplished, if relevant. If not relevant to scale, then take no action.
+# This function is called for prestarted apps not managed by the test script.
+__RICSIM_kube_scale_zero_and_wait() {
+ __kube_scale_and_wait_all_resources $KUBE_NONRTRIC_NAMESPACE app nonrtric-a1simulator
+}
+
+# Delete all kube resources for the app
+# This function is called for apps managed by the test script.
+__RICSIM_kube_delete_all() {
+ __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest RICSIM
+}
+
+# Store docker logs
+# This function is called for apps managed by the test script.
+# args: <log-dir> <file-prefix>
+__RICSIM_store_docker_logs() {
+ rics=$(docker ps --filter "name=$RIC_SIM_PREFIX" --filter "network=$DOCKER_SIM_NWNAME" --filter "status=running" --format {{.Names}})
+ for ric in $rics; do
+ docker logs $ric > $1$2_$ric.log 2>&1
+ done
+}
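+
+# Note: the 'docker ps' filters above select only running simulator containers on
+# the test network; '--format {{.Names}}' yields a plain list of container names
+# (e.g. ricsim_g1_1 ricsim_g1_2 - names are illustrative) and one log file per
+# simulator is written to the given log dir.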
+
+#######################################################
+
+
RIC_SIM_HTTPX="http"
RIC_SIM_HOST=$RIC_SIM_HTTPX"://"$LOCALHOST_NAME
RIC_SIM_PORT=$RIC_SIM_INTERNAL_PORT
@@ -190,6 +246,7 @@
export RIC_SIM_INTERNAL_SECURE_PORT
export RIC_SIM_CERT_MOUNT_DIR
export DOCKER_SIM_NWNAME
+ export RIC_SIM_DISPLAY_NAME
docker_args="--scale g1=$G1_COUNT --scale g2=$G2_COUNT --scale g3=$G3_COUNT --scale g4=$G4_COUNT --scale g5=$G5_COUNT"
app_data=""
@@ -200,7 +257,7 @@
let cntr=cntr+1
done
- __start_container $RIC_SIM_COMPOSE_DIR "$docker_args" $2 $app_data
+ __start_container $RIC_SIM_COMPOSE_DIR "" "$docker_args" $2 $app_data
cntr=1
while [ $cntr -le $2 ]; do
@@ -222,14 +279,79 @@
return 0
}
+# Translate ric name to kube host name
+# args: <ric-name>
+# For test scripts
+get_kube_sim_host() {
+ name=$(echo "$1" | tr '_' '-') #kube does not accept underscore in names
+ #example gnb_1_2 -> gnb-1-2
+ set_name=$(echo $name | rev | cut -d- -f2- | rev) # Cut index part of ric name to get the name of statefulset
+ # example gnb-g1-2 -> gnb-g1 where gnb-g1-2 is the ric name and gnb-g1 is the set name
+ echo $name"."$set_name"."$KUBE_NONRTRIC_NAMESPACE
+}
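+
+# Example (from the comments above, ric name illustrative): 'get_kube_sim_host ricsim_g1_2'
+# prints ricsim-g1-2.ricsim-g1.$KUBE_NONRTRIC_NAMESPACE, i.e. <pod>.<statefulset>.<namespace>.
+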
+# Helper function to get the port of a specific ric simulator
+# args: <ric-id>
+# (Not for test scripts)
+__find_sim_port() {
+ name=$1" " #Space appended to prevent matching 10 if 1 is desired....
+ cmdstr="docker inspect --format='{{(index (index .NetworkSettings.Ports \"$RIC_SIM_PORT/tcp\") 0).HostPort}}' ${name}"
+ res=$(eval $cmdstr)
+ if [[ "$res" =~ ^[0-9]+$ ]]; then
+ echo $res
+ else
+ echo "0"
+ fi
+}
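+
+# Example (sketch, container name illustrative): for a container named ricsim_g1_1
+# the function runs, with $RIC_SIM_PORT expanded to the simulator's internal port,
+#   docker inspect --format='{{(index (index .NetworkSettings.Ports "<port>/tcp") 0).HostPort}}' ricsim_g1_1
+# and prints the published host port, or "0" if no numeric port is found.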
+
+# Helper function to get the port and host name of a specific ric simulator
+# args: <ric-id>
+# (Not for test scripts)
+__find_sim_host() {
+ if [ $RUNMODE == "KUBE" ]; then
+ ricname=$(echo "$1" | tr '_' '-')
+ for timeout in {1..60}; do
+ host=$(kubectl get pod $ricname -n $KUBE_NONRTRIC_NAMESPACE -o jsonpath='{.status.podIP}' 2> /dev/null)
+ if [ ! -z "$host" ]; then
+ echo $RIC_SIM_HTTPX"://"$host":"$RIC_SIM_PORT
+ return 0
+ fi
+ sleep 0.5
+ done
+ echo "host-not-found-fatal-error"
+ else
+ name=$1" " #Space appended to prevent matching 10 if 1 is desired....
+ cmdstr="docker inspect --format='{{(index (index .NetworkSettings.Ports \"$RIC_SIM_PORT/tcp\") 0).HostPort}}' ${name}"
+ res=$(eval $cmdstr)
+ if [[ "$res" =~ ^[0-9]+$ ]]; then
+ echo $RIC_SIM_HOST:$res
+ return 0
+ else
+ echo "0"
+ fi
+ fi
+ return 1
+}
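+
+# Example (sketch, pod name illustrative): in kube mode the lookup above is essentially
+#   kubectl get pod ricsim-g1-1 -n $KUBE_NONRTRIC_NAMESPACE -o jsonpath='{.status.podIP}'
+# retried for up to about 30 seconds until the pod has been assigned an IP.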
+
+# Generate a UUID to use as prefix for policy ids
+generate_policy_uuid() {
+ UUID=$(python3 -c 'import sys,uuid; sys.stdout.write(uuid.uuid4().hex)')
+ #Reduce length to make space for serial id, uses 'a' as marker where the serial id is added
+ UUID=${UUID:0:${#UUID}-4}"a"
+}
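+
+# Usage sketch (illustrative): after calling generate_policy_uuid, test scripts can
+# build policy ids by appending a serial number after the trailing 'a' marker,
+# e.g. pid=${UUID}1, pid=${UUID}2, ...
+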
 # Execute a curl cmd towards a ric simulator and check the response code.
# args: <expected-response-code> <curl-cmd-string>
__execute_curl_to_sim() {
echo ${FUNCNAME[1]} "line: "${BASH_LINENO[1]} >> $HTTPLOG
- echo " CMD: $2" >> $HTTPLOG
- res="$($2)"
+ proxyflag=""
+ if [ $RUNMODE == "KUBE" ]; then
+ if [ ! -z "$CLUSTER_KUBE_PROXY_NODEPORT" ]; then
+ proxyflag=" --proxy http://localhost:$CLUSTER_KUBE_PROXY_NODEPORT"
+ fi
+ fi
+ echo " CMD: $2 $proxyflag" >> $HTTPLOG
+ res="$($2 $proxyflag)"
echo " RESP: $res" >> $HTTPLOG
retcode=$?
if [ $retcode -ne 0 ]; then
diff --git a/test/common/test_env-onap-guilin.sh b/test/common/test_env-onap-guilin.sh
index 007d63c..809e1f7 100644
--- a/test/common/test_env-onap-guilin.sh
+++ b/test/common/test_env-onap-guilin.sh
@@ -116,8 +116,7 @@
#Http proxy remote image and tag
HTTP_PROXY_IMAGE_BASE="mitmproxy/mitmproxy"
HTTP_PROXY_IMAGE_TAG_REMOTE_PROXY="6.0.2"
-#No local image for SSDNC DB, remote image always used
-
+#No local image for http proxy, remote image always used
#ONAP Zookeeper remote image and tag
ONAP_ZOOKEEPER_IMAGE_BASE="onap/dmaap/zookeeper"
@@ -134,6 +133,12 @@
ONAP_DMAAPMR_IMAGE_TAG_REMOTE_RELEASE_ONAP="1.1.18"
#No local image for ONAP DMAAP-MR, remote image always used
+#Kube proxy remote image and tag
+KUBE_PROXY_IMAGE_BASE="mitmproxy/mitmproxy"
+KUBE_PROXY_IMAGE_TAG_REMOTE_PROXY="6.0.2"
+#No local image for http proxy, remote image always used
+
+
# List of app short names produced by the project
PROJECT_IMAGES_APP_NAMES="PA SDNC"
@@ -253,8 +258,10 @@
SDNC_API_URL="/restconf/operations/A1-ADAPTER-API:" # Base url path for SNDC API
SDNC_ALIVE_URL="/apidoc/explorer/" # Base url path for SNDC API docs (for alive check)
SDNC_COMPOSE_DIR="sdnc" # Dir in simulator_group for docker-compose
+SDNC_COMPOSE_FILE="docker-compose.yml"
+SDNC_KUBE_APP_FILE="app.yaml"
SDNC_KARAF_LOG="/opt/opendaylight/data/log/karaf.log" # Path to karaf log
-
+SDNC_RESPONSE_JSON_KEY="output" # Key name for output json in replies from sdnc
CONTROL_PANEL_APP_NAME="controlpanel" # Name of the Control Panel container
CONTROL_PANEL_DISPLAY_NAME="Non-RT RIC Control Panel"
@@ -267,6 +274,7 @@
CONTROL_PANEL_COMPOSE_DIR="control_panel" # Dir in simulator_group for docker-compose
CONTROL_PANEL_CONFIG_MOUNT_PATH=/maven # Container internal path for config
CONTROL_PANEL_CONFIG_FILE=application.properties # Config file name
+CONTROL_PANEL_HOST_MNT_DIR="./mnt" # Mounted dir, relative to compose file, on the host
HTTP_PROXY_APP_NAME="httpproxy" # Name of the Http Proxy container
HTTP_PROXY_DISPLAY_NAME="Http Proxy"
@@ -279,6 +287,18 @@
HTTP_PROXY_ALIVE_URL="/" # Base path for alive check
HTTP_PROXY_COMPOSE_DIR="httpproxy" # Dir in simulator_group for docker-compose
+
+KUBE_PROXY_APP_NAME="kubeproxy" # Name of the Kube Http Proxy container
+KUBE_PROXY_DISPLAY_NAME="Kube Http Proxy"
+KUBE_PROXY_EXTERNAL_PORT=8730 # Kube Http Proxy container external port (host -> container)
+KUBE_PROXY_INTERNAL_PORT=8080 # Kube Http Proxy container internal port (container -> container)
+KUBE_PROXY_WEB_EXTERNAL_PORT=8731 # Kube Http Proxy container external port (host -> container)
+KUBE_PROXY_WEB_INTERNAL_PORT=8081 # Kube Http Proxy container internal port (container -> container)
+KUBE_PROXY_CONFIG_PORT=0 # Port number for proxy config, will be set if proxy is started
+KUBE_PROXY_CONFIG_HOST_NAME="" # Proxy host, will be set if proxy is started
+KUBE_PROXY_ALIVE_URL="/" # Base path for alive check
+KUBE_PROXY_COMPOSE_DIR="kubeproxy" # Dir in simulator_group for docker-compose
+
########################################
# Setting for common curl-base function
########################################
diff --git a/test/common/test_env-onap-honolulu.sh b/test/common/test_env-onap-honolulu.sh
index ba79dc4..abbf824 100644
--- a/test/common/test_env-onap-honolulu.sh
+++ b/test/common/test_env-onap-honolulu.sh
@@ -59,24 +59,22 @@
# Policy Agent image and tags
POLICY_AGENT_IMAGE_BASE="onap/ccsdk-oran-a1policymanagementservice"
-POLICY_AGENT_IMAGE_TAG_LOCAL="1.1.1-SNAPSHOT"
-POLICY_AGENT_IMAGE_TAG_REMOTE_SNAPSHOT="1.1.1-SNAPSHOT"
-POLICY_AGENT_IMAGE_TAG_REMOTE="1.1.1-STAGING-latest" #Will use snapshot repo
-POLICY_AGENT_IMAGE_TAG_REMOTE_RELEASE="1.1.1"
-
+POLICY_AGENT_IMAGE_TAG_LOCAL="1.1.2-SNAPSHOT"
+POLICY_AGENT_IMAGE_TAG_REMOTE_SNAPSHOT="1.1.2-SNAPSHOT"
+POLICY_AGENT_IMAGE_TAG_REMOTE="1.1.2-STAGING-latest" #Will use snapshot repo
+POLICY_AGENT_IMAGE_TAG_REMOTE_RELEASE="1.1.2"
# SDNC A1 Controller remote image and tag
SDNC_A1_CONTROLLER_IMAGE_BASE="onap/sdnc-image"
-SDNC_A1_CONTROLLER_IMAGE_TAG_LOCAL="2.1.1-SNAPSHOT" ###CHECK THIS
-SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_SNAPSHOT="2.1.1-STAGING-latest"
-SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE="2.1.1-STAGING-latest" #Will use snapshot repo
-SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_RELEASE="2.1.1"
+SDNC_A1_CONTROLLER_IMAGE_TAG_LOCAL="2.1.3-SNAPSHOT" ###CHECK THIS
+SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_SNAPSHOT="2.1.3-STAGING-latest"
+SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE="2.1.3-STAGING-latest" #Will use snapshot repo
+SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_RELEASE="2.1.3"
#SDNC DB remote image and tag
#The DB is part of SDNC so handled in the same way as SDNC
-SDNC_DB_IMAGE_BASE="mysql/mysql-server"
-SDNC_DB_IMAGE_TAG_REMOTE_PROXY="5.6"
-
+SDNC_DB_IMAGE_BASE="mariadb"
+SDNC_DB_IMAGE_TAG_REMOTE_PROXY="10.5"
# ECS image and tag - uses cherry release
ECS_IMAGE_BASE="o-ran-sc/nonrtric-enrichment-coordinator-service"
@@ -85,7 +83,7 @@
# Control Panel image and tag - uses cherry release
CONTROL_PANEL_IMAGE_BASE="o-ran-sc/nonrtric-controlpanel"
-CONTROL_PANEL_IMAGE_TAG_REMOTE_RELEASE_ORAN="2.1.0"
+CONTROL_PANEL_IMAGE_TAG_REMOTE_RELEASE_ORAN="2.1.1"
# RAPP Catalogue image and tags - uses cherry release
@@ -131,7 +129,7 @@
#Http proxy remote image and tag
HTTP_PROXY_IMAGE_BASE="mitmproxy/mitmproxy"
HTTP_PROXY_IMAGE_TAG_REMOTE_PROXY="6.0.2"
-#No local image for SSDNC DB, remote image always used
+#No local image for http proxy, remote image always used
#ONAP Zookeeper remote image and tag
ONAP_ZOOKEEPER_IMAGE_BASE="onap/dmaap/zookeeper"
@@ -148,6 +146,10 @@
ONAP_DMAAPMR_IMAGE_TAG_REMOTE_RELEASE_ONAP="1.1.18"
#No local image for ONAP DMAAP-MR, remote image always used
+#Kube proxy remote image and tag
+KUBE_PROXY_IMAGE_BASE="mitmproxy/mitmproxy"
+KUBE_PROXY_IMAGE_TAG_REMOTE_PROXY="6.0.2"
+#No local image for http proxy, remote image always used
# List of app short names produced by the project
PROJECT_IMAGES_APP_NAMES="PA SDNC"
@@ -210,7 +212,7 @@
ECS_COMPOSE_DIR="ecs" # Dir in simulator_group for docker-compose
ECS_CONFIG_MOUNT_PATH=/opt/app/enrichment-coordinator-service/config # Internal container path for configuration
ECS_CONFIG_FILE=application.yaml # Config file name
-ECS_VERSION="V1-1" # Version where the types are added in the producer registration
+ECS_VERSION="V1-2" # Version where the types are added in the producer registration
MR_DMAAP_APP_NAME="dmaap-mr" # Name for the Dmaap MR
MR_STUB_APP_NAME="mr-stub" # Name of the MR stub
@@ -295,18 +297,24 @@
SDNC_DB_APP_NAME="sdncdb" # Name of the SDNC DB container
SDNC_A1_TRUSTSTORE_PASSWORD="a1adapter" # SDNC truststore password
SDNC_USER="admin" # SDNC username
+SDNC_PWD="admin" # SNDC PWD
SDNC_PWD="Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U" # SNDC PWD
+#SDNC_API_URL="/rests/operations/A1-ADAPTER-API:" # Base url path for SNDC API (for upgraded sdnc)
SDNC_API_URL="/restconf/operations/A1-ADAPTER-API:" # Base url path for SNDC API
SDNC_ALIVE_URL="/apidoc/explorer/" # Base url path for SNDC API docs (for alive check)
SDNC_COMPOSE_DIR="sdnc"
+SDNC_COMPOSE_FILE="docker-compose-2.yml"
+SDNC_KUBE_APP_FILE="app2.yaml"
SDNC_KARAF_LOG="/opt/opendaylight/data/log/karaf.log" # Path to karaf log
+#SDNC_RESPONSE_JSON_KEY="A1-ADAPTER-API:output" # Key name for output json in replies from sdnc (for upgraded sdnc)
+SDNC_RESPONSE_JSON_KEY="output" # Key name for output json in replies from sdnc
RAPP_CAT_APP_NAME="rappcatalogueservice" # Name for the RAPP Catalogue
RAPP_CAT_DISPLAY_NAME="RAPP Catalogue Service"
RAPP_CAT_EXTERNAL_PORT=8680 # RAPP Catalogue container external port (host -> container)
-RAPP_CAT_INTERNAL_PORT=8080 # RAPP Catalogue container internal port (container -> container)
+RAPP_CAT_INTERNAL_PORT=8680 # RAPP Catalogue container internal port (container -> container)
RAPP_CAT_EXTERNAL_SECURE_PORT=8633 # RAPP Catalogue container external secure port (host -> container)
-RAPP_CAT_INTERNAL_SECURE_PORT=8433 # RAPP Catalogue container internal secure port (container -> container)
+RAPP_CAT_INTERNAL_SECURE_PORT=8633 # RAPP Catalogue container internal secure port (container -> container)
RAPP_CAT_ALIVE_URL="/services" # Base path for alive check
RAPP_CAT_COMPOSE_DIR="rapp_catalogue" # Dir in simulator_group for docker-compose
@@ -321,6 +329,7 @@
CONTROL_PANEL_COMPOSE_DIR="control_panel" # Dir in simulator_group for docker-compose
CONTROL_PANEL_CONFIG_MOUNT_PATH=/maven # Container internal path for config
CONTROL_PANEL_CONFIG_FILE=application.properties # Config file name
+CONTROL_PANEL_HOST_MNT_DIR="./mnt" # Mounted dir, relative to compose file, on the host
HTTP_PROXY_APP_NAME="httpproxy" # Name of the Http Proxy container
HTTP_PROXY_DISPLAY_NAME="Http Proxy"
@@ -333,6 +342,18 @@
HTTP_PROXY_ALIVE_URL="/" # Base path for alive check
HTTP_PROXY_COMPOSE_DIR="httpproxy" # Dir in simulator_group for docker-compose
+
+KUBE_PROXY_APP_NAME="kubeproxy" # Name of the Kube Http Proxy container
+KUBE_PROXY_DISPLAY_NAME="Kube Http Proxy"
+KUBE_PROXY_EXTERNAL_PORT=8730 # Kube Http Proxy container external port (host -> container)
+KUBE_PROXY_INTERNAL_PORT=8080 # Kube Http Proxy container internal port (container -> container)
+KUBE_PROXY_WEB_EXTERNAL_PORT=8731 # Kube Http Proxy container external port (host -> container)
+KUBE_PROXY_WEB_INTERNAL_PORT=8081 # Kube Http Proxy container internal port (container -> container)
+KUBE_PROXY_CONFIG_PORT=0 # Port number for proxy config, will be set if proxy is started
+KUBE_PROXY_CONFIG_HOST_NAME="" # Proxy host, will be set if proxy is started
+KUBE_PROXY_ALIVE_URL="/" # Base path for alive check
+KUBE_PROXY_COMPOSE_DIR="kubeproxy" # Dir in simulator_group for docker-compose
+
########################################
# Setting for common curl-base function
########################################
diff --git a/test/common/test_env-oran-cherry.sh b/test/common/test_env-oran-cherry.sh
index c1e196e..15b5854 100755
--- a/test/common/test_env-oran-cherry.sh
+++ b/test/common/test_env-oran-cherry.sh
@@ -160,6 +160,11 @@
ONAP_DMAAPMR_IMAGE_TAG_REMOTE_RELEASE_ONAP="1.1.18"
#No local image for ONAP DMAAP-MR, remote image always used
+#Kube proxy remote image and tag
+KUBE_PROXY_IMAGE_BASE="mitmproxy/mitmproxy"
+KUBE_PROXY_IMAGE_TAG_REMOTE_PROXY="6.0.2"
+#No local image for http proxy, remote image always used
+
# List of app short names produced by the project
PROJECT_IMAGES_APP_NAMES="PA ECS CP SDNC RC RICSIM"
@@ -310,7 +315,10 @@
SDNC_API_URL="/restconf/operations/A1-ADAPTER-API:" # Base url path for SNDC API
SDNC_ALIVE_URL="/apidoc/explorer/" # Base url path for SNDC API docs (for alive check)
SDNC_COMPOSE_DIR="sdnc" # Dir in simulator_group for docker-compose
+SDNC_COMPOSE_FILE="docker-compose.yml"
+SDNC_KUBE_APP_FILE="app.yaml"
SDNC_KARAF_LOG="/opt/opendaylight/data/log/karaf.log" # Path to karaf log
+SDNC_RESPONSE_JSON_KEY="output" # Key name for output json in replies from sdnc
RAPP_CAT_APP_NAME="rappcatalogueservice" # Name for the RAPP Catalogue
RAPP_CAT_DISPLAY_NAME="RAPP Catalogue Service"
@@ -332,6 +340,7 @@
CONTROL_PANEL_COMPOSE_DIR="control_panel" # Dir in simulator_group for docker-compose
CONTROL_PANEL_CONFIG_MOUNT_PATH=/maven # Container internal path for config
CONTROL_PANEL_CONFIG_FILE=application.properties # Config file name
+CONTROL_PANEL_HOST_MNT_DIR="./mnt" # Mounted dir, relative to compose file, on the host
HTTP_PROXY_APP_NAME="httpproxy" # Name of the Http Proxy container
HTTP_PROXY_DISPLAY_NAME="Http Proxy"
@@ -344,6 +353,18 @@
HTTP_PROXY_ALIVE_URL="/" # Base path for alive check
HTTP_PROXY_COMPOSE_DIR="httpproxy" # Dir in simulator_group for docker-compose
+
+KUBE_PROXY_APP_NAME="kubeproxy" # Name of the Kube Http Proxy container
+KUBE_PROXY_DISPLAY_NAME="Kube Http Proxy"
+KUBE_PROXY_EXTERNAL_PORT=8730 # Kube Http Proxy container external port (host -> container)
+KUBE_PROXY_INTERNAL_PORT=8080 # Kube Http Proxy container internal port (container -> container)
+KUBE_PROXY_WEB_EXTERNAL_PORT=8731 # Kube Http Proxy container external port (host -> container)
+KUBE_PROXY_WEB_INTERNAL_PORT=8081 # Kube Http Proxy container internal port (container -> container)
+KUBE_PROXY_CONFIG_PORT=0 # Port number for proxy config, will be set if proxy is started
+KUBE_PROXY_CONFIG_HOST_NAME="" # Proxy host, will be set if proxy is started
+KUBE_PROXY_ALIVE_URL="/" # Base path for alive check
+KUBE_PROXY_COMPOSE_DIR="kubeproxy" # Dir in simulator_group for docker-compose
+
########################################
# Setting for common curl-base function
########################################
diff --git a/test/common/test_env-oran-dawn.sh b/test/common/test_env-oran-dawn.sh
index f3baa03..96a4186 100755
--- a/test/common/test_env-oran-dawn.sh
+++ b/test/common/test_env-oran-dawn.sh
@@ -73,43 +73,44 @@
ECS_IMAGE_TAG_REMOTE_RELEASE="1.1.0"
-# Control Panel image and tags
-# CONTROL_PANEL_IMAGE_BASE="o-ran-sc/nonrtric-controlpanel"
-# CONTROL_PANEL_IMAGE_TAG_LOCAL="2.2.0-SNAPSHOT"
-# CONTROL_PANEL_IMAGE_TAG_REMOTE_SNAPSHOT="2.2.0-SNAPSHOT"
-# CONTROL_PANEL_IMAGE_TAG_REMOTE="2.2.0"
-# CONTROL_PANEL_IMAGE_TAG_REMOTE_RELEASE="2.2.0"
-
-
-###########################################################
-##### Temporarily using cherry as dawn version is corrupted
-###########################################################
-# Control Panel image and tags
+#Control Panel image and tags
CONTROL_PANEL_IMAGE_BASE="o-ran-sc/nonrtric-controlpanel"
-CONTROL_PANEL_IMAGE_TAG_LOCAL="2.1.0-SNAPSHOT"
-CONTROL_PANEL_IMAGE_TAG_REMOTE_SNAPSHOT="2.1.0-SNAPSHOT"
-CONTROL_PANEL_IMAGE_TAG_REMOTE="2.1.0"
-CONTROL_PANEL_IMAGE_TAG_REMOTE_RELEASE="2.1.0"
+CONTROL_PANEL_IMAGE_TAG_LOCAL="2.2.0-SNAPSHOT"
+CONTROL_PANEL_IMAGE_TAG_REMOTE_SNAPSHOT="2.2.0-SNAPSHOT"
+CONTROL_PANEL_IMAGE_TAG_REMOTE="2.2.0"
+CONTROL_PANEL_IMAGE_TAG_REMOTE_RELEASE="2.2.0"
+# Gateway image and tags
+NRT_GATEWAY_IMAGE_BASE="o-ran-sc/nonrtric-gateway"
+NRT_GATEWAY_IMAGE_TAG_LOCAL="0.0.1-SNAPSHOT"
+NRT_GATEWAY_IMAGE_TAG_REMOTE_SNAPSHOT="0.0.1-SNAPSHOT"
+NRT_GATEWAY_IMAGE_TAG_REMOTE="0.0.1"
+NRT_GATEWAY_IMAGE_TAG_REMOTE_RELEASE="1.0.0"
+
+
+# SDNC A1 Controller image and tags - Note using ONAP image
+SDNC_A1_CONTROLLER_IMAGE_BASE="onap/sdnc-image"
+SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_RELEASE_ONAP="2.1.2"
+#No local image for ONAP SDNC, remote release image always used
+
+# ORAN SDNC adapter kept as reference
# SDNC A1 Controller image and tags - still using cherry version, no new version for dawn
-SDNC_A1_CONTROLLER_IMAGE_BASE="o-ran-sc/nonrtric-a1-controller"
-SDNC_A1_CONTROLLER_IMAGE_TAG_LOCAL="2.0.1-SNAPSHOT"
-SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_SNAPSHOT="2.0.1-SNAPSHOT"
-SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE="2.0.1"
-SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_RELEASE="2.0.1"
-
-# SDNC A1 Controller image and tags - intended versions for dawn - not yet present
-# SDNC_A1_CONTROLLER_IMAGE_BASE="o-ran-sc/nonrtric-a1-controller"
-# SDNC_A1_CONTROLLER_IMAGE_TAG_LOCAL="2.1.0-SNAPSHOT"
-# SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_SNAPSHOT="2.1.0-SNAPSHOT"
-# SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE="2.1.0"
-# SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_RELEASE="2.1.0"
-
+#SDNC_A1_CONTROLLER_IMAGE_BASE="o-ran-sc/nonrtric-a1-controller"
+#SDNC_A1_CONTROLLER_IMAGE_TAG_LOCAL="2.0.1-SNAPSHOT"
+#SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_SNAPSHOT="2.0.1-SNAPSHOT"
+#SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE="2.0.1"
+#SDNC_A1_CONTROLLER_IMAGE_TAG_REMOTE_RELEASE="2.0.1"
#SDNC DB remote image and tag
-SDNC_DB_IMAGE_BASE="mysql/mysql-server"
-SDNC_DB_IMAGE_TAG_REMOTE_PROXY="5.6"
+#The DB is part of SDNC so handled in the same way as SDNC
+SDNC_DB_IMAGE_BASE="mariadb"
+SDNC_DB_IMAGE_TAG_REMOTE_PROXY="10.5"
+
+#Older SDNC db image kept for reference
+#SDNC DB remote image and tag
+#SDNC_DB_IMAGE_BASE="mysql/mysql-server"
+#SDNC_DB_IMAGE_TAG_REMOTE_PROXY="5.6"
#No local image for SSDNC DB, remote image always used
@@ -161,7 +162,7 @@
#Http proxy remote image and tag
HTTP_PROXY_IMAGE_BASE="mitmproxy/mitmproxy"
HTTP_PROXY_IMAGE_TAG_REMOTE_PROXY="6.0.2"
-#No local image for SSDNC DB, remote image always used
+#No local image for http proxy, remote image always used
#ONAP Zookeeper remote image and tag
ONAP_ZOOKEEPER_IMAGE_BASE="onap/dmaap/zookeeper"
@@ -178,14 +179,19 @@
ONAP_DMAAPMR_IMAGE_TAG_REMOTE_RELEASE_ONAP="1.1.18"
#No local image for ONAP DMAAP-MR, remote image always used
+#Kube proxy remote image and tag
+KUBE_PROXY_IMAGE_BASE="mitmproxy/mitmproxy"
+KUBE_PROXY_IMAGE_TAG_REMOTE_PROXY="6.0.2"
+#No local image for http proxy, remote image always used
+
# List of app short names produced by the project
-PROJECT_IMAGES_APP_NAMES="PA ECS CP SDNC RC RICSIM"
+PROJECT_IMAGES_APP_NAMES="PA ECS CP RC RICSIM NGW" # Add SDNC here if oran image is used
# List of app short names which images pulled from ORAN
ORAN_IMAGES_APP_NAMES="" # Not used
# List of app short names which images pulled from ONAP
-ONAP_IMAGES_APP_NAMES="CBS DMAAPMR"
+ONAP_IMAGES_APP_NAMES="CBS DMAAPMR SDNC" # SDNC added as ONAP image
########################################
@@ -266,7 +272,7 @@
CR_APP_NAME="callback-receiver" # Name for the Callback receiver
-CR_DISPLAY_NAME="RAPP Catalogue"
+CR_DISPLAY_NAME="Callback receiver"
CR_EXTERNAL_PORT=8090 # Callback receiver container external port (host -> container)
CR_INTERNAL_PORT=8090 # Callback receiver container internal port (container -> container)
CR_EXTERNAL_SECURE_PORT=8091 # Callback receiver container external secure port (host -> container)
@@ -315,6 +321,26 @@
RIC_SIM_COMPOSE_DIR="ric" # Dir in simulator group for docker compose
RIC_SIM_ALIVE_URL="/" # Base path for alive check
+# Kept as reference for oran a1 adapter
+# SDNC_APP_NAME="a1controller" # Name of the SNDC A1 Controller container
+# SDNC_DISPLAY_NAME="SDNC A1 Controller"
+# SDNC_EXTERNAL_PORT=8282 # SNDC A1 Controller container external port (host -> container)
+# SDNC_INTERNAL_PORT=8181 # SNDC A1 Controller container internal port (container -> container)
+# SDNC_EXTERNAL_SECURE_PORT=8443 # SNDC A1 Controller container external securee port (host -> container)
+# SDNC_INTERNAL_SECURE_PORT=8443 # SNDC A1 Controller container internal secure port (container -> container)
+# SDNC_DB_APP_NAME="sdncdb" # Name of the SDNC DB container
+# SDNC_A1_TRUSTSTORE_PASSWORD="" # SDNC truststore password
+# SDNC_USER="admin" # SDNC username
+# SDNC_PWD="Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U" # SNDC PWD
+# SDNC_API_URL="/restconf/operations/A1-ADAPTER-API:" # Base url path for SNDC API
+# SDNC_ALIVE_URL="/apidoc/explorer/" # Base url path for SNDC API docs (for alive check)
+# SDNC_COMPOSE_DIR="sdnc" # Dir in simulator_group for docker-compose
+# SDNC_COMPOSE_FILE="docker-compose.yml"
+# SDNC_KUBE_APP_FILE="app.yaml"
+# SDNC_KARAF_LOG="/opt/opendaylight/data/log/karaf.log" # Path to karaf log
+# SDNC_RESPONSE_JSON_KEY="output" # Key name for output json in replies from sdnc
+
+# For ONAP sdnc
SDNC_APP_NAME="a1controller" # Name of the SNDC A1 Controller container
SDNC_DISPLAY_NAME="SDNC A1 Controller"
SDNC_EXTERNAL_PORT=8282 # SNDC A1 Controller container external port (host -> container)
@@ -322,13 +348,19 @@
SDNC_EXTERNAL_SECURE_PORT=8443 # SNDC A1 Controller container external securee port (host -> container)
SDNC_INTERNAL_SECURE_PORT=8443 # SNDC A1 Controller container internal secure port (container -> container)
SDNC_DB_APP_NAME="sdncdb" # Name of the SDNC DB container
-SDNC_A1_TRUSTSTORE_PASSWORD="" # SDNC truststore password
+SDNC_A1_TRUSTSTORE_PASSWORD="a1adapter" # SDNC truststore password
SDNC_USER="admin" # SDNC username
+SDNC_PWD="admin" # SNDC PWD
SDNC_PWD="Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U" # SNDC PWD
+#SDNC_API_URL="/rests/operations/A1-ADAPTER-API:" # Base url path for SNDC API (for upgraded sdnc)
SDNC_API_URL="/restconf/operations/A1-ADAPTER-API:" # Base url path for SNDC API
SDNC_ALIVE_URL="/apidoc/explorer/" # Base url path for SNDC API docs (for alive check)
-SDNC_COMPOSE_DIR="sdnc" # Dir in simulator_group for docker-compose
+SDNC_COMPOSE_DIR="sdnc"
+SDNC_COMPOSE_FILE="docker-compose-2.yml"
+SDNC_KUBE_APP_FILE="app2.yaml"
SDNC_KARAF_LOG="/opt/opendaylight/data/log/karaf.log" # Path to karaf log
+#SDNC_RESPONSE_JSON_KEY="A1-ADAPTER-API:output" # Key name for output json in replies from sdnc (for upgraded sdnc)
+SDNC_RESPONSE_JSON_KEY="output" # Key name for output json in replies from sdnc
RAPP_CAT_APP_NAME="rappcatalogueservice" # Name for the RAPP Catalogue
RAPP_CAT_DISPLAY_NAME="RAPP Catalogue"
@@ -345,11 +377,27 @@
CONTROL_PANEL_INTERNAL_PORT=8080 # Control Panel container internal port (container -> container)
CONTROL_PANEL_EXTERNAL_SECURE_PORT=8880 # Control Panel container external port (host -> container)
CONTROL_PANEL_INTERNAL_SECURE_PORT=8082 # Control Panel container internal port (container -> container)
-CONTROL_PANEL_LOGPATH="/logs/nonrtric-controlpanel.log" # Path the application log in the Control Panel container
+CONTROL_PANEL_LOGPATH="/var/log/nonrtric-gateway/application.log" # Path to the application log in the Control Panel container
CONTROL_PANEL_ALIVE_URL="/" # Base path for alive check
CONTROL_PANEL_COMPOSE_DIR="control_panel" # Dir in simulator_group for docker-compose
-CONTROL_PANEL_CONFIG_MOUNT_PATH=/maven # Container internal path for config
-CONTROL_PANEL_CONFIG_FILE=application.properties # Config file name
+CONTROL_PANEL_CONFIG_FILE=nginx.conf # Config file name
+CONTROL_PANEL_HOST_MNT_DIR="./mnt" # Mounted dir, relative to compose file, on the host
+CONTROL_PANEL_CONFIG_MOUNT_PATH=/etc/nginx # Container internal path for config
+
+NRT_GATEWAY_APP_NAME="nonrtricgateway" # Name of the Gateway container
+NRT_GATEWAY_DISPLAY_NAME="NonRT-RIC Gateway"
+NRT_GATEWAY_EXTERNAL_PORT=9090 # Gateway container external port (host -> container)
+NRT_GATEWAY_INTERNAL_PORT=9090 # Gateway container internal port (container -> container)
+NRT_GATEWAY_EXTERNAL_SECURE_PORT=9091 # Gateway container external secure port (host -> container)
+NRT_GATEWAY_INTERNAL_SECURE_PORT=9091 # Gateway container internal secure port (container -> container)
+NRT_GATEWAY_LOGPATH="/var/log/nonrtric-gateway/application.log" # Path to the application log in the Gateway container
+NRT_GATEWAY_HOST_MNT_DIR="./mnt" # Mounted dir, relative to compose file, on the host
+NRT_GATEWAY_ALIVE_URL="/actuator/metrics" # Base path for alive check
+NRT_GATEWAY_COMPOSE_DIR="ngw" # Dir in simulator_group for docker-compose
+NRT_GATEWAY_CONFIG_MOUNT_PATH=/opt/app/nonrtric-gateway/config # Container internal path for config
+NRT_GATEWAY_CONFIG_FILE=application.yaml # Config file name
+NRT_GATEWAY_PKG_NAME="org.springframework.cloud.gateway" # Java base package name
+NRT_GATEWAY_ACTUATOR="/actuator/loggers/$NRT_GATEWAY_PKG_NAME" # Url for trace/debug
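NRT_GATEWAY_ACTUATOR points at the Spring Boot actuator 'loggers' endpoint for the gateway's base package. As a hypothetical illustration only (not part of this change), a test could raise the gateway log level with a call like the one below, assuming the endpoint is reachable on localhost via the external port:

    curl -sX POST "http://localhost:${NRT_GATEWAY_EXTERNAL_PORT}${NRT_GATEWAY_ACTUATOR}" \
         -H "Content-Type: application/json" \
         -d '{"configuredLevel":"TRACE"}'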
HTTP_PROXY_APP_NAME="httpproxy" # Name of the Http Proxy container
HTTP_PROXY_DISPLAY_NAME="Http Proxy"
@@ -362,6 +410,17 @@
HTTP_PROXY_ALIVE_URL="/" # Base path for alive check
HTTP_PROXY_COMPOSE_DIR="httpproxy" # Dir in simulator_group for docker-compose
+KUBE_PROXY_APP_NAME="kubeproxy" # Name of the Kube Http Proxy container
+KUBE_PROXY_DISPLAY_NAME="Kube Http Proxy"
+KUBE_PROXY_EXTERNAL_PORT=8730 # Kube Http Proxy container external port (host -> container)
+KUBE_PROXY_INTERNAL_PORT=8080 # Kube Http Proxy container internal port (container -> container)
+KUBE_PROXY_WEB_EXTERNAL_PORT=8731 # Kube Http Proxy container external web port (host -> container)
+KUBE_PROXY_WEB_INTERNAL_PORT=8081 # Kube Http Proxy container internal web port (container -> container)
+KUBE_PROXY_CONFIG_PORT=0 # Port number for proxy config, will be set if proxy is started
+KUBE_PROXY_CONFIG_HOST_NAME="" # Proxy host, will be set if proxy is started
+KUBE_PROXY_ALIVE_URL="/" # Base path for alive check
+KUBE_PROXY_COMPOSE_DIR="kubeproxy" # Dir in simulator_group for docker-compose
+
########################################
# Setting for common curl-base function
########################################
diff --git a/test/common/testcase_common.sh b/test/common/testcase_common.sh
index adf2ce2..3e1003d 100755
--- a/test/common/testcase_common.sh
+++ b/test/common/testcase_common.sh
@@ -21,9 +21,7 @@
# Specific test function are defined in scripts XXXX_functions.sh
. ../common/api_curl.sh
-
-# List of short names for all supported apps, including simulators etc
-APP_SHORT_NAMES="PA RICSIM SDNC CP ECS RC CBS CONSUL RC MR DMAAPMR CR PRODSTUB"
+. ../common/testengine_config.sh
__print_args() {
echo "Args: remote|remote-remove docker|kube --env-file <environment-filename> [release] [auto-clean] [--stop-at-error] "
@@ -58,6 +56,7 @@
exit 0
fi
+AUTOTEST_HOME=$PWD
# Create a test case id, ATC (Auto Test Case), from the name of the test case script.
# FTC1.sh -> ATC == FTC1
ATC=$(basename "${BASH_SOURCE[$i+1]}" .sh)
@@ -105,8 +104,6 @@
# Var to hold the app names to use remote release images for
USE_RELEASE_IMAGES=""
-# List of available apps to override with local or remote staging/snapshot/release image
-AVAILABLE_IMAGES_OVERRIDE="PA ECS CP SDNC RICSIM RC"
# Use this var (STOP_AT_ERROR=1 in the test script) for debugging/trouble shooting to take all logs and exit at first FAIL test case
STOP_AT_ERROR=0
@@ -143,9 +140,21 @@
# Create the tmp dir for temporary files that is not needed after the test
# hidden files for the test env is still stored in the current dir
+# files in the ./tmp are moved to ./tmp/prev when a new test is started
if [ ! -d "tmp" ]; then
mkdir tmp
fi
+curdir=$PWD
+cd tmp
+if [ $? -ne 0 ]; then
+ echo "Cannot cd to $PWD/tmp"
+ echo "Dir cannot be created. Exiting...."
+fi
+if [ ! -d "prev" ]; then
+ mkdir prev
+fi
+cd $curdir
+mv ./tmp/* ./tmp/prev 2> /dev/null
# Create a http message log for this testcase
HTTPLOG=$PWD"/.httplog_"$ATC".txt"
@@ -300,6 +309,7 @@
echo "-------------------------------------------------------------------------------------------------"
echo "----------------------------------- Test case setup -----------------------------------"
+echo "Setting AUTOTEST_HOME="$AUTOTEST_HOME
START_ARG=$1
paramerror=0
paramerror_str=""
@@ -536,7 +546,7 @@
echo -e $RED"Selected env var file does not exist: "$TEST_ENV_VAR_FILE$ERED
echo " Select one of following env var file matching the intended target of the test"
echo " Restart the test using the flag '--env-file <path-to-env-file>"
- ls ../common/test_env* | indent1
+ ls $AUTOTEST_HOME/../common/test_env* | indent1
exit 1
fi
@@ -597,29 +607,41 @@
IMAGE_ERR=0
#Create a file with image info for later printing as a table
image_list_file="./tmp/.image-list"
-echo -e " Container\tImage\ttag\ttag-switch" > $image_list_file
+echo -e "Application\tApp short name\tImage\ttag\ttag-switch" > $image_list_file
# Check if image env var is set and if so export the env var with image to use (used by docker compose files)
-# arg: <image name> <target-variable-name> <image-variable-name> <image-tag-variable-name> <tag-suffix> <app-short-name>
+# arg: <app-short-name> <target-variable-name> <image-variable-name> <image-tag-variable-name> <tag-suffix> <image name>
__check_and_create_image_var() {
if [ $# -ne 6 ]; then
- echo "Expected arg: <image name> <target-variable-name> <image-variable-name> <image-tag-variable-name> <tag-suffix> <app-short-name>"
+ echo "Expected arg: <app-short-name> <target-variable-name> <image-variable-name> <image-tag-variable-name> <tag-suffix> <image name>"
((IMAGE_ERR++))
return
fi
- __check_included_image $6
+
+ __check_included_image $1
if [ $? -ne 0 ]; then
- echo -e "$1\t<image-excluded>\t<no-tag>" >> $image_list_file
+ echo -e "$6\t$1\t<image-excluded>\t<no-tag>" >> $image_list_file
# Image is excluded since the corresponding app is not used in this test
return
fi
- tmp=${1}"\t"
+ tmp=${6}"\t"${1}"\t"
#Create var from the input var names
image="${!3}"
tmptag=$4"_"$5
tag="${!tmptag}"
if [ -z $image ]; then
+ __check_ignore_image $1
+ if [ $? -eq 0 ]; then
+ app_ds=$6
+ if [ -z "$6" ]; then
+ app_ds="<app ignored>"
+ fi
+ echo -e "$app_ds\t$1\t<image-ignored>\t<no-tag>" >> $image_list_file
+ # Image is ignored since the corresponding image is not set in the env file
+ __remove_included_image $1 # Remove the image from the list of included images
+ return
+ fi
echo -e $RED"\$"$3" not set in $TEST_ENV_VAR_FILE"$ERED
((IMAGE_ERR++))
echo ""
@@ -672,6 +694,52 @@
return 1
}
+# Check if app uses a project image
+# Returns 0 if image is included, 1 if not
+__check_project_image() {
+ for im in $PROJECT_IMAGES; do
+ if [ "$1" == "$im" ]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Check if app uses image built by the test script
+# Returns 0 if image is included, 1 if not
+__check_image_local_build() {
+ for im in $LOCAL_IMAGE_BUILD; do
+ if [ "$1" == "$im" ]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Check if app image is conditionally ignored in this test run
+# Returns 0 if image is conditionally ignored, 1 if not
+__check_ignore_image() {
+ for im in $CONDITIONALLY_IGNORED_IMAGES; do
+ if [ "$1" == "$im" ]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Remove the image from the list of included images
+# Used when an image is marked as conditionally ignored
+__remove_included_image() {
+ tmp_img_rem_list=""
+ for im in $INCLUDED_IMAGES; do
+ if [ "$1" != "$im" ]; then
+ tmp_img_rem_list=$tmp_img_rem_list" "$im
+ fi
+ done
+ INCLUDED_IMAGES=$tmp_img_rem_list
+ return 0
+}
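These helpers all share the same return-code idiom: membership in the respective space-separated list is signalled by returning 0. A minimal usage sketch (the app short name 'NGW' is just an example):

    __check_ignore_image "NGW"
    if [ $? -eq 0 ]; then
        # NGW is conditionally ignored in this run - drop it from the included set
        __remove_included_image "NGW"
    fi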
+
# Check if app is included in the prestarted set of apps
# Returns 0 if image is included, 1 if not
__check_prestarted_image() {
@@ -762,146 +830,6 @@
return 0
}
-# Check that image env setting are available
-echo ""
-
-#Agent image
-__check_included_image 'PA'
- if [ $? -eq 0 ]; then
- IMAGE_SUFFIX=$(__check_image_override 'PA')
- if [ $? -ne 0 ]; then
- echo -e $RED"Image setting from cmd line not consistent for PA."$ERED
- ((IMAGE_ERR++))
- fi
- __check_and_create_image_var " Policy Agent" "POLICY_AGENT_IMAGE" "POLICY_AGENT_IMAGE_BASE" "POLICY_AGENT_IMAGE_TAG" $IMAGE_SUFFIX PA
-fi
-
-#Remote Control Panel image
-__check_included_image 'CP'
-if [ $? -eq 0 ]; then
- IMAGE_SUFFIX=$(__check_image_override 'CP')
- if [ $? -ne 0 ]; then
- echo -e $RED"Image setting from cmd line not consistent for CP."$ERED
- ((IMAGE_ERR++))
- fi
- __check_and_create_image_var " Control Panel" "CONTROL_PANEL_IMAGE" "CONTROL_PANEL_IMAGE_BASE" "CONTROL_PANEL_IMAGE_TAG" $IMAGE_SUFFIX CP
-fi
-
-#Remote SDNC image
-__check_included_image 'SDNC'
-if [ $? -eq 0 ]; then
- IMAGE_SUFFIX=$(__check_image_override 'SDNC')
- if [ $? -ne 0 ]; then
- echo -e $RED"Image setting from cmd line not consistent for SDNC."$ERED
- ((IMAGE_ERR++))
- fi
- __check_and_create_image_var " SDNC A1 Controller" "SDNC_A1_CONTROLLER_IMAGE" "SDNC_A1_CONTROLLER_IMAGE_BASE" "SDNC_A1_CONTROLLER_IMAGE_TAG" $IMAGE_SUFFIX SDNC
-fi
-
-#Remote ric sim image
-__check_included_image 'RICSIM'
-if [ $? -eq 0 ]; then
- IMAGE_SUFFIX=$(__check_image_override 'RICSIM')
- if [ $? -ne 0 ]; then
- echo -e $RED"Image setting from cmd line not consistent for RICSIM."$ERED
- ((IMAGE_ERR++))
- fi
- __check_and_create_image_var " RIC Simulator" "RIC_SIM_IMAGE" "RIC_SIM_IMAGE_BASE" "RIC_SIM_IMAGE_TAG" $IMAGE_SUFFIX RICSIM
-fi
-
-#Remote ecs image
-__check_included_image 'ECS'
-if [ $? -eq 0 ]; then
- IMAGE_SUFFIX=$(__check_image_override 'ECS')
- if [ $? -ne 0 ]; then
- echo -e $RED"Image setting from cmd line not consistent for ECS."$EREDs
- ((IMAGE_ERR++))
- fi
- __check_and_create_image_var " ECS" "ECS_IMAGE" "ECS_IMAGE_BASE" "ECS_IMAGE_TAG" $IMAGE_SUFFIX ECS
-fi
-
-#Remote rc image
-__check_included_image 'RC'
-if [ $? -eq 0 ]; then
- IMAGE_SUFFIX=$(__check_image_override 'RC')
- if [ $? -ne 0 ]; then
- echo -e $RED"Image setting from cmd line not consistent for RC."$ERED
- ((IMAGE_ERR++))
- fi
- __check_and_create_image_var " RC" "RAPP_CAT_IMAGE" "RAPP_CAT_IMAGE_BASE" "RAPP_CAT_IMAGE_TAG" $IMAGE_SUFFIX RC
-fi
-
-# These images are not built as part of this project official images, just check that env vars are set correctly
-__check_included_image 'MR'
-if [ $? -eq 0 ]; then
- __check_and_create_image_var " Message Router stub" "MRSTUB_IMAGE" "MRSTUB_IMAGE_BASE" "MRSTUB_IMAGE_TAG" LOCAL MR
-fi
-__check_included_image 'DMAAPMR'
-if [ $? -eq 0 ]; then
- __check_and_create_image_var " DMAAP Message Router" "ONAP_DMAAPMR_IMAGE" "ONAP_DMAAPMR_IMAGE_BASE" "ONAP_DMAAPMR_IMAGE_TAG" REMOTE_RELEASE_ONAP DMAAPMR
- __check_and_create_image_var " ZooKeeper" "ONAP_ZOOKEEPER_IMAGE" "ONAP_ZOOKEEPER_IMAGE_BASE" "ONAP_ZOOKEEPER_IMAGE_TAG" REMOTE_RELEASE_ONAP DMAAPMR
- __check_and_create_image_var " Kafka" "ONAP_KAFKA_IMAGE" "ONAP_KAFKA_IMAGE_BASE" "ONAP_KAFKA_IMAGE_TAG" REMOTE_RELEASE_ONAP DMAAPMR
-fi
-__check_included_image 'CR'
-if [ $? -eq 0 ]; then
- __check_and_create_image_var " Callback Receiver" "CR_IMAGE" "CR_IMAGE_BASE" "CR_IMAGE_TAG" LOCAL CR
-fi
-__check_included_image 'PRODSTUB'
-if [ $? -eq 0 ]; then
- __check_and_create_image_var " Producer stub" "PROD_STUB_IMAGE" "PROD_STUB_IMAGE_BASE" "PROD_STUB_IMAGE_TAG" LOCAL PRODSTUB
-fi
-__check_included_image 'CONSUL'
-if [ $? -eq 0 ]; then
- __check_and_create_image_var " Consul" "CONSUL_IMAGE" "CONSUL_IMAGE_BASE" "CONSUL_IMAGE_TAG" REMOTE_PROXY CONSUL
-fi
-__check_included_image 'CBS'
-if [ $? -eq 0 ]; then
- __check_and_create_image_var " CBS" "CBS_IMAGE" "CBS_IMAGE_BASE" "CBS_IMAGE_TAG" REMOTE_RELEASE_ONAP CBS
-fi
-__check_included_image 'SDNC'
-if [ $? -eq 0 ]; then
- __check_and_create_image_var " SDNC DB" "SDNC_DB_IMAGE" "SDNC_DB_IMAGE_BASE" "SDNC_DB_IMAGE_TAG" REMOTE_PROXY SDNC #Uses sdnc app name
-fi
-__check_included_image 'HTTPPROXY'
-if [ $? -eq 0 ]; then
- __check_and_create_image_var " Http Proxy" "HTTP_PROXY_IMAGE" "HTTP_PROXY_IMAGE_BASE" "HTTP_PROXY_IMAGE_TAG" REMOTE_PROXY HTTPPROXY
-fi
-
-#Errors in image setting - exit
-if [ $IMAGE_ERR -ne 0 ]; then
- exit 1
-fi
-
-#Print a tables of the image settings
-echo -e $BOLD"Images configured for start arg: "$START $EBOLD
-column -t -s $'\t' $image_list_file
-
-echo ""
-
-
-#Set the SIM_GROUP var
-echo -e $BOLD"Setting var to main dir of all container/simulator scripts"$EBOLD
-if [ -z "$SIM_GROUP" ]; then
- SIM_GROUP=$PWD/../simulator-group
- if [ ! -d $SIM_GROUP ]; then
- echo "Trying to set env var SIM_GROUP to dir 'simulator-group' in the nontrtric repo, but failed."
- echo -e $RED"Please set the SIM_GROUP manually in the applicable $TEST_ENV_VAR_FILE"$ERED
- exit 1
- else
- echo " SIM_GROUP auto set to: " $SIM_GROUP
- fi
-elif [ $SIM_GROUP = *simulator_group ]; then
- echo -e $RED"Env var SIM_GROUP does not seem to point to dir 'simulator-group' in the repo, check $TEST_ENV_VAR_FILE"$ERED
- exit 1
-else
- echo " SIM_GROUP env var already set to: " $SIM_GROUP
-fi
-
-echo ""
-
-#Temp var to check for image pull errors
-IMAGE_ERR=0
-
#Function to check if image exist and stop+remove the container+pull new images as needed
#args <script-start-arg> <descriptive-image-name> <container-base-name> <image-with-tag>
__check_and_pull_image() {
@@ -922,7 +850,7 @@
if [ $1 == "remote-remove" ]; then
if [ $RUNMODE == "DOCKER" ]; then
echo -ne " Attempt to stop and remove container(s), if running - ${SAMELINE}"
- tmp="$(docker ps -aq --filter name=${3})"
+ tmp=$(docker ps -aq --filter name=${3} --filter network=${DOCKER_SIM_NWNAME})
if [ $? -eq 0 ] && [ ! -z "$tmp" ]; then
docker stop $tmp &> ./tmp/.dockererr
if [ $? -ne 0 ]; then
@@ -934,7 +862,7 @@
fi
fi
echo -ne " Attempt to stop and remove container(s), if running - "$GREEN"stopped"$EGREEN"${SAMELINE}"
- tmp="$(docker ps -aq --filter name=${3})" &> /dev/null
+ tmp=$(docker ps -aq --filter name=${3} --filter network=${DOCKER_SIM_NWNAME}) &> /dev/null
if [ $? -eq 0 ] && [ ! -z "$tmp" ]; then
docker rm $tmp &> ./tmp/.dockererr
if [ $? -ne 0 ]; then
@@ -977,266 +905,186 @@
return 0
}
-# The following sequence pull the configured images
-
-echo -e $BOLD"Pulling configured images, if needed"$EBOLD
-
-__check_included_image 'PA'
-if [ $? -eq 0 ]; then
- START_ARG_MOD=$START_ARG
- __check_image_local_override 'PA'
- if [ $? -eq 1 ]; then
- START_ARG_MOD="local"
- fi
- app="Policy Agent"; __check_and_pull_image $START_ARG_MOD "$app" $POLICY_AGENT_APP_NAME $POLICY_AGENT_IMAGE
-else
- echo -e $YELLOW" Excluding PA image from image check/pull"$EYELLOW
-fi
-
-__check_included_image 'ECS'
-if [ $? -eq 0 ]; then
- START_ARG_MOD=$START_ARG
- __check_image_local_override 'ECS'
- if [ $? -eq 1 ]; then
- START_ARG_MOD="local"
- fi
- app="ECS"; __check_and_pull_image $START_ARG_MOD "$app" $ECS_APP_NAME $ECS_IMAGE
-else
- echo -e $YELLOW" Excluding ECS image from image check/pull"$EYELLOW
-fi
-
-__check_included_image 'CP'
-if [ $? -eq 0 ]; then
- START_ARG_MOD=$START_ARG
- __check_image_local_override 'CP'
- if [ $? -eq 1 ]; then
- START_ARG_MOD="local"
- fi
- app="Non-RT RIC Control Panel"; __check_and_pull_image $START_ARG_MOD "$app" $CONTROL_PANEL_APP_NAME $CONTROL_PANEL_IMAGE
-else
- echo -e $YELLOW" Excluding Non-RT RIC Control Panel image from image check/pull"$EYELLOW
-fi
-
-__check_included_image 'RC'
-if [ $? -eq 0 ]; then
- START_ARG_MOD=$START_ARG
- __check_image_local_override 'RC'
- if [ $? -eq 1 ]; then
- START_ARG_MOD="local"
- fi
- app="RAPP Catalogue"; __check_and_pull_image $START_ARG_MOD "$app" $RAPP_CAT_APP_NAME $RAPP_CAT_IMAGE
-else
- echo -e $YELLOW" Excluding RAPP Catalogue image from image check/pull"$EYELLOW
-fi
-
-__check_included_image 'RICSIM'
-if [ $? -eq 0 ]; then
- START_ARG_MOD=$START_ARG
- __check_image_local_override 'RICSIM'
- if [ $? -eq 1 ]; then
- START_ARG_MOD="local"
- fi
- app="Near-RT RIC Simulator"; __check_and_pull_image $START_ARG_MOD "$app" $RIC_SIM_PREFIX"_"$RIC_SIM_BASE $RIC_SIM_IMAGE
-else
- echo -e $YELLOW" Excluding Near-RT RIC Simulator image from image check/pull"$EYELLOW
-fi
-
-
-__check_included_image 'CONSUL'
-if [ $? -eq 0 ]; then
- app="Consul"; __check_and_pull_image $START_ARG "$app" $CONSUL_APP_NAME $CONSUL_IMAGE
-else
- echo -e $YELLOW" Excluding Consul image from image check/pull"$EYELLOW
-fi
-
-__check_included_image 'CBS'
-if [ $? -eq 0 ]; then
- app="CBS"; __check_and_pull_image $START_ARG "$app" $CBS_APP_NAME $CBS_IMAGE
-else
- echo -e $YELLOW" Excluding CBS image from image check/pull"$EYELLOW
-fi
-
-__check_included_image 'SDNC'
-if [ $? -eq 0 ]; then
- START_ARG_MOD=$START_ARG
- __check_image_local_override 'SDNC'
- if [ $? -eq 1 ]; then
- START_ARG_MOD="local"
- fi
- app="SDNC A1 Controller"; __check_and_pull_image $START_ARG_MOD "$app" $SDNC_APP_NAME $SDNC_A1_CONTROLLER_IMAGE
- app="SDNC DB"; __check_and_pull_image $START_ARG "$app" $SDNC_APP_NAME $SDNC_DB_IMAGE
-else
- echo -e $YELLOW" Excluding SDNC image and related DB image from image check/pull"$EYELLOW
-fi
-
-__check_included_image 'HTTPPROXY'
-if [ $? -eq 0 ]; then
- app="HTTPPROXY"; __check_and_pull_image $START_ARG "$app" $HTTP_PROXY_APP_NAME $HTTP_PROXY_IMAGE
-else
- echo -e $YELLOW" Excluding Http Proxy image from image check/pull"$EYELLOW
-fi
-
-__check_included_image 'DMAAPMR'
-if [ $? -eq 0 ]; then
- app="DMAAP Message Router"; __check_and_pull_image $START_ARG "$app" $MR_DMAAP_APP_NAME $ONAP_DMAAPMR_IMAGE
- app="ZooKeeper"; __check_and_pull_image $START_ARG "$app" $MR_ZOOKEEPER_APP_NAME $ONAP_ZOOKEEPER_IMAGE
- app="Kafka"; __check_and_pull_image $START_ARG "$app" $MR_KAFKA_APP_NAME $ONAP_KAFKA_IMAGE
-else
- echo -e $YELLOW" Excluding DMAAP MR image and images (zookeeper, kafka) from image check/pull"$EYELLOW
-fi
-
-#Errors in image setting - exit
-if [ $IMAGE_ERR -ne 0 ]; then
+setup_testenvironment() {
+ # Check that image env settings are available
echo ""
- echo "#################################################################################################"
- echo -e $RED"One or more images could not be pulled or containers using the images could not be stopped/removed"$ERED
- echo -e $RED"Or local image, overriding remote image, does not exist"$ERED
- if [ $IMAGE_CATEGORY == "DEV" ]; then
- echo -e $RED"Note that SNAPSHOT images may be purged from nexus after a certain period."$ERED
- echo -e $RED"In that case, switch to use a released image instead."$ERED
+
+ # Image var setup for all project images included in the test
+ for imagename in $APP_SHORT_NAMES; do
+ __check_included_image $imagename
+ incl=$?
+ __check_project_image $imagename
+ proj=$?
+ if [ $incl -eq 0 ]; then
+ if [ $proj -eq 0 ]; then
+ IMAGE_SUFFIX=$(__check_image_override $imagename)
+ if [ $? -ne 0 ]; then
+ echo -e $RED"Image setting from cmd line not consistent for $imagename."$ERED
+ ((IMAGE_ERR++))
+ fi
+ else
+ IMAGE_SUFFIX="none"
+ fi
+ # A function name is created from the app short name
+ # for example app short name 'ECS' -> produce the function
+ # name __ECS_imagesetup
+ # This function is called and is expected to exist in the imported
+ # file for the ecs test functions
+ # The resulting function impl will call '__check_and_create_image_var' function
+ # with appropriate parameters
+ # If the image suffix is none, then the component decides the suffix
+ function_pointer="__"$imagename"_imagesetup"
+ $function_pointer $IMAGE_SUFFIX
+ fi
+ done
+
+ #Errors in image setting - exit
+ if [ $IMAGE_ERR -ne 0 ]; then
+ exit 1
fi
- echo "#################################################################################################"
+
+ #Print a tables of the image settings
+ echo -e $BOLD"Images configured for start arg: "$START_ARG $EBOLD
+ column -t -s $'\t' $image_list_file | indent1
+
echo ""
- exit 1
-fi
-echo ""
-
-echo -e $BOLD"Building images needed for test"$EBOLD
-
-curdir=$PWD
-__check_included_image 'MR'
-if [ $? -eq 0 ]; then
- cd $curdir
- cd ../mrstub
- echo " Building mrstub image: $MRSTUB_IMAGE"
- docker build --build-arg NEXUS_PROXY_REPO=$NEXUS_PROXY_REPO -t $MRSTUB_IMAGE . &> .dockererr
- if [ $? -eq 0 ]; then
- echo -e $GREEN" Build Ok"$EGREEN
+ #Set the SIM_GROUP var
+ echo -e $BOLD"Setting var to main dir of all container/simulator scripts"$EBOLD
+ if [ -z "$SIM_GROUP" ]; then
+ SIM_GROUP=$AUTOTEST_HOME/../simulator-group
+ if [ ! -d $SIM_GROUP ]; then
+ echo "Trying to set env var SIM_GROUP to dir 'simulator-group' in the nontrtric repo, but failed."
+ echo -e $RED"Please set the SIM_GROUP manually in the applicable $TEST_ENV_VAR_FILE"$ERED
+ exit 1
+ else
+ echo " SIM_GROUP auto set to: " $SIM_GROUP
+ fi
+ elif [ $SIM_GROUP = *simulator_group ]; then
+ echo -e $RED"Env var SIM_GROUP does not seem to point to dir 'simulator-group' in the repo, check $TEST_ENV_VAR_FILE"$ERED
+ exit 1
else
- echo -e $RED" Build Failed"$ERED
- ((RES_CONF_FAIL++))
- cat .dockererr
- echo -e $RED"Exiting...."$ERED
+ echo " SIM_GROUP env var already set to: " $SIM_GROUP
+ fi
+
+ echo ""
+
+ #Temp var to check for image pull errors
+ IMAGE_ERR=0
+
+ # The following sequence pull the configured images
+
+ echo -e $BOLD"Pulling configured images, if needed"$EBOLD
+
+ for imagename in $APP_SHORT_NAMES; do
+ __check_included_image $imagename
+ incl=$?
+ __check_project_image $imagename
+ proj=$?
+ if [ $incl -eq 0 ]; then
+ if [ $proj -eq 0 ]; then
+ START_ARG_MOD=$START_ARG
+ __check_image_local_override $imagename
+ if [ $? -eq 1 ]; then
+ START_ARG_MOD="local"
+ fi
+ else
+ START_ARG_MOD=$START_ARG
+ fi
+ __check_image_local_build $imagename
+ #No pull of images built locally
+ if [ $? -ne 0 ]; then
+ # A function name is created from the app short name
+ # for example app short name 'HTTPPROXY' -> produce the function
+ # name __HTTPPROXY_imagepull
+ # This function is called and is expected to exist in the imported
+ # file for the httpproxy test functions
+ # The resulting function impl will call '__check_and_pull_image' function
+ # with appropriate parameters
+ function_pointer="__"$imagename"_imagepull"
+ $function_pointer $START_ARG_MOD $START_ARG
+ fi
+ else
+ echo -e $YELLOW" Excluding $imagename image from image check/pull"$EYELLOW
+ fi
+ done
+
+ #Errors in image setting - exit
+ if [ $IMAGE_ERR -ne 0 ]; then
+ echo ""
+ echo "#################################################################################################"
+ echo -e $RED"One or more images could not be pulled or containers using the images could not be stopped/removed"$ERED
+ echo -e $RED"Or local image, overriding remote image, does not exist"$ERED
+ if [ $IMAGE_CATEGORY == "DEV" ]; then
+ echo -e $RED"Note that SNAPSHOT images may be purged from nexus after a certain period."$ERED
+ echo -e $RED"In that case, switch to use a released image instead."$ERED
+ fi
+ echo "#################################################################################################"
+ echo ""
exit 1
fi
- cd $curdir
-else
- echo -e $YELLOW" Excluding mrstub from image build"$EYELLOW
-fi
-__check_included_image 'CR'
-if [ $? -eq 0 ]; then
- cd ../cr
- echo " Building Callback Receiver image: $CR_IMAGE"
- docker build --build-arg NEXUS_PROXY_REPO=$NEXUS_PROXY_REPO -t $CR_IMAGE . &> .dockererr
- if [ $? -eq 0 ]; then
- echo -e $GREEN" Build Ok"$EGREEN
- else
- echo -e $RED" Build Failed"$ERED
- ((RES_CONF_FAIL++))
- cat .dockererr
- echo -e $RED"Exiting...."$ERED
- exit 1
- fi
- cd $curdir
-else
- echo -e $YELLOW" Excluding Callback Receiver from image build"$EYELLOW
-fi
+ echo ""
-__check_included_image 'PRODSTUB'
-if [ $? -eq 0 ]; then
- cd ../prodstub
- echo " Building Producer stub image: $PROD_STUB_IMAGE"
- docker build --build-arg NEXUS_PROXY_REPO=$NEXUS_PROXY_REPO -t $PROD_STUB_IMAGE . &> .dockererr
- if [ $? -eq 0 ]; then
- echo -e $GREEN" Build Ok"$EGREEN
- else
- echo -e $RED" Build Failed"$ERED
- ((RES_CONF_FAIL++))
- cat .dockererr
- echo -e $RED"Exiting...."$ERED
- exit 1
- fi
- cd $curdir
-else
- echo -e $YELLOW" Excluding Producer stub from image build"$EYELLOW
-fi
+ echo -e $BOLD"Building images needed for test"$EBOLD
-echo ""
+ for imagename in $APP_SHORT_NAMES; do
+ cd $AUTOTEST_HOME #Always reset to orig dir
+ __check_image_local_build $imagename
+ if [ $? -eq 0 ]; then
+ __check_included_image $imagename
+ if [ $? -eq 0 ]; then
+ # A function name is created from the app short name
+ # for example app short name 'MR' -> produce the function
+ # name __MR_imagebuild
+ # This function is called and is expected to exist in the imported
+ # file for the mr test functions
+ # The resulting function impl shall build the image
+ function_pointer="__"$imagename"_imagebuild"
+ $function_pointer
-# Create a table of the images used in the script
-echo -e $BOLD"Local docker registry images used in the this test script"$EBOLD
+ else
+ echo -e $YELLOW" Excluding image for app $imagename from image build"$EYELLOW
+ fi
+ fi
+ done
-docker_tmp_file=./tmp/.docker-images-table
-format_string="{{.Repository}}\\t{{.Tag}}\\t{{.CreatedSince}}\\t{{.Size}}\\t{{.CreatedAt}}"
-echo -e " Application\tRepository\tTag\tCreated since\tSize\tCreated at" > $docker_tmp_file
+ cd $AUTOTEST_HOME # Just to make sure...
-__check_included_image 'PA'
-if [ $? -eq 0 ]; then
- echo -e " Policy Agent\t$(docker images --format $format_string $POLICY_AGENT_IMAGE)" >> $docker_tmp_file
-fi
+ echo ""
-__check_included_image 'ECS'
-if [ $? -eq 0 ]; then
- echo -e " ECS\t$(docker images --format $format_string $ECS_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'CP'
-if [ $? -eq 0 ]; then
- echo -e " Control Panel\t$(docker images --format $format_string $CONTROL_PANEL_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'RICSIM'
-if [ $? -eq 0 ]; then
- echo -e " RIC Simulator\t$(docker images --format $format_string $RIC_SIM_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'RC'
-if [ $? -eq 0 ]; then
- echo -e " RAPP Catalogue\t$(docker images --format $format_string $RAPP_CAT_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'MR'
-if [ $? -eq 0 ]; then
- echo -e " Message Router stub\t$(docker images --format $format_string $MRSTUB_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'DMAAPMR'
-if [ $? -eq 0 ]; then
- echo -e " DMAAP Message Router\t$(docker images --format $format_string $ONAP_DMAAPMR_IMAGE)" >> $docker_tmp_file
- echo -e " ZooKeeper\t$(docker images --format $format_string $ONAP_ZOOKEEPER_IMAGE)" >> $docker_tmp_file
- echo -e " Kafka\t$(docker images --format $format_string $ONAP_KAFKA_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'CR'
-if [ $? -eq 0 ]; then
- echo -e " Callback Receiver\t$(docker images --format $format_string $CR_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'PRODSTUB'
-if [ $? -eq 0 ]; then
- echo -e " Producer stub\t$(docker images --format $format_string $PROD_STUB_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'CONSUL'
-if [ $? -eq 0 ]; then
- echo -e " Consul\t$(docker images --format $format_string $CONSUL_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'CBS'
-if [ $? -eq 0 ]; then
- echo -e " CBS\t$(docker images --format $format_string $CBS_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'SDNC'
-if [ $? -eq 0 ]; then
- echo -e " SDNC A1 Controller\t$(docker images --format $format_string $SDNC_A1_CONTROLLER_IMAGE)" >> $docker_tmp_file
- echo -e " SDNC DB\t$(docker images --format $format_string $SDNC_DB_IMAGE)" >> $docker_tmp_file
-fi
-__check_included_image 'HTTPPROXY'
-if [ $? -eq 0 ]; then
- echo -e " Http Proxy\t$(docker images --format $format_string $HTTP_PROXY_IMAGE)" >> $docker_tmp_file
-fi
+ # Create a table of the images used in the script
+ echo -e $BOLD"Local docker registry images used in the this test script"$EBOLD
-column -t -s $'\t' $docker_tmp_file
+ docker_tmp_file=./tmp/.docker-images-table
+ format_string="{{.Repository}}\\t{{.Tag}}\\t{{.CreatedSince}}\\t{{.Size}}\\t{{.CreatedAt}}"
+ echo -e "Application\tRepository\tTag\tCreated since\tSize\tCreated at" > $docker_tmp_file
-echo ""
+ for imagename in $APP_SHORT_NAMES; do
+ __check_included_image $imagename
+ if [ $? -eq 0 ]; then
+ # A function name is created from the app short name
+ # for example app short name 'MR' -> produce the function
+ # name __MR_image_data
+ # This function is called and is expected to exist in the imported
+ # file for the mr test functions
+ # The resulting function impl shall append the image data for the app to the given file
+ function_pointer="__"$imagename"_image_data"
+ $function_pointer "$format_string" $docker_tmp_file
+ fi
+ done
-echo -e $BOLD"======================================================="$EBOLD
-echo -e $BOLD"== Common test setup completed - test script begins =="$EBOLD
-echo -e $BOLD"======================================================="$EBOLD
-echo ""
+
+ column -t -s $'\t' $docker_tmp_file | indent1
+
+ echo ""
+
+ echo -e $BOLD"======================================================="$EBOLD
+ echo -e $BOLD"== Common test setup completed - test script begins =="$EBOLD
+ echo -e $BOLD"======================================================="$EBOLD
+ echo ""
+
+}
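With the setup moved into a function, each auto-test script is expected to source the needed component scripts first and then call setup_testenvironment. A hedged sketch of the expected top of a test script follows; the exact set of sourced files and the variables set before sourcing depend on the individual test case:

    #!/bin/bash
    TC_ONELINE_DESCR="Example one-line description of the test case"

    . ../common/testcase_common.sh $@
    . ../common/mr_api_functions.sh
    . ../common/control_panel_api_function.sh
    . ../common/gateway_api_functions.sh
    . ../common/kube_proxy_api_functions.sh

    setup_testenvironment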
# Function to print the test result, shall be the last cmd in a test script
# args: -
@@ -1306,7 +1154,7 @@
echo " - "$ATC " -- "$TC_ONELINE_DESCR" Execution time: "$duration" seconds" >> .tmp_tcsuite_pass
fi
#Create file with OK exit code
- echo "0" > "$PWD/.result$ATC.txt"
+ echo "0" > "$AUTOTEST_HOME/.result$ATC.txt"
else
echo -e "One or more tests with status \033[31m\033[1mFAIL\033[0m "
echo -e "\033[31m\033[1m ___ _ ___ _ \033[0m"
@@ -1420,62 +1268,97 @@
return 0
}
-# Check if app name var is set. If so return the app name otherwise return "NOTSET"
-__check_app_name() {
- if [ $# -eq 1 ]; then
- echo $1
- else
- echo "NOTSET"
- fi
-}
-
# Stop and remove all containers
# args: -
# (Not for test scripts)
__clean_containers() {
- echo -e $BOLD"Stopping and removing all running containers, by container name"$EBOLD
+ echo -e $BOLD"Docker clean and stopping and removing all running containers, by container name"$EBOLD
- CONTAINTER_NAMES=("Policy Agent " $(__check_app_name $POLICY_AGENT_APP_NAME)\
- "ECS " $(__check_app_name $ECS_APP_NAME)\
- "RAPP Catalogue " $(__check_app_name $RAPP_CAT_APP_NAME)\
- "Non-RT RIC Simulator(s)" $(__check_app_name $RIC_SIM_PREFIX)\
- "Message Router stub " $(__check_app_name $MR_STUB_APP_NAME)\
- "DMAAP Message Router " $(__check_app_name $MR_DMAAP_APP_NAME)\
- "Zookeeper " $(__check_app_name $MR_ZOOKEEPER_APP_NAME)\
- "Kafka " $(__check_app_name $MR_KAFKA_APP_NAME)\
- "Callback Receiver " $(__check_app_name $CR_APP_NAME)\
- "Producer stub " $(__check_app_name $PROD_STUB_APP_NAME)\
- "Control Panel " $(__check_app_name $CONTROL_PANEL_APP_NAME)\
- "SDNC A1 Controller " $(__check_app_name $SDNC_APP_NAME)\
- "SDNC DB " $(__check_app_name $SDNC_DB_APP_NAME)\
- "CBS " $(__check_app_name $CBS_APP_NAME)\
- "Consul " $(__check_app_name $CONSUL_APP_NAME)\
- "Http Proxy " $(__check_app_name $HTTP_PROXY_APP_NAME))
+ #Create empty file
+ running_contr_file="./tmp/running_contr.txt"
+ > $running_contr_file
- nw=0 # Calc max width of container name, to make a nice table
- for (( i=1; i<${#CONTAINTER_NAMES[@]} ; i+=2 )) ; do
-
- if [ ${#CONTAINTER_NAMES[i]} -gt $nw ]; then
- nw=${#CONTAINTER_NAMES[i]}
- fi
+ # Get list of all containers started by the test script
+ for imagename in $APP_SHORT_NAMES; do
+ docker ps -a --filter "label=nrttest_app=$imagename" --filter "network=$DOCKER_SIM_NWNAME" --format ' {{.Label "nrttest_dp"}}\n{{.Label "nrttest_app"}}\n{{.Names}}' >> $running_contr_file
done
- for (( i=0; i<${#CONTAINTER_NAMES[@]} ; i+=2 )) ; do
- APP="${CONTAINTER_NAMES[i]}"
- CONTR="${CONTAINTER_NAMES[i+1]}"
- if [ $CONTR != "NOTSET" ]; then
- for((w=${#CONTR}; w<$nw; w=w+1)); do
- CONTR="$CONTR "
- done
- echo -ne " $APP: $CONTR - ${GREEN}stopping${EGREEN}${SAMELINE}"
- docker stop $(docker ps -qa --filter name=${CONTR}) &> /dev/null
- echo -ne " $APP: $CONTR - ${GREEN}stopped${EGREEN}${SAMELINE}"
- docker rm --force $(docker ps -qa --filter name=${CONTR}) &> /dev/null
- echo -e " $APP: $CONTR - ${GREEN}stopped removed${EGREEN}"
+ tab_heading1="App display name"
+ tab_heading2="App short name"
+ tab_heading3="Container name"
+
+ tab_heading1_len=${#tab_heading1}
+ tab_heading2_len=${#tab_heading2}
+ tab_heading3_len=${#tab_heading3}
+ cntr=0
+ #Calc field lengths of each item in the list of containers
+ while read p; do
+ if (( $cntr % 3 == 0 ));then
+ if [ ${#p} -gt $tab_heading1_len ]; then
+ tab_heading1_len=${#p}
+ fi
fi
+ if (( $cntr % 3 == 1));then
+ if [ ${#p} -gt $tab_heading2_len ]; then
+ tab_heading2_len=${#p}
+ fi
+ fi
+ if (( $cntr % 3 == 2));then
+ if [ ${#p} -gt $tab_heading3_len ]; then
+ tab_heading3_len=${#p}
+ fi
+ fi
+ let cntr=cntr+1
+ done <$running_contr_file
+
+ let tab_heading1_len=tab_heading1_len+2
+ while (( ${#tab_heading1} < $tab_heading1_len)); do
+ tab_heading1="$tab_heading1"" "
done
+ let tab_heading2_len=tab_heading2_len+2
+ while (( ${#tab_heading2} < $tab_heading2_len)); do
+ tab_heading2="$tab_heading2"" "
+ done
+
+ let tab_heading3_len=tab_heading3_len+2
+ while (( ${#tab_heading3} < $tab_heading3_len)); do
+ tab_heading3="$tab_heading3"" "
+ done
+
+ echo " $tab_heading1$tab_heading2$tab_heading3"" Actions"
+ cntr=0
+ while read p; do
+ if (( $cntr % 3 == 0 ));then
+ row=""
+ heading=$p
+ heading_len=$tab_heading1_len
+ fi
+ if (( $cntr % 3 == 1));then
+ heading=$p
+ heading_len=$tab_heading2_len
+ fi
+ if (( $cntr % 3 == 2));then
+ contr=$p
+ heading=$p
+ heading_len=$tab_heading3_len
+ fi
+ while (( ${#heading} < $heading_len)); do
+ heading="$heading"" "
+ done
+ row=$row$heading
+ if (( $cntr % 3 == 2));then
+ echo -ne $row$SAMELINE
+ echo -ne " $row ${GREEN}stopping...${EGREEN}${SAMELINE}"
+ docker stop $(docker ps -qa --filter name=${contr} --filter network=$DOCKER_SIM_NWNAME) &> /dev/null
+ echo -ne " $row ${GREEN}stopped removing...${EGREEN}${SAMELINE}"
+ docker rm --force $(docker ps -qa --filter name=${contr} --filter network=$DOCKER_SIM_NWNAME) &> /dev/null
+ echo -e " $row ${GREEN}stopped removed ${EGREEN}"
+ fi
+ let cntr=cntr+1
+ done <$running_contr_file
+
echo ""
echo -e $BOLD" Removing docker network"$EBOLD
@@ -1634,7 +1517,7 @@
namespace=$1
labelname=$2
labelid=$3
- resources="deployments replicaset statefulset services pods configmaps pvc"
+ resources="deployments replicaset statefulset services pods configmaps persistentvolumeclaims persistentvolumes"
deleted_resourcetypes=""
for restype in $resources; do
result=$(kubectl get $restype -n $namespace -o jsonpath='{.items[?(@.metadata.labels.'$labelname'=="'$labelid'")].metadata.name}')
@@ -1733,17 +1616,6 @@
return 1
}
-# Translate ric name to kube host name
-# args: <ric-name>
-# For test scripts
-get_kube_sim_host() {
- name=$(echo "$1" | tr '_' '-') #kube does not accept underscore in names
- #example gnb_1_2 -> gnb-1-2
- set_name=$(echo $name | rev | cut -d- -f2- | rev) # Cut index part of ric name to get the name of statefulset
- # example gnb-g1-2 -> gnb-g1 where gnb-g1-2 is the ric name and gnb-g1 is the set name
- echo $name"."$set_name"."$KUBE_NONRTRIC_NAMESPACE
-}
-
# Find the named port to an app (using the service resource)
# args: <app-name> <namespace> <port-name>
# (Not for test scripts)
@@ -1769,6 +1641,31 @@
return 1
}
+# Find the named node port to an app (using the service resource)
+# args: <app-name> <namespace> <port-name>
+# (Not for test scripts)
+__kube_get_service_nodeport() {
+ if [ $# -ne 3 ]; then
+ ((RES_CONF_FAIL++))
+ __print_err "need 3 args, <app-name> <namespace> <port-name>" $@
+ exit 1
+ fi
+
+ for timeout in {1..60}; do
+ port=$(kubectl get svc $1 -n $2 -o jsonpath='{...ports[?(@.name=="'$3'")].nodePort}')
+ if [ $? -eq 0 ]; then
+ if [ ! -z "$port" ]; then
+ echo $port
+ return 0
+ fi
+ fi
+ sleep 0.5
+ done
+ ((RES_CONF_FAIL++))
+ echo "0"
+ return 1
+}
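A possible use of this lookup, sketched here only as an assumption about how the kube proxy nodeport could be resolved and later consumed by __do_curl; the service name, namespace and port name are not taken from this change:

    CLUSTER_KUBE_PROXY_NODEPORT=$(__kube_get_service_nodeport $KUBE_PROXY_APP_NAME $KUBE_SIM_NAMESPACE "http")
    if [ $? -ne 0 ] || [ "$CLUSTER_KUBE_PROXY_NODEPORT" == "0" ]; then
        echo "Could not resolve the kube proxy nodeport"
    fi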
+
# Create a kube resource from a yaml template
# args: <resource-type> <resource-name> <template-yaml> <output-yaml>
# (Not for test scripts)
@@ -1828,142 +1725,44 @@
echo -e $BOLD"Initialize kube services//pods/statefulsets/replicaset to initial state"$EBOLD
# Scale prestarted or managed apps
- __check_prestarted_image 'RICSIM'
- if [ $? -eq 0 ]; then
- echo -e " Scaling all kube resources for app $BOLD RICSIM $EBOLD to 0"
- __kube_scale_and_wait_all_resources $KUBE_NONRTRIC_NAMESPACE app nonrtric-a1simulator
- else
- echo -e " Scaling all kube resources for app $BOLD RICSIM $EBOLD to 0"
- __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest RICSIM
- fi
+ for imagename in $APP_SHORT_NAMES; do
+ __check_included_image $imagename
+ if [ $? -eq 0 ]; then
+ # A function name is created from the app short name
+ # for example app short name 'RICSIM' -> produce the function
+ # name __RICSIM_kube_scale_zero or __RICSIM_kube_scale_zero_and_wait
+ # This function is called and is expected to exist in the imported
+ # file for the ricsim test functions
+ # The resulting function impl shall scale the resources to 0
+ __check_prestarted_image $imagename
+ if [ $? -eq 0 ]; then
+ function_pointer="__"$imagename"_kube_scale_zero_and_wait"
+ else
+ function_pointer="__"$imagename"_kube_scale_zero"
+ fi
+ echo -e " Scaling all kube resources for app $BOLD $imagename $EBOLD to 0"
+ $function_pointer
+ fi
+ done
- __check_prestarted_image 'PA'
- if [ $? -eq 0 ]; then
- echo -e " Scaling all kube resources for app $BOLD PA $EBOLD to 0"
- __kube_scale_and_wait_all_resources $KUBE_NONRTRIC_NAMESPACE app nonrtric-policymanagementservice
- else
- echo -e " Scaling all kube resources for app $BOLD PA $EBOLD to 0"
- __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest PA
- fi
-
- __check_prestarted_image 'ECS'
- if [ $? -eq 0 ]; then
- echo -e " Scaling all kube resources for app $BOLD ECS $EBOLD to 0"
- __kube_scale_and_wait_all_resources $KUBE_NONRTRIC_NAMESPACE app nonrtric-enrichmentservice
- else
- echo -e " Scaling all kube resources for app $BOLD ECS $EBOLD to 0"
- __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest ECS
- fi
-
- __check_prestarted_image 'RC'
- if [ $? -eq 0 ]; then
- echo -e " Scaling all kube resources for app $BOLD RC $EBOLD to 0"
- __kube_scale_and_wait_all_resources $KUBE_NONRTRIC_NAMESPACE app nonrtric-rappcatalogueservice
- else
- echo -e " Scaling all kube resources for app $BOLD RC $EBOLD to 0"
- __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest RC
- fi
-
- __check_prestarted_image 'CP'
- if [ $? -eq 0 ]; then
- echo -e " CP replicas kept as is"
- else
- echo -e " Scaling all kube resources for app $BOLD CP $EBOLD to 0"
- __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest CP
- fi
-
- __check_prestarted_image 'SDNC'
- if [ $? -eq 0 ]; then
- echo -e " SDNC replicas kept as is"
- else
- echo -e " Scaling all kube resources for app $BOLD SDNC $EBOLD to 0"
- __kube_scale_all_resources $KUBE_NONRTRIC_NAMESPACE autotest SDNC
- fi
-
- __check_prestarted_image 'MR'
- if [ $? -eq 0 ]; then
- echo -e " MR replicas kept as is"
- else
- echo -e " Scaling all kube resources for app $BOLD MR $EBOLD to 0"
- __kube_scale_all_resources $KUBE_ONAP_NAMESPACE autotest MR
- fi
-
- __check_prestarted_image 'DMAAPMR'
- if [ $? -eq 0 ]; then
- echo -e " DMAAP replicas kept as is"
- else
- echo -e " Scaling all kube resources for app $BOLD DMAAPMR $EBOLD to 0"
- __kube_scale_all_resources $KUBE_ONAP_NAMESPACE autotest DMAAPMR
- fi
-
- echo -e " Scaling all kube resources for app $BOLD CR $EBOLD to 0"
- __kube_scale_all_resources $KUBE_SIM_NAMESPACE autotest CR
-
- echo -e " Scaling all kube resources for app $BOLD PRODSTUB $EBOLD to 0"
- __kube_scale_all_resources $KUBE_SIM_NAMESPACE autotest PRODSTUB
-
- echo -e " Scaling all kube resources for app $BOLD HTTPPROXY $EBOLD to 0"
- __kube_scale_all_resources $KUBE_SIM_NAMESPACE autotest HTTPPROXY
-
-
- ## Clean all managed apps
-
- __check_prestarted_image 'RICSIM'
- if [ $? -eq 1 ]; then
- echo -e " Deleting all kube resources for app $BOLD RICSIM $EBOLD"
- __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest RICSIM
- fi
-
- __check_prestarted_image 'PA'
- if [ $? -eq 1 ]; then
- echo -e " Deleting all kube resources for app $BOLD PA $EBOLD"
- __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest PA
- fi
-
- __check_prestarted_image 'ECS'
- if [ $? -eq 1 ]; then
- echo -e " Deleting all kube resources for app $BOLD ECS $EBOLD"
- __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest ECS
- fi
-
- __check_prestarted_image 'RC'
- if [ $? -eq 1 ]; then
- echo -e " Deleting all kube resources for app $BOLD RC $EBOLD"
- __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest RC
- fi
-
- __check_prestarted_image 'CP'
- if [ $? -eq 1 ]; then
- echo -e " Deleting all kube resources for app $BOLD CP $EBOLD"
- __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest CP
- fi
-
- __check_prestarted_image 'SDNC'
- if [ $? -eq 1 ]; then
- echo -e " Deleting all kube resources for app $BOLD SDNC $EBOLD"
- __kube_delete_all_resources $KUBE_NONRTRIC_NAMESPACE autotest SDNC
- fi
-
- __check_prestarted_image 'MR'
- if [ $? -eq 1 ]; then
- echo -e " Deleting all kube resources for app $BOLD MR $EBOLD"
- __kube_delete_all_resources $KUBE_ONAP_NAMESPACE autotest MR
- fi
-
- __check_prestarted_image 'DMAAPMR'
- if [ $? -eq 1 ]; then
- echo -e " Deleting all kube resources for app $BOLD DMAAPMR $EBOLD"
- __kube_delete_all_resources $KUBE_ONAP_NAMESPACE autotest DMAAPMR
- fi
-
- echo -e " Deleting all kube resources for app $BOLD CR $EBOLD"
- __kube_delete_all_resources $KUBE_SIM_NAMESPACE autotest CR
-
- echo -e " Deleting all kube resources for app $BOLD PRODSTUB $EBOLD"
- __kube_delete_all_resources $KUBE_SIM_NAMESPACE autotest PRODSTUB
-
- echo -e " Deleting all kube resources for app $BOLD HTTPPROXY $EBOLD"
- __kube_delete_all_resources $KUBE_SIM_NAMESPACE autotest HTTPPROXY
+ # Delete managed apps
+ for imagename in $APP_SHORT_NAMES; do
+ __check_included_image $imagename
+ if [ $? -eq 0 ]; then
+ __check_prestarted_image $imagename
+ if [ $? -ne 0 ]; then
+ # A function name is created from the app short name
+ # for example app short name 'RICSIM' -> produce the function
+ # name __RICSIM_kube_delete_all
+ # This function is called and is expected to exist in the imported
+ # file for the ricsim test functions
+ # The resulting function impl shall delete all its resources
+ function_pointer="__"$imagename"_kube_delete_all"
+ echo -e " Deleting all kube resources for app $BOLD $imagename $EBOLD"
+ $function_pointer
+ fi
+ fi
+ done
echo ""
}
@@ -2023,50 +1822,6 @@
((RES_CONF_FAIL++))
}
-
-# Helper function to get a the port of a specific ric simulator
-# args: <ric-id>
-# (Not for test scripts)
-__find_sim_port() {
- name=$1" " #Space appended to prevent matching 10 if 1 is desired....
- cmdstr="docker inspect --format='{{(index (index .NetworkSettings.Ports \"$RIC_SIM_PORT/tcp\") 0).HostPort}}' ${name}"
- res=$(eval $cmdstr)
- if [[ "$res" =~ ^[0-9]+$ ]]; then
- echo $res
- else
- echo "0"
- fi
-}
-
-# Helper function to get a the port and host name of a specific ric simulator
-# args: <ric-id>
-# (Not for test scripts)
-__find_sim_host() {
- if [ $RUNMODE == "KUBE" ]; then
- ricname=$(echo "$1" | tr '_' '-')
- for timeout in {1..60}; do
- host=$(kubectl get pod $ricname -n $KUBE_NONRTRIC_NAMESPACE -o jsonpath='{.status.podIP}' 2> /dev/null)
- if [ ! -z "$host" ]; then
- echo $RIC_SIM_HTTPX"://"$host":"$RIC_SIM_PORT
- return 0
- fi
- sleep 0.5
- done
- echo "host-not-found-fatal-error"
- else
- name=$1" " #Space appended to prevent matching 10 if 1 is desired....
- cmdstr="docker inspect --format='{{(index (index .NetworkSettings.Ports \"$RIC_SIM_PORT/tcp\") 0).HostPort}}' ${name}"
- res=$(eval $cmdstr)
- if [[ "$res" =~ ^[0-9]+$ ]]; then
- echo $RIC_SIM_HOST:$res
- return 0
- else
- echo "0"
- fi
- fi
- return 1
-}
-
# Function to create the docker network for the test
# Not to be called from the test script itself.
__create_docker_network() {
@@ -2090,12 +1845,14 @@
}
# Function to start container with docker-compose and wait until all are in state running.
-#args: <docker-compose-dir> <docker-compose-arg>|NODOCKERARGS <count> <app-name>+
+# If the <docker-compose-file> is empty, the default 'docker-compose.yml' is assumed.
+#args: <docker-compose-dir> <docker-compose-file> <docker-compose-arg>|NODOCKERARGS <count> <app-name>+
# (Not for test scripts)
__start_container() {
- if [ $# -lt 4 ]; then
+
+ if [ $# -lt 5 ]; then
((RES_CONF_FAIL++))
- __print_err "need 4 or more args, <docker-compose-dir> <docker-compose-arg>|NODOCKERARGS <count> <app-name>+" $@
+ __print_err "need 5 or more args, <docker-compose-dir> <docker-compose-file> <docker-compose-arg>|NODOCKERARGS <count> <app-name>+" $@
exit 1
fi
@@ -2106,13 +1863,18 @@
compose_dir=$1
cd $1
shift
+ compose_file=$1
+ if [ -z "$compose_file" ]; then
+ compose_file="docker-compose.yml"
+ fi
+ shift
compose_args=$1
shift
appcount=$1
shift
if [ "$compose_args" == "NODOCKERARGS" ]; then
- docker-compose up -d &> .dockererr
+ docker-compose -f $compose_file up -d &> .dockererr
if [ $? -ne 0 ]; then
echo -e $RED"Problem to launch container(s) with docker-compose"$ERED
cat .dockererr
@@ -2120,7 +1882,7 @@
exit 1
fi
else
- docker-compose up -d $compose_args &> .dockererr
+ docker-compose -f $compose_file up -d $compose_args &> .dockererr
if [ $? -ne 0 ]; then
echo -e $RED"Problem to launch container(s) with docker-compose"$ERED
cat .dockererr
@@ -2157,14 +1919,6 @@
return 0
}
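A hypothetical call to __start_container with the new <docker-compose-file> argument, using the SDNC variables defined in the env file (the argument values are illustrative):

    __start_container $SDNC_COMPOSE_DIR $SDNC_COMPOSE_FILE NODOCKERARGS 2 $SDNC_APP_NAME $SDNC_DB_APP_NAME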
-# Generate a UUID to use as prefix for policy ids
-generate_uuid() {
- UUID=$(python3 -c 'import sys,uuid; sys.stdout.write(uuid.uuid4().hex)')
- #Reduce length to make space for serial id, uses 'a' as marker where the serial id is added
- UUID=${UUID:0:${#UUID}-4}"a"
-}
-
-
# Function to check if container/service is responding to http/https
# args: <container-name>|<service-name> url
# (Not for test scripts)
@@ -2189,7 +1943,8 @@
pa_st=false
echo -ne " Waiting for ${ENTITY} ${appname} service status...${SAMELINE}"
TSTART=$SECONDS
- for i in {1..50}; do
+ loop_ctr=0
+ while (( $TSTART+600 > $SECONDS )); do
result="$(__do_curl $url)"
if [ $? -eq 0 ]; then
if [ ${#result} -gt 15 ]; then
@@ -2202,11 +1957,16 @@
break
else
TS_TMP=$SECONDS
- while [ $(($TS_TMP+$i)) -gt $SECONDS ]; do
- echo -ne " Waiting for ${ENTITY} ${appname} service status on ${url}...$(($SECONDS-$TSTART)) seconds, retrying in $(($TS_TMP+$i-$SECONDS)) seconds ${SAMELINE}"
+ TS_OFFSET=$loop_ctr
+ if (( $TS_OFFSET > 5 )); then
+ TS_OFFSET=5
+ fi
+ while [ $(($TS_TMP+$TS_OFFSET)) -gt $SECONDS ]; do
+ echo -ne " Waiting for ${ENTITY} ${appname} service status on ${url}...$(($SECONDS-$TSTART)) seconds, retrying in $(($TS_TMP+$TS_OFFSET-$SECONDS)) seconds ${SAMELINE}"
sleep 1
done
fi
+ let loop_ctr=loop_ctr+1
done
if [ "$pa_st" = "false" ]; then
@@ -2224,26 +1984,6 @@
### Log functions
#################
-# Check the agent logs for WARNINGs and ERRORs
-# args: -
-# (Function for test scripts)
-
-check_policy_agent_logs() {
- __check_container_logs "Policy Agent" $POLICY_AGENT_APP_NAME $POLICY_AGENT_LOGPATH WARN ERR
-}
-
-check_ecs_logs() {
- __check_container_logs "ECS" $ECS_APP_NAME $ECS_LOGPATH WARN ERR
-}
-
-check_control_panel_logs() {
- __check_container_logs "Control Panel" $CONTROL_PANEL_APP_NAME $CONTROL_PANEL_LOGPATH WARN ERR
-}
-
-check_sdnc_logs() {
- __check_container_logs "SDNC A1 Controller" $SDNC_APP_NAME $SDNC_KARAF_LOG WARN ERROR
-}
-
__check_container_logs() {
dispname=$1
@@ -2262,7 +2002,7 @@
#tmp=$(docker ps | grep $appname)
tmp=$(docker ps -q --filter name=$appname) #get the container id
if [ -z "$tmp" ]; then #Only check logs for running Policy Agent apps
- echo $dispname" is not running, no check made"
+ echo " "$dispname" is not running, no check made"
return
fi
foundentries="$(docker exec -t $tmp grep $warning $logpath | wc -l)"
@@ -2298,7 +2038,7 @@
__print_err "need one arg, <file-prefix>" $@
exit 1
fi
- echo -e $BOLD"Storing all container logs in $TESTLOGS/$ATC using prefix: "$1$EBOLD
+ echo -e $BOLD"Storing all docker/kube container logs and other test logs in $TESTLOGS/$ATC using prefix: "$1$EBOLD
docker stats --no-stream > $TESTLOGS/$ATC/$1_docker_stats.log 2>&1
@@ -2307,68 +2047,21 @@
cp .httplog_${ATC}.txt $TESTLOGS/$ATC/$1_httplog_${ATC}.txt 2>&1
if [ $RUNMODE == "DOCKER" ]; then
- __check_included_image 'CONSUL'
- if [ $? -eq 0 ]; then
- docker logs $CONSUL_APP_NAME > $TESTLOGS/$ATC/$1_consul.log 2>&1
- fi
- __check_included_image 'CBS'
- if [ $? -eq 0 ]; then
- docker logs $CBS_APP_NAME > $TESTLOGS/$ATC/$1_cbs.log 2>&1
- body="$(__do_curl $LOCALHOST_HTTP:$CBS_EXTERNAL_PORT/service_component_all/$POLICY_AGENT_APP_NAME)"
- echo "$body" > $TESTLOGS/$ATC/$1_consul_config.json 2>&1
- fi
-
- __check_included_image 'PA'
- if [ $? -eq 0 ]; then
- docker logs $POLICY_AGENT_APP_NAME > $TESTLOGS/$ATC/$1_policy-agent.log 2>&1
- fi
-
- __check_included_image 'ECS'
- if [ $? -eq 0 ]; then
- docker logs $ECS_APP_NAME > $TESTLOGS/$ATC/$1_ecs.log 2>&1
- fi
-
- __check_included_image 'CP'
- if [ $? -eq 0 ]; then
- docker logs $CONTROL_PANEL_APP_NAME > $TESTLOGS/$ATC/$1_control-panel.log 2>&1
- fi
-
- __check_included_image 'MR'
- if [ $? -eq 0 ]; then
- docker logs $MR_STUB_APP_NAME > $TESTLOGS/$ATC/$1_mr_stub.log 2>&1
- fi
-
- __check_included_image 'DMAAPSMR'
- if [ $? -eq 0 ]; then
- docker logs $MR_DMAAP_APP_NAME > $TESTLOGS/$ATC/$1_mr.log 2>&1
- docker logs $MR_KAFKA_APP_NAME > $TESTLOGS/$ATC/$1_mr_kafka.log 2>&1
- docker logs $MR_ZOOKEEPER_APP_NAME > $TESTLOGS/$ATC/$1_mr_zookeeper.log 2>&1
-
- fi
-
- __check_included_image 'CR'
- if [ $? -eq 0 ]; then
- docker logs $CR_APP_NAME > $TESTLOGS/$ATC/$1_cr.log 2>&1
- fi
-
- __check_included_image 'SDNC'
- if [ $? -eq 0 ]; then
- docker exec -t $SDNC_APP_NAME cat $SDNC_KARAF_LOG> $TESTLOGS/$ATC/$1_SDNC_karaf.log 2>&1
- fi
-
- __check_included_image 'RICSIM'
- if [ $? -eq 0 ]; then
- rics=$(docker ps -f "name=$RIC_SIM_PREFIX" --format "{{.Names}}")
- for ric in $rics; do
- docker logs $ric > $TESTLOGS/$ATC/$1_$ric.log 2>&1
- done
- fi
-
- __check_included_image 'PRODSTUB'
- if [ $? -eq 0 ]; then
- docker logs $PROD_STUB_APP_NAME > $TESTLOGS/$ATC/$1_prodstub.log 2>&1
- fi
+ # Store docker logs for all containers
+ for imagename in $APP_SHORT_NAMES; do
+ __check_included_image $imagename
+ if [ $? -eq 0 ]; then
+ # A function name is created from the app short name
+ # for example app short name 'RICSIM' -> produce the function
+ # name __RICSIM_store_docker_logs
+ # This function is called and is expected to exist in the imported
+ # file for the ricsim test functions
+ # The resulting function impl shall store the docker logs for each container
+ function_pointer="__"$imagename"_store_docker_logs"
+ $function_pointer "$TESTLOGS/$ATC/" $1
+ fi
+ done
fi
if [ $RUNMODE == "KUBE" ]; then
namespaces=$(kubectl get namespaces -o jsonpath='{.items[?(@.metadata.name)].metadata.name}')
@@ -2392,6 +2085,11 @@
__do_curl() {
echo ${FUNCNAME[1]} "line: "${BASH_LINENO[1]} >> $HTTPLOG
curlString="curl -skw %{http_code} $@"
+ if [ $RUNMODE == "KUBE" ]; then
+ if [ ! -z "$CLUSTER_KUBE_PROXY_NODEPORT" ]; then
+ curlString="curl -skw %{http_code} --proxy http://localhost:$CLUSTER_KUBE_PROXY_NODEPORT $@"
+ fi
+ fi
echo " CMD: $curlString" >> $HTTPLOG
res=$($curlString)
echo " RESP: $res" >> $HTTPLOG
diff --git a/test/common/testengine_config.sh b/test/common/testengine_config.sh
new file mode 100644
index 0000000..fd23617
--- /dev/null
+++ b/test/common/testengine_config.sh
@@ -0,0 +1,47 @@
+#!/bin/bash
+
+# ============LICENSE_START===============================================
+# Copyright (C) 2020 Nordix Foundation. All rights reserved.
+# ========================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END=================================================
+#
+
+# List of short names for all supported apps, including simulators etc
+APP_SHORT_NAMES="PA ECS SDNC CP NGW RC RICSIM HTTPPROXY CBS CONSUL DMAAPMR MR CR PRODSTUB KUBEPROXY"
+
+# List of available apps that are built and released by the project
+PROJECT_IMAGES="PA ECS SDNC CP NGW RICSIM RC"
+
+# List of available apps to override with local or remote staging/snapshot/release image
+AVAILABLE_IMAGES_OVERRIDE="PA ECS SDNC CP NGW RICSIM RC"
+
+# List of available apps where the image is built by the test environment
+LOCAL_IMAGE_BUILD="MR CR PRODSTUB"
+
+
+#Integrate a new app into the test environment
+# 1 Choose a short name for the app
+# Note that an app might use more than one image
+# 2 Add the short name to APP_SHORT_NAMES
+# 3 If the image is built and released as part of the project,
+# add the short name to PROJECT_IMAGES
+# 4 If it is possible to override with, for example, a local image,
+# add the short name to AVAILABLE_IMAGES_OVERRIDE
+# This is the default, so normally it shall be possible to override images
+# 5 If the image is built by the test script,
+# add the short name to LOCAL_IMAGE_BUILD
+# Summary:
+# All app short names shall exist in APP_SHORT_NAMES
+# In addition, the app short name shall be added either to both PROJECT_IMAGES and AVAILABLE_IMAGES_OVERRIDE,
+# or only to LOCAL_IMAGE_BUILD
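To make the integration steps above concrete, below is a hypothetical skeleton of the per-app hook functions that testcase_common.sh resolves by name, for an imaginary app with short name FOO. Everything in the sketch - FOO, FOO_IMAGE*, FOO_APP_NAME, the display name "Foo App", the namespace and label values - is an illustrative assumption, not an existing component; the function names and argument lists mirror how setup_testenvironment and the clean/log functions call them.

    # Hypothetical component script sketch for an imaginary app with short name FOO

    __FOO_imagesetup() {
        # arg: <image-tag-suffix> (or "none" if the component decides the suffix)
        __check_and_create_image_var FOO "FOO_IMAGE" "FOO_IMAGE_BASE" "FOO_IMAGE_TAG" $1 "Foo App"
    }

    __FOO_imagepull() {
        # args: <start-arg-possibly-modified-to-local> <original-start-arg>
        __check_and_pull_image $1 "Foo App" $FOO_APP_NAME $FOO_IMAGE
    }

    __FOO_imagebuild() {
        # Only called if FOO is listed in LOCAL_IMAGE_BUILD
        echo -e $YELLOW" No image to build for app FOO"$EYELLOW
    }

    __FOO_image_data() {
        # args: <docker-images-format-string> <file-to-append>
        echo -e "Foo App\t$(docker images --format $1 $FOO_IMAGE)" >> $2
    }

    __FOO_kube_scale_zero() {
        __kube_scale_all_resources $KUBE_SIM_NAMESPACE autotest FOO
    }

    __FOO_kube_scale_zero_and_wait() {
        __kube_scale_and_wait_all_resources $KUBE_SIM_NAMESPACE app foo-app
    }

    __FOO_kube_delete_all() {
        __kube_delete_all_resources $KUBE_SIM_NAMESPACE autotest FOO
    }

    __FOO_store_docker_logs() {
        # args: <log-dir-with-trailing-slash> <file-prefix>
        docker logs $FOO_APP_NAME > $1$2_foo.log 2>&1
    }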