.. This work is licensed under a
.. Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0


Policy Platform Development Tools
#################################

.. contents::
    :depth: 3


This article explains how to build the ONAP Policy Framework for development purposes and how to run stability/performance tests for a variety of components. To start, the developer should consult the latest ONAP Wiki to become familiar with developer best practices and how-tos for setting up their environment, see `Developer Best Practices <https://wiki.onap.org/display/DW/Developer+Best+Practices>`_.


This article assumes that:

* You are using a *\*nix* operating system such as Linux or macOS.
* You are using a directory called *git* off your home directory *(~/git)* for your git repositories.
* Your local Maven repository is in the location *~/.m2/repository*.
* You have added settings to access the ONAP Nexus to your M2 configuration, see `Maven Settings Example <https://wiki.onap.org/display/DW/Setting+Up+Your+Development+Environment>`_ (bottom of the linked page).

The procedure documented in this article has been verified to work on a MacBook laptop running macOS Yosemite Version 10.10.5 and Sierra Version 10.12.6, an HP Z600 desktop running Ubuntu 16.04.3 LTS, and an Ubuntu 16.04 VM.

Cloning All The Policy Repositories
***********************************

Run a script such as the one below to clone the required modules from the `ONAP git repository <https://gerrit.onap.org/r/#/admin/projects/?filter=policy>`_. This script clones all the ONAP Policy Framework repositories.

The ONAP Policy Framework has dependencies on the ONAP Parent *oparent* module, the ONAP ECOMP SDK *ecompsdkos* module, and the A&AI Schema module.


.. code-block:: bash
    :caption: Typical ONAP Policy Framework Clone Script
    :linenos:

    #!/usr/bin/env bash

    ## script name for output
    MOD_SCRIPT_NAME=$(basename "$0")

    ## the ONAP clone directory, defaults to "onap"
    clone_dir="onap"

    ## the ONAP repos to clone
    onap_repos="\
    policy/parent \
    policy/common \
    policy/models \
    policy/docker \
    policy/api \
    policy/pap \
    policy/apex-pdp \
    policy/drools-pdp \
    policy/drools-applications \
    policy/xacml-pdp \
    policy/engine \
    policy/distribution"

    ##
    ## Help screen and exit condition (i.e. too few arguments)
    ##
    Help()
    {
        echo ""
        echo "$MOD_SCRIPT_NAME - clones all required ONAP git repositories"
        echo ""
        echo "  Usage: $MOD_SCRIPT_NAME [-options]"
        echo ""
        echo "  Options"
        echo "    -d - the ONAP clone directory, defaults to 'onap'"
        echo "    -h - this help screen"
        echo ""
        exit 255;
    }

    ##
    ## read command line
    ##
    while [ $# -gt 0 ]
    do
        case $1 in
            #-d ONAP clone directory
            -d)
                shift
                if [ -z "$1" ]; then
                    echo "$MOD_SCRIPT_NAME: no clone directory"
                    exit 1
                fi
                clone_dir=$1
                shift
            ;;

            #-h prints help and exits
            -h)
                Help;;

            *) echo "$MOD_SCRIPT_NAME: undefined CLI option - $1"; exit 255;;
        esac
    done

    if [ -f "$clone_dir" ]; then
        echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as file"
        exit 2
    fi
    if [ -d "$clone_dir" ]; then
        echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as directory"
        exit 2
    fi

    mkdir "$clone_dir"
    if [ $? != 0 ]
    then
        echo "cannot clone ONAP repositories, could not create directory \"$clone_dir\""
        exit 3
    fi

    for repo in $onap_repos
    do
        ## create the parent directory of the repository if needed
        repoDir=$(dirname "$repo")
        if [ -n "$repoDir" ] && [ ! -d "$clone_dir/$repoDir" ]
        then
            mkdir -p "$clone_dir/$repoDir"
            if [ $? != 0 ]
            then
                echo "cannot clone ONAP repositories, could not create directory \"$clone_dir/$repoDir\""
                exit 4
            fi
        fi

        git clone https://gerrit.onap.org/r/${repo} "$clone_dir/$repo"
    done

    echo "ONAP has been cloned into \"$clone_dir\""

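For example, if the script above is saved as *clone_policy.sh* (an illustrative name) in *~/git*, it can be run as follows:

.. code-block:: bash

    cd ~/git
    chmod +x clone_policy.sh
    ./clone_policy.sh -d onap
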
Execution of the script above results in the following directory hierarchy in your *~/git* directory:

* ~/git/onap
* ~/git/onap/policy
* ~/git/onap/policy/parent
* ~/git/onap/policy/common
* ~/git/onap/policy/models
* ~/git/onap/policy/api
* ~/git/onap/policy/pap
* ~/git/onap/policy/docker
* ~/git/onap/policy/drools-applications
* ~/git/onap/policy/drools-pdp
* ~/git/onap/policy/engine
* ~/git/onap/policy/apex-pdp
* ~/git/onap/policy/xacml-pdp
* ~/git/onap/policy/distribution


Building ONAP Policy Framework Components
*****************************************

**Step 1:** Optionally, for a completely clean build, remove the previously built ONAP modules from your local Maven repository.

.. code-block:: bash

    rm -fr ~/.m2/repository/org/onap


**Step 2:** A POM such as the one below can be used to build the ONAP Policy Framework modules. Create the *pom.xml* file in the directory *~/git/onap/policy*.

.. code-block:: xml
    :caption: Typical pom.xml to build the ONAP Policy Framework
    :linenos:

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <groupId>org.onap</groupId>
        <artifactId>onap-policy</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <packaging>pom</packaging>
        <name>${project.artifactId}</name>
        <inceptionYear>2017</inceptionYear>
        <organization>
            <name>ONAP</name>
        </organization>

        <modules>
            <module>parent</module>
            <module>common</module>
            <module>models</module>
            <module>api</module>
            <module>pap</module>
            <module>apex-pdp</module>
            <module>xacml-pdp</module>
            <module>drools-pdp</module>
            <module>drools-applications</module>
            <!-- The engine repo is being deprecated
                 and can be omitted if not working with
                 legacy api and components. -->
            <module>engine</module>
            <module>distribution</module>
        </modules>
    </project>

**Policy Architecture/API Transition**

In Dublin, a new Policy Architecture was introduced. The legacy architecture runs in parallel with the new architecture and will be deprecated after the Frankfurt release.
If the developer is only interested in working with the new architecture components, the *engine* sub-module can be omitted.


**Step 3:** You can now build the Policy Framework:

.. code-block:: bash

    cd ~/git/onap/policy
    mvn clean install

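For a quicker iteration loop during development, the unit tests can be skipped using Maven's standard *skipTests* switch:

.. code-block:: bash

    cd ~/git/onap/policy
    mvn clean install -DskipTests
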

Running the Stability/Performance Tests
***************************************

Policy API component
~~~~~~~~~~~~~~~~~~~~

72 Hours Stability Test of Policy API
+++++++++++++++++++++++++++++++++++++

Introduction
------------

The 72 hour stability test of the policy API has the goal of verifying the stability of the running policy design API REST service by
ingesting a steady flow of policy design API call transactions in a multi-threaded fashion, to simulate the behavior of multiple clients.
All the transaction flows are initiated from a test client server running JMeter for a duration of 72+ hours.

Setup Details
-------------

The stability test is performed on VMs running in the Intel Wind River Lab environment.
There are 2 separate VMs: one runs the API, while the other runs JMeter and the other necessary components, e.g. MariaDB, to simulate a steady flow of transactions.
For simplicity, let's assume:

VM1 will be running JMeter and MariaDB.
VM2 will be running the API REST service and VisualVM.

**Lab Environment**

Intel ONAP Integration and Deployment Labs
`Physical Labs <https://wiki.onap.org/display/DW/Physical+Labs>`_,
`Wind River <https://www.windriver.com/>`_

**API VM Details (VM2)**

OS: Ubuntu 18.04 LTS

CPU: 4 core

RAM: 8 GB

HardDisk: 91 GB

Docker Version: 18.09.8

Java: OpenJDK 1.8.0_212

**JMeter VM Details (VM1)**

OS: Ubuntu 18.04 LTS

CPU: 4 core

RAM: 8 GB

HardDisk: 91 GB

Docker Version: 18.09.8

Java: OpenJDK 1.8.0_212

JMeter: 5.1.1

**Software Installation & Configuration**

**VM1 & VM2 in lab**

**Install Java & Docker**

Make the /etc/hosts entries

.. code-block:: bash

    $ echo $(hostname -I | cut -d\ -f1) $(hostname) | sudo tee -a /etc/hosts

Update the Ubuntu software installer

.. code-block:: bash

    $ sudo apt-get update

Check and install Java

.. code-block:: bash

    $ sudo apt-get install -y openjdk-8-jdk
    $ java -version

Ensure that the Java version executing is OpenJDK version 8

Check and install Docker

.. code-block:: bash

    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    $ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    $ sudo apt-get update
    $ sudo apt-cache policy docker-ce
    $ sudo apt-get install -y docker-ce
    $ systemctl status docker
    $ docker ps

Change the permissions of the Docker socket file

.. code-block:: bash

    $ sudo chmod 777 /var/run/docker.sock

Check the status of the Docker service and ensure it is running correctly

.. code-block:: bash

    $ service docker status
    $ docker ps

**VM1 in lab**

**Install JMeter**

Download & install JMeter

.. code-block:: bash

    $ mkdir jMeter
    $ cd jMeter
    $ wget http://mirrors.whoishostingthis.com/apache//jmeter/binaries/apache-jmeter-5.1.1.zip
    $ unzip apache-jmeter-5.1.1.zip

**Install other necessary components**

Pull the api code & run the setup components script

.. code-block:: bash

    $ cd ~
    $ git clone https://git.onap.org/policy/api
    $ cd api/testsuites/stability/src/main/resources/simulatorsetup
    $ ./setup_components.sh

After installation, make sure the following MariaDB container is up and running

.. code-block:: bash

    ubuntu@test:~/api/testsuites/stability/src/main/resources/simulatorsetup$ docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
    3849ce44b86d        mariadb:10.2.14     "docker-entrypoint.s…"   11 days ago         Up 11 days          0.0.0.0:3306->3306/tcp   mariadb

**VM2 in lab**

**Install policy-api**

Pull the api code & run the setup api script

.. code-block:: bash

    $ cd ~
    $ git clone https://git.onap.org/policy/api
    $ cd api/testsuites/stability/src/main/resources/apisetup
    $ ./setup_api.sh <host ip running api> <host ip running mariadb>

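For example, with hypothetical addresses for the API VM and the MariaDB VM:

.. code-block:: bash

    $ ./setup_api.sh 10.12.6.151 10.12.6.150
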
After installation, make sure the following api container is up and running

.. code-block:: bash

    ubuntu@tools-2:~/api/testsuites/stability/src/main/resources/apisetup$ docker ps
    CONTAINER ID        IMAGE                                                   COMMAND                  CREATED             STATUS              PORTS                                            NAMES
    4f08f9972e55        nexus3.onap.org:10001/onap/policy-api:2.1.1-SNAPSHOT    "bash ./policy-api.sh"   11 days ago         Up 11 days          0.0.0.0:6969->6969/tcp, 0.0.0.0:9090->9090/tcp   policy-api

**Install & configure VisualVM**

VisualVM needs to be installed on the virtual machine that has the API up and running. It will be used to monitor CPU, memory and GC for the API while the stability test is running.

Install VisualVM

.. code-block:: bash

    $ sudo apt-get install visualvm

Run a few commands to configure permissions

.. code-block:: bash

    $ cd /usr/lib/jvm/java-8-openjdk-amd64/bin/
    $ sudo touch visualvm.policy
    $ sudo chmod 777 visualvm.policy

    $ vi visualvm.policy

Add the following in visualvm.policy

.. code-block:: bash

    grant codebase "file:/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar" {
        permission java.security.AllPermission;
    };

Run the following commands to start jstatd using port 1111

.. code-block:: bash

    $ cd /usr/lib/jvm/java-8-openjdk-amd64/bin/
    $ ./jstatd -p 1111 -J-Djava.security.policy=visualvm.policy &

**Local Machine**

**Run & configure VisualVM**

Run VisualVM by typing

.. code-block:: bash

    $ jvisualvm

Connect to jstatd & the remote policy-api JVM

1. Right click on "Remote" in the left panel of the screen and select "Add Remote Host..."
2. Enter the IP address of VM2 (running policy-api)
3. Right click on the IP address, select "Add JMX Connection..."
4. Enter the VM2 IP address (from step 2) as <IP address>:9090 (for example, 10.12.6.151:9090) and click OK.
5. Double click on the newly added nodes under "Remote" to start monitoring CPU, memory & GC.

Sample screenshot of VisualVM

.. image:: images/results-5.png

Test Plan
---------

The 72+ hour stability test runs the following steps sequentially in multi-threaded loops.
The thread number is set to 5 to simulate the behavior of 5 API clients (they can be calling the same policy CRUD API simultaneously).

**Setup Thread (will be running only once)**

- Get policy-api Healthcheck
- Get API Counter Statistics
- Get Preloaded Policy Types

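For reference, the health check step corresponds to a REST call of the following form (a sketch; the credentials shown are the common ONAP policy defaults and may differ in your deployment):

.. code-block:: bash

    $ curl -k -u 'healthcheck:zb!XztG34' https://<api-vm-ip>:6969/policy/api/v1/healthcheck
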
**API Test Flow (5 threads running the same steps in the same loop)**

- Create a new TCA Policy Type with Version 1.0.0
- Create a new TCA Policy Type with Version 2.0.0
- Create a new TCA Policy Type with Version 3.0.0
- Create a new TCA Policy Type with Version 4.0.0
- Create a new TCA Policy Type with Version 5.0.0
- Create a new TCA Policy Type with Version 6.0.0
- Create a new TCA Policy Type with Version 7.0.0
- Create a new TCA Policy Type with Version 8.0.0
- Create a new TCA Policy Type with Version 9.0.0
- Create a new TCA Policy Type with Version 10.0.0
- Create a new TCA Policy Type with Version 11.0.0
- A 10 sec timer
- Get All Existing Policy Types
- Get All Existing Versions of the New TCA Policy Type
- Get Version 1.0.0 of the New TCA Policy Type
- Get Version 2.0.0 of the New TCA Policy Type
- Get Version 3.0.0 of the New TCA Policy Type
- Get Version 4.0.0 of the New TCA Policy Type
- Get Version 5.0.0 of the New TCA Policy Type
- Get Version 6.0.0 of the New TCA Policy Type
- Get Version 7.0.0 of the New TCA Policy Type
- Get Version 8.0.0 of the New TCA Policy Type
- Get Version 9.0.0 of the New TCA Policy Type
- Get Version 10.0.0 of the New TCA Policy Type
- Get Version 11.0.0 of the New TCA Policy Type
- Get the Latest Version of the New TCA Policy Type
- A 10 sec timer
- Create a New TCA Policy with Version 1.0.0 over the New TCA Policy Type Version 2.0.0
- Create a New TCA Policy with Version 2.0.0 over the New TCA Policy Type Version 2.0.0
- Create a New TCA Policy with Version 3.0.0 over the New TCA Policy Type Version 2.0.0
- Create a New TCA Policy with Version 4.0.0 over the New TCA Policy Type Version 2.0.0
- Create a New TCA Policy with Version 5.0.0 over the New TCA Policy Type Version 2.0.0
- Create a New TCA Policy with Version 6.0.0 over the New TCA Policy Type Version 2.0.0
- Create a New TCA Policy with Version 7.0.0 over the New TCA Policy Type Version 2.0.0
- Create a New TCA Policy with Version 8.0.0 over the New TCA Policy Type Version 2.0.0
- Create a New TCA Policy with Version 9.0.0 over the New TCA Policy Type Version 2.0.0
- Create a New TCA Policy with Version 10.0.0 over the New TCA Policy Type Version 2.0.0
- Create a New TCA Policy with Version 11.0.0 over the New TCA Policy Type Version 2.0.0
- A 10 sec timer
- Get All Existing TCA Policies
- Get All Existing Versions of TCA Policies
- Get Version 1.0.0 of the New TCA Policy
- Get Version 2.0.0 of the New TCA Policy
- Get Version 3.0.0 of the New TCA Policy
- Get Version 4.0.0 of the New TCA Policy
- Get Version 5.0.0 of the New TCA Policy
- Get Version 6.0.0 of the New TCA Policy
- Get Version 7.0.0 of the New TCA Policy
- Get Version 8.0.0 of the New TCA Policy
- Get Version 9.0.0 of the New TCA Policy
- Get Version 10.0.0 of the New TCA Policy
- Get Version 11.0.0 of the New TCA Policy
- Get the Latest Version of the New TCA Policy
- A 10 sec timer
- Create a New Guard Policy with Version 1
- Create a New Guard Policy with Version 5
- Create a New Guard Policy with Version 9
- Create a New Guard Policy with Version 12
- A 10 sec timer
- Get Version 1 of the New Guard Policy
- Get Version 5 of the New Guard Policy
- Get Version 9 of the New Guard Policy
- Get Version 12 of the New Guard Policy
- Get the Latest Version of the New Guard Policy
- A 10 sec timer

**TearDown Thread (will only be running after API Test Flow is completed)**

- Delete Version 2.0.0 of the New TCA Policy Type (expected to return 409 - Conflict)
- Delete Version 3.0.0 of the New TCA Policy Type
- Delete Version 4.0.0 of the New TCA Policy Type
- Delete Version 5.0.0 of the New TCA Policy Type
- Delete Version 6.0.0 of the New TCA Policy Type
- Delete Version 7.0.0 of the New TCA Policy Type
- Delete Version 8.0.0 of the New TCA Policy Type
- Delete Version 9.0.0 of the New TCA Policy Type
- Delete Version 10.0.0 of the New TCA Policy Type
- Delete Version 11.0.0 of the New TCA Policy Type
- Delete Version 1.0.0 of the New TCA Policy
- Delete Version 2.0.0 of the New TCA Policy
- Delete Version 3.0.0 of the New TCA Policy
- Delete Version 4.0.0 of the New TCA Policy
- Delete Version 5.0.0 of the New TCA Policy
- Delete Version 6.0.0 of the New TCA Policy
- Delete Version 7.0.0 of the New TCA Policy
- Delete Version 8.0.0 of the New TCA Policy
- Delete Version 9.0.0 of the New TCA Policy
- Delete Version 10.0.0 of the New TCA Policy
- Delete Version 11.0.0 of the New TCA Policy
- Re-Delete Version 2.0.0 of the New TCA Policy Type (will return 200 now since all TCA policies created over it have been deleted)
- Delete Version 1 of the new Guard Policy
- Delete Version 5 of the new Guard Policy
- Delete Version 9 of the new Guard Policy
- Delete Version 12 of the new Guard Policy

Run Test
--------

**Local Machine**

Connect to the lab VPN

.. code-block:: bash

    $ sudo openvpn --config <path to lab ovpn key file>

SSH into the JMeter VM (VM1)

.. code-block:: bash

    $ ssh -i <path to lab ssh key file> ubuntu@<host ip of JMeter VM>

Run the JMeter test in the background for 72+ hours

.. code-block:: bash

    $ mkdir s3p
    $ nohup ./jMeter/apache-jmeter-5.1.1/bin/jmeter.sh -n -t ~/api/testsuites/stability/src/main/resources/testplans/policy_api_stability.jmx &

(Optional) Monitor the JMeter test running in the background (anytime after re-logging into the JMeter VM - VM1)

.. code-block:: bash

    $ tail -f s3p/stability.log nohup.out


Test Results
------------

**Summary**

The policy API stability test plan was triggered and ran for 72+ hours without any errors occurring.

**Test Statistics**

======================= ============= =========== =============================== =============================== ===============================
**Total # of requests** **Success %** **Error %** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request**
======================= ============= =========== =============================== =============================== ===============================
49723                   100%          0%          86 ms                           4 ms                            795 ms
======================= ============= =========== =============================== =============================== ===============================

**VisualVM Results**

.. image:: images/results-5.png
.. image:: images/results-6.png

**JMeter Results**

.. image:: images/results-1.png
.. image:: images/results-2.png
.. image:: images/results-3.png
.. image:: images/results-4.png



Performance Test of Policy API
++++++++++++++++++++++++++++++

Introduction
------------

The performance test of policy-api has the goal of testing the min/avg/max processing time and REST call throughput for all requests when the number of requests is large enough to saturate the resources, in order to find the bottleneck.

Setup Details
-------------

The performance test is performed on an OOM-based deployment of the ONAP Policy Framework components in the Intel Wind River Lab environment.
In addition, we use another VM with JMeter installed to generate the transactions.
The JMeter VM sends a large number of REST requests to the policy-api component and collects the statistics.
The policy-api component already knows how to communicate with the MariaDB component if the OOM-based deployment is working correctly.

Test Plan
---------

The performance test plan is the same as the stability test plan above.
The only differences are that, in the performance test, the number of threads is increased to 20 (simulating the behavior of 20 users at the same time) and the test time is reduced to 1 hour.

Run Test
--------

Running/Triggering the performance test is the same as for the stability test. That is, launch JMeter pointing to the corresponding *.jmx* test plan. The *API_HOST* and *API_PORT* are already set up in the *.jmx* file.

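For example, a minimal headless launch would look like the following (a sketch; the path assumes the performance test suite mirrors the stability layout in the policy/api repository):

.. code-block:: bash

    $ nohup ./jMeter/apache-jmeter-5.1.1/bin/jmeter.sh -n -t ~/api/testsuites/performance/src/main/resources/testplans/policy_api_performance.jmx &
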
Test Results
------------

The test results are shown below. Overall, the test ran smoothly and successfully. We do see some minor failed transactions, especially in POST calls which write into the DB simultaneously in a multi-threaded fashion. All GET calls (reading from the DB) succeeded.

.. image:: images/summary-1.png
.. image:: images/summary-2.png
.. image:: images/summary-3.png
.. image:: images/result-1.png
.. image:: images/result-2.png
.. image:: images/result-3.png
.. image:: images/result-4.png
.. image:: images/result-5.png
.. image:: images/result-6.png


Policy PAP component
~~~~~~~~~~~~~~~~~~~~

72 Hours Stability Test of PAP
++++++++++++++++++++++++++++++

Introduction
------------

The 72 hour stability test for PAP has the goal of introducing a steady flow of transactions initiated from a test client server running JMeter for a duration of 72 hours.

Setup details
-------------

The stability test is performed on VMs running in an OpenStack cloud environment.

There are 2 separate VMs: one for running PAP and the other for running JMeter to simulate a steady flow of transactions.

All the dependencies, like MariaDB, the DMaaP simulator, the PDP simulator and the policy/api component, are installed on the VM that has JMeter.

For simplicity, let's assume:

VM1 will be running JMeter, MariaDB, the DMaaP simulator, the PDP simulator & the API component.

VM2 will be running only the PAP component.

**OpenStack environment details**

Version: Mitaka

**PAP VM details (VM2)**

OS: Ubuntu 16.04 LTS

CPU: 4 core

RAM: 4 GB

HardDisk: 40 GB

Docker Version: 18.09.6

Java: openjdk version "1.8.0_212"

**JMeter VM details (VM1)**

OS: Ubuntu 16.04 LTS

CPU: 4 core

RAM: 4 GB

HardDisk: 40 GB

Docker Version: 18.09.6

Java: openjdk version "1.8.0_212"

JMeter: 5.1.1

Install Docker in VM1 & VM2
---------------------------

Make sure to execute the commands below on both VM1 & VM2.

Make the /etc/hosts entries

.. code-block:: bash

    $ echo $(hostname -I | cut -d\ -f1) $(hostname) | sudo tee -a /etc/hosts

Make the DNS entries

.. code-block:: bash

    $ echo "nameserver <PrimaryDNSIP>" >> /etc/resolvconf/resolv.conf.d/head
    $ echo "nameserver <SecondaryDNSIP>" >> /etc/resolvconf/resolv.conf.d/head
    $ resolvconf -u

Update the Ubuntu software installer

.. code-block:: bash

    $ apt-get update

Check and install Java

.. code-block:: bash

    $ apt-get install -y openjdk-8-jdk
    $ java -version

Ensure that the Java version that is executing is OpenJDK version 8


Check and install Docker

.. code-block:: bash

    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    $ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    $ apt-get update
    $ apt-cache policy docker-ce
    $ apt-get install -y docker-ce
    $ systemctl status docker
    $ docker ps

Change the permissions of the Docker socket file

.. code-block:: bash

    $ chmod 777 /var/run/docker.sock

Check the status of the Docker service and ensure it is running correctly

.. code-block:: bash

    $ service docker status
    $ docker ps

Install JMeter in VM1
---------------------

Download & install JMeter

.. code-block:: bash

    $ mkdir jMeter
    $ cd jMeter
    $ wget http://mirrors.whoishostingthis.com/apache//jmeter/binaries/apache-jmeter-5.1.1.zip
    $ unzip apache-jmeter-5.1.1.zip

Run JMeter

.. code-block:: bash

    $ /home/ubuntu/jMeter/apache-jmeter-5.1.1/bin/jmeter

The above command will load the JMeter UI. Then navigate to File → Open → Browse and select the test plan jmx file to open.
The jmx file is present in the policy/pap git repository.

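Alternatively, once the test plan is configured, it can be launched headless from the command line (a sketch; the jmx path is illustrative and depends on where the plan from the policy/pap repository was copied):

.. code-block:: bash

    $ /home/ubuntu/jMeter/apache-jmeter-5.1.1/bin/jmeter -n -t ~/<test plan>.jmx -l ~/stability.log
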
Install simulators in VM1
-------------------------

To install the simulators, there is a script at `install simulator script <https://gerrit.onap.org/r/gitweb?p=policy/pap.git;a=blob;f=testsuites/stability/src/main/resources/simulatorsetup/setup_components.sh;h=86de3c1efcb468431a2395eef610db209a613fc3;hb=refs/heads/master>`_

Copy the script & all related files to the virtual machine and run it.

After installation, make sure that the following 4 docker containers are up and running.

.. code-block:: bash

    root@policytest-policytest-3-p5djn6as2477:/home/ubuntu/simulator# docker ps
    CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS              PORTS                    NAMES
    887efa8dac12        nexus3.onap.org:10001/onap/policy-api   "bash ./policy-api.sh"   6 days ago          Up 6 days           0.0.0.0:6969->6969/tcp   policy-api
    0a931c0a63ac        pdp/simulator:latest                    "bash pdp-sim.sh"        6 days ago          Up 6 days                                    pdp-simulator
    a41adcb32afb        dmaap/simulator:latest                  "bash dmaap-sim.sh"      6 days ago          Up 6 days           0.0.0.0:3904->3904/tcp   dmaap-simulator
    d52d6b750ba0        mariadb:10.2.14                         "docker-entrypoint.s…"   6 days ago          Up 6 days           0.0.0.0:3306->3306/tcp   mariadb

Install PAP in VM2
------------------

To install PAP, there is a script at `install pap script <https://gerrit.onap.org/r/gitweb?p=policy/pap.git;a=blob;f=testsuites/stability/src/main/resources/papsetup/setup_pap.sh;h=dc5e69e76da9f48f6b23cc012e14148f1373d1e1;hb=refs/heads/master>`_

Copy the script & all related files to the virtual machine and run it.

After installation, make sure that the following docker container is up and running.

.. code-block:: bash

    root@policytest-policytest-0-uc3y2h5x6p4j:/home/ubuntu/pap# docker ps
    CONTAINER ID        IMAGE                                                         COMMAND                  CREATED             STATUS              PORTS                                            NAMES
    42ac0ed4b713        nexus3.onap.org:10001/onap/policy-pap:2.0.0-SNAPSHOT-latest   "bash ./policy-pap.sh"   3 days ago          Up 3 days           0.0.0.0:6969->6969/tcp, 0.0.0.0:9090->9090/tcp   policy-pap

Install & configure VisualVM in VM2
-----------------------------------

VisualVM needs to be installed on the virtual machine that has PAP. It will be used to monitor CPU, memory and GC for PAP while the stability test is running.

Install VisualVM

.. code-block:: bash

    $ sudo apt-get install visualvm

Run a few commands to configure permissions

.. code-block:: bash

    $ cd /usr/lib/jvm/java-8-openjdk-amd64/bin/
    $ sudo touch visualvm.policy
    $ sudo chmod 777 visualvm.policy

    $ vi visualvm.policy

Add the following in visualvm.policy

.. code-block:: bash

    grant codebase "file:/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar" {
        permission java.security.AllPermission;
    };

Run the following commands to start jstatd using port 1111

.. code-block:: bash

    $ cd /usr/lib/jvm/java-8-openjdk-amd64/bin/
    $ ./jstatd -p 1111 -J-Djava.security.policy=visualvm.policy &

Run VisualVM locally to connect to the remote VM2

.. code-block:: bash

    # On your Windows machine or your Linux box locally, launch VisualVM

Connect to jstatd & the remote policy-pap JVM

1. Right click on "Remote" in the left panel of the screen and select "Add Remote Host..."
2. Enter the IP address of VM2.
3. Right click on the IP address, select "Add JMX Connection..."
4. Enter the VM2 IP address (from step 2) as <IP address>:9090 (for example, 10.12.6.201:9090) and click OK.
5. Double click on the newly added nodes under "Remote" to start monitoring CPU, memory & GC.

Sample screenshot of VisualVM

.. image:: images/pap-s3p-vvm-sample.png

Test Plan
---------

The 72 hour stability test will run the following steps sequentially in a single threaded loop.

- **Create Policy Type** - creates an operational policy type using the policy/api component
- **Create Policy** - creates an operational policy, using the policy type created in the step above, using the policy/api component
- **Check Health** - checks the health status of PAP
- **Check Statistics** - checks the statistics of PAP
- **Change state to ACTIVE** - changes the state of the PdpGroup to ACTIVE
- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that the PdpGroup is in the ACTIVE state
- **Deploy Policy** - deploys the policy in the PdpGroup
- **Undeploy Policy** - undeploys the policy from the PdpGroup
- **Change state to PASSIVE** - changes the state of the PdpGroup to PASSIVE
- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that the PdpGroup is in the PASSIVE state
- **Delete Policy** - deletes the operational policy using the policy/api component
- **Delete Policy Type** - deletes the operational policy type using the policy/api component

The following elements can be used to configure the parameters of the test plan.

- **HTTP Authorization Manager** - used to store user/password authentication details.
- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests.
- **User Defined Variables** - used to store the following user defined parameters.

========== ===============================================
**Name**   **Description**
========== ===============================================
PAP_HOST   IP address or host name of the PAP component
PAP_PORT   Port number of PAP for making REST API calls
API_HOST   IP address or host name of the API component
API_PORT   Port number of API for making REST API calls
========== ===============================================

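For reference, the health check step corresponds to a REST call of the following form (a sketch; the credentials shown are the common ONAP policy defaults and may differ in your deployment):

.. code-block:: bash

    $ curl -k -u 'healthcheck:zb!XztG34' https://<PAP_HOST>:<PAP_PORT>/policy/pap/v1/healthcheck
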
Screenshot of the PAP stability test plan

.. image:: images/pap-s3p-testplan.png

Test Results
------------

**Summary**

The stability test plan was triggered and ran for 72 hours.

**Test Statistics**

======================= ================= ================== ==================================
**Total # of requests** **Success %**     **Error %**        **Average time taken per request**
======================= ================= ================== ==================================
178208                  100 %             0 %                76 ms
======================= ================= ================== ==================================

**VisualVM Screenshot**

.. image:: images/pap-s3p-vvm-1.png
.. image:: images/pap-s3p-vvm-2.png

**JMeter Screenshot**

.. image:: images/pap-s3p-jm-1.png


Policy XACML PDP component
~~~~~~~~~~~~~~~~~~~~~~~~~~

Performance Test of Policy XACML PDP
++++++++++++++++++++++++++++++++++++

Summary
-------

The performance test was executed by performing requests against the policy RESTful APIs residing on the XACML PDP installed in the Wind River lab, to get policy decisions for monitoring and guard policy types. This was running on a Kubernetes host having the following configuration:

- 16GB RAM
- 8 VCPU
- 160GB Disk

The performance test runs 10 simultaneous threads calling the XACML PDP RESTful APIs to get decisions for the Monitoring, Guard Min Max, and Guard Frequency Limiter policy types, with a duration of 6000 seconds. The test execution lasted approximately 50 minutes, resulting in the following summary:

- 37,305 Healthcheck requests
- 33,716 Statistics requests
- 25,294 Monitoring decision requests
- 25,288 Guard Min Max decisions
- 25,286 Guard Frequency Limiter requests

The average throughput was about 9.8 transactions per second. CPU and memory usage, along with a screenshot of the JMeter Summary Report, are provided in this document.

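For reference, a monitoring decision request of the kind issued by the test threads looks roughly like this (a sketch; the credentials, host and policy id are placeholders, and the body follows the general decision API schema):

.. code-block:: bash

    $ curl -k -u '<user>:<pass>' -X POST -H 'Content-Type: application/json' \
        -d '{"ONAPName": "DCAE", "ONAPComponent": "PolicyHandler", "ONAPInstance": "0", "action": "configure", "resource": {"policy-id": "<policy id>"}}' \
        https://<xacml-pdp-host>:6969/policy/pdpx/v1/decision
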
Results
-------

**CPU Utilization**

Total CPU used by the PDP was measured before and after the test, using "ps -l".

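A minimal sketch of that measurement (the process selector is an assumption; adjust it to match how the PDP JVM runs on your host):

.. code-block:: bash

    # Show cumulative CPU time (TIME column) for the PDP's Java process
    $ ps -l -C java
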
==================== ================== ================= ==================== =============== ==================
**Initial CPU time** **Final CPU time** **Initial CPU %** **Initial Memory %** **Final CPU %** **Final Memory %**
==================== ================== ================= ==================== =============== ==================
00:60:27             00:73:45           3.5%              4.0%                 94.12.3%        4.0%
==================== ================== ================= ==================== =============== ==================

**Memory Utilization**

.. code-block:: bash

    Number of young garbage collections used during the test: 518
    Avg. young garbage collection time: ~11.56ms per collection
    Total number of full garbage collections: 32
    Avg. full garbage collection time: ~315.06ms per collection


    S0C      S1C      S0U    S1U     EC        EU        OC         OU         MC       MU       CCSC    CCSU    YGC    YGCT     FGC   FGCT     GCT

    16768.0  16768.0  0.0    5461.0  134144.0  71223.6   334692.0   138734.5   50008.0  48955.8  5760.0  5434.3  4043   45.793   32    10.082   55.875

    16768.0  16768.0  0.0    4993.4  134144.0  66115.7   334692.0   252887.4   50264.0  49036.5  5760.0  5439.7  4561   53.686   32    10.082   63.768

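The GC figures above are in the column format produced by the JDK's *jstat* utility; a sketch of how such samples can be captured (the PID is a placeholder):

.. code-block:: bash

    # Sample JVM garbage collection statistics for the PDP process every 10 seconds
    $ jstat -gc <pdp-java-pid> 10000
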
**JMeter Results Summary**

.. image:: images/xacml-s3p.PNG


Policy Drools PDP component
~~~~~~~~~~~~~~~~~~~~~~~~~~~


Policy APEX PDP component
~~~~~~~~~~~~~~~~~~~~~~~~~

Setting up Stability Tests in APEX
++++++++++++++++++++++++++++++++++

Introduction
------------

The 72 hour stability test for apex-pdp has the goal of introducing a steady flow of transactions initiated from a test client server running JMeter. The PDP is configured to start a REST server inside it, take input from REST clients (JMeter) and send output back to the REST clients (JMeter).

The input events are submitted through the REST interface of apex-pdp and the results are verified using the REST responses coming out of apex-pdp.

The test is performed in a multi-threaded environment where 20 threads running in JMeter keep sending events to apex-pdp every 500 milliseconds for a duration of 72 hours.

Setup details
-------------

The stability test is performed on VMs running in an OpenStack cloud environment. There are 2 separate VMs: one for running apex-pdp and the other for running JMeter to simulate a steady flow of transactions.

**OpenStack environment details**

Version: Mitaka

**apex-pdp VM details**

OS: Ubuntu 16.04.5 LTS

CPU: 4 core

RAM: 4 GB

HardDisk: 40 GB

Docker Version: 18.06.1-ce, build e68fc7a

Java: openjdk version "1.8.0_181"

**JMeter VM details**

OS: Ubuntu 16.04.3 LTS

CPU: 4 core

RAM: 4 GB

HardDisk: 40 GB

Java: openjdk version "1.8.0_181"

JMeter: 5.1.1

Install JMeter in virtual machine
---------------------------------

Make the /etc/hosts entries

.. code-block:: bash

    echo $(hostname -I | cut -d\ -f1) $(hostname) | sudo tee -a /etc/hosts

Make the DNS entries

.. code-block:: bash

    echo "nameserver <PrimaryDNSIP>" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
    echo "nameserver <SecondaryDNSIP>" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
    sudo resolvconf -u

Update the Ubuntu software installer

.. code-block:: bash

    sudo apt-get update

Check & install Java

.. code-block:: bash

    sudo apt-get install -y openjdk-8-jdk
    java -version

Download & install JMeter

.. code-block:: bash

    mkdir jMeter
    cd jMeter
    wget http://mirrors.whoishostingthis.com/apache//jmeter/binaries/apache-jmeter-5.1.1.zip
    unzip apache-jmeter-5.1.1.zip

Install apex-pdp in virtual machine
-----------------------------------

We will be running apex-pdp as a docker container, so we need to first install docker and then create the container hosting apex-pdp by pulling the image from the ONAP repository.

**Docker Installation**

1. Make the /etc/hosts entries

.. code-block:: bash

    echo $(hostname -I | cut -d\ -f1) $(hostname) | sudo tee -a /etc/hosts

2. Make the DNS entries

.. code-block:: bash

    echo "nameserver <PrimaryDNSIP>" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
    echo "nameserver <SecondaryDNSIP>" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
    sudo resolvconf -u

3. Update the Ubuntu software installer

.. code-block:: bash

    sudo apt-get update

4. Check and install Java

.. code-block:: bash

    sudo apt-get install -y openjdk-8-jdk
    java -version

Ensure that the Java version that is executing is OpenJDK version 8

5. Check and install docker

.. code-block:: bash

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt-get update
    sudo apt-cache policy docker-ce
    sudo apt-get install -y docker-ce
    systemctl status docker
    docker ps

6. Change the permissions of the Docker socket file

.. code-block:: bash

    sudo chmod 777 /var/run/docker.sock

7. Check the status of the Docker service and ensure it is running correctly

.. code-block:: bash

    service docker status
    docker ps

**Install apex-pdp**

Run the command below to create the container hosting apex-pdp by pulling the image from the ONAP repository.

.. code-block:: bash

    docker run -d --name apex -p 12561:12561 -p 23324:23324 -it nexus3.onap.org:10001/onap/policy-apex-pdp:2.1.0-latest /bin/bash -c "/opt/app/policy/apex-pdp/bin/apexApps.sh jmx-test -c /opt/app/policy/apex-pdp/examples/config/SampleDomain/RESTServerJsonEvent.json"
    docker ps

Note: If you observe that requests from the JMeter client are failing due to timeouts, modify the "RESTServerJsonEvent.json" file mentioned in the above command and increase the "synchronousTimeout" property as needed.

Install & Configure VisualVM
----------------------------

VisualVM needs to be installed on the virtual machine that has apex-pdp. It will be used to monitor CPU, memory and GC for apex-pdp while the stability test is running.

Install VisualVM

.. code-block:: bash

    sudo apt-get install visualvm

Log in to the docker container (as root)

.. code-block:: bash

    docker exec -u 0 -it apex /bin/bash

Run a few commands to configure permissions

.. code-block:: bash

    cd /usr/lib/jvm/java-1.8-openjdk/bin/
    touch visualvm.policy
    vi visualvm.policy

Add the following in visualvm.policy

.. code-block:: bash

    grant codebase "file:/usr/lib/jvm/java-1.8-openjdk/lib/tools.jar" {
        permission java.security.AllPermission;
    };

Then set the file permissions and leave the container

.. code-block:: bash

    chmod 777 visualvm.policy
    exit

Log in to the docker container (as a normal user)

.. code-block:: bash

    docker exec -it apex /bin/bash

Run the following commands to start jstatd using port 1111

.. code-block:: bash

    cd /usr/lib/jvm/java-1.8-openjdk/bin/
    ./jstatd -p 1111 -J-Djava.security.policy=visualvm.policy &
    exit

Log in to the VM using a graphical interface in a separate terminal window.

.. code-block:: bash

    ssh -X <user>@<VM-IP-ADDRESS>

Open VisualVM

.. code-block:: bash

    visualvm &

Connect to jstatd & the remote apex-pdp JVM

1. Right click on "Remote" in the left panel of the screen and select "Add Remote Host..."

2. Enter the IP address of the apex-pdp docker container.

.. code-block:: bash

    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id

3. Right click on the IP address, select "Add jstatd Connection..."
4. In the "jstatd Connections" tab, enter port 1111 and click OK.
5. Right click on the IP address, select "Add JMX Connection..."
6. Enter the apex-pdp docker container IP address (from step 2) as <IP address>:9911 (for example, 172.17.0.2:9911) and click OK.
7. Double click on the newly added nodes under "Remote" to start monitoring CPU, memory & GC.

Sample screenshot of VisualVM

.. image:: images/apex-s3p-vvm-sample.jpg

Test Plan
---------

The 72 hour stability test will run the following steps in a 20-thread loop.

- **Send Input Event** - sends an input message to the REST interface of apex-pdp.
- **Assert Response Code** - asserts the response code coming from apex-pdp.
- **Assert Response Message** - asserts the response message coming from apex-pdp.

The following elements can be used to configure the parameters of the test plan.

- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests.
- **HTTP Request Defaults** - used to store HTTP request details like Server Name or IP, Port, Protocol etc.
- **User Defined Variables** - used to store the following user defined parameters.

================== ============================================================================ =================
**Name**           **Description**                                                              **Default Value**
================== ============================================================================ =================
wait               Wait time after each request (in milliseconds)                               500
threads            Number of threads to run test cases in parallel.                             20
threadsTimeOutInMs Synchronization timer for threads running in parallel (in milliseconds).     5000
================== ============================================================================ =================


Download, and update, the jmx file present in the apex-pdp git repository - `jmx file path <https://gerrit.onap.org/r/gitweb?p=policy/apex-pdp.git;a=tree;f=testsuites/apex-pdp-stability/src/main/resources;h=99d373033a190a690d4e05012bc3a656cae7bc3f;hb=refs/heads/master>`_.

- HTTPSampler.domain - The IP address of the VM on which the apex container is running
- HTTPSampler.port - The listening port, here 23324
- ThreadGroup.duration - Set the duration to 72 hours (in seconds)

Use the CLI mode to start the test

.. code-block:: bash

    ./jmeter.sh -n -t ~/apexPdpStabilityTestPlan.jmx -Jusers=1 -l ~/stability.log

Stability Test Result
---------------------

**Summary**

The stability test plan was triggered for 72 hours, injecting input events to apex-pdp from 20 client threads running in JMeter.

After the test stops, we can generate an HTML test report via the command

.. code-block:: bash

    ~/jMeter/apache-jmeter-5.1.1/bin/jmeter -g stability.log -o ./result/

============================================== =================================================== ================================ ============= ===========
**Number of Client Threads running in JMeter** **Number of Server Threads running in Apex engine** **Total number of input events** **Success %** **Error %**
============================================== =================================================== ================================ ============= ===========
20                                             4                                                   6394602                          99.999971%    0.0029%
============================================== =================================================== ================================ ============= ===========

:download:`result.zip <zip/result.zip>`
:download:`onap.zip <zip/onap.zip>`


Setting up Performance Tests in APEX
++++++++++++++++++++++++++++++++++++

The apex-pdp has built-in support for performance testing. A special performance testing REST server is available in the code base for performance testing.
It is in the module `performance-benchmark-test <https://github.com/onap/policy-apex-pdp/tree/master/testsuites/performance/performance-benchmark-test>`_.
To execute a benchmark test, you start the REST server, and then configure and run APEX against the server.
There are example configurations for running tests in the `resources of this module <https://github.com/onap/policy-apex-pdp/tree/master/testsuites/performance/performance-benchmark-test/src/main/resources/examples/benchmark>`_.

In order to run the test for 72 hours, set the batch count in the `EventGeneratorConfig.json <https://github.com/onap/policy-apex-pdp/blob/master/testsuites/performance/performance-benchmark-test/src/main/resources/examples/benchmark/EventGeneratorConfig.json>`_ file to zero, which causes the REST server to generate batches forever.

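A minimal sketch of that edit from the shell (assuming the batch count field in the example file is named *batchCount*):

.. code-block:: bash

    # Set the batch count to 0 so that batches are generated forever
    sed -i 's/"batchCount"[[:space:]]*:[[:space:]]*[0-9]*/"batchCount": 0/' EventGeneratorConfig.json
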
Here is an example of how to do this:

1. Clone and build the apex-pdp git repo

2. Go into the performance-benchmark-test module and run the REST server

.. code-block:: bash

    cd testsuites/performance/performance-benchmark-test
    mvn exec:java -Dexec.mainClass="org.onap.policy.apex.testsuites.performance.benchmark.eventgenerator.EventGenerator" -Dexec.args="-c src/main/resources/examples/benchmark/EventGeneratorConfig.json"

3. Separately, create a local directory and unzip the APEX tarball

.. code-block:: bash

    mkdir apex
    cd apex
    tar zxvf ~/git/onap/policy/apex-pdp/packages/apex-pdp-package-full/target/*gz

4. Run APEX with a configuration that runs against the benchmark REST server. Select the configuration with the number of threads appropriate for the number of cores on the host on which APEX is running. For example, on a 32 core machine, select the "32" configuration; on an 8 core machine, select the "08" configuration.

.. code-block:: bash

    bin/apexApps.sh engine -c ~/git/onap/policy/apex-pdp/testsuites/performance/performance-benchmark-test/src/main/resources/examples/benchmark/Javascript64.json

5. To get the test results, issue the following command using curl or from a browser. (The results can also be stored in a file by setting *outfile* in the `EventGeneratorConfig.json <https://github.com/onap/policy-apex-pdp/blob/master/testsuites/performance/performance-benchmark-test/src/main/resources/examples/benchmark/EventGeneratorConfig.json>`_ file; statistics will be written to that file after the event generator terminates.)

.. code-block:: bash

    curl http://localhost:32801/EventGenerator/Stats

The results are similar to those below:

:download:`Example APEX performance metrics <json/example-apex-perf.json>`

Performance Test Result
-----------------------

**Summary**

The performance test was triggered for 2 hours on a 4 core, 4 GB RAM virtual machine.

**Test Statistics**

:download:`Attached result log <json/result.json>`

=============== ============= ================= ============== ===================== ================== ============= ===========
**batchNumber** **batchSize** **eventsNotSent** **eventsSent** **eventsNotReceived** **eventsReceived** **Success %** **Error %**
=============== ============= ================= ============== ===================== ================== ============= ===========
3650            182500        0                 182500         0                     182500             100 %         0 %
=============== ============= ================= ============== ===================== ================== ============= ===========

======================== ========================= ========================
**averageRoundTripNano** **shortestRoundTripNano** **longestRoundTripNano**
======================== ========================= ========================
40024623                 7439158                   5161374486
======================== ========================= ========================

============================ ============================= ============================
**averageApexExecutionNano** **shortestApexExecutionNano** **longestApexExecutionNano**
============================ ============================= ============================
1335622                      513650                        5104326434
============================ ============================= ============================