Update release notes.
Update documentation.

Issue-ID: DCAEGEN2-499
Change-Id: Ib315a444678ff73e85f2c7e212eab44037b59b02
Signed-off-by: Lusheng Ji <lji@research.att.com>
diff --git a/docs/index.rst b/docs/index.rst
index 253d7d6..c910a5c 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -3,7 +3,7 @@
 
 
 Data Collection, Analytics, and Events (DCAE)
-=======================
+=============================================
 
 .. Add or remove sections below as appropriate for the platform component.
 
@@ -13,11 +13,11 @@
    ./sections/architecture.rst
    ./sections/offeredapis.rst
    ./sections/consumedapis.rst
+   ./sections/installation.rst
    #./sections/delivery.rst
    ./sections/logging.rst
-   ./sections/installation.rst
    ./sections/configuration.rst
-   ./sections/administration.rst
+   #./sections/administration.rst
    ./sections/humaninterfaces.rst
    ./sections/components/component-development.rst
    ./sections/release-notes.rst
diff --git a/docs/sections/architecture.rst b/docs/sections/architecture.rst
index 1f9c5de..c8ec573 100644
--- a/docs/sections/architecture.rst
+++ b/docs/sections/architecture.rst
@@ -1,58 +1,80 @@
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. http://creativecommons.org/licenses/by/4.0
-
 Architecture
 ============
 
+Data Collection Analytics and Events (DCAE) is the data collection and analysis subsystem of ONAP.  Its tasks include collecting measurement, fault, status, configuration, and other types of data from the network entities and infrastructure that ONAP interacts with, applying analytics to the collected data, and generating intelligence (i.e. events) for other ONAP components such as Policy, APPC, and SDNC to act upon, hence completing ONAP's closed control loop for managing network services and applications.
 
-Capabilities
-------------
-Data Collection Analytics and Events (DCAE) is the data collection and analysis subsystem of ONAP.
-Its functions include among other things the collection of FCAPs data from the network entitiess (VNFs, PNFs, etc).It provides also a framework for the  normalization of data format, the transportation of
-data, analysis of data, and generations of ONAP events which can be received by other ONAP components such as Policy for
-subsequent operations
-like closed loops.
-DCAE consists of DCAE Platform components and DCAE Services components.  The following list shows the details of what are included
-in ONAP R1
-When VM is indicated, it means that the components runs on its own VM on the platform.
-DCAE platform is based both on virtual machines (VM) and containers.
+The design of DCAE separates DCAE Services from DCAE Platform so that the DCAE system is flexible, elastic, and extensible enough to support the many different ways of constructing intelligent and automated control loops on distributed and heterogeneous infrastructure.
+
+DCAE Service components are the virtual functional entities that realize the collection and analysis needs of ONAP control loops.  They include the collectors for various data collection needs, the analytics that assess collected data, and various auxiliary microservices that assist data collection and analytics, and support other ONAP functions.  Service components and DMaaP buses form the "data plane" for ONAP, where DCAE collected data is transported among different DCAE service components.
+
+On the other hand, DCAE Platform components enable model-driven deployment of service components and of the middleware infrastructures that service components depend upon, such as special storage and computation platforms.  That is, when triggered by an invocation call, DCAE Platform follows the TOSCA model of the control loop that is specified by the triggering call, and interacts with the underlying networking and computing infrastructure, such as OpenStack installations and Kubernetes clusters, to deploy and configure the virtual apparatus (i.e. the collectors, the analytics, and auxiliary microservices) needed to form the control loop, at the locations required by the control loop model.  DCAE Platform also provisions DMaaP topics and manages the distribution scopes of the topics following the prescription of the control loop model, by interacting with the controlling function of DMaaP.
+
+DCAE service components operate following a service discovery model.  A highly available and distributed service discovery and key-value store service, embodied by a Consul cluster, is the foundation for this approach.  DCAE components register their identities and service endpoint access parameters with the Consul service so that DCAE components can locate the API endpoint of other DCAE components by querying Consul with the well-known service identities of those components.
+
+During the registration process, DCAE components also register a health-check API with Consul so that the operational status of the components can be verified.  Consul's health check offers a separate path for DCAE and ONAP to learn about module operation status that remains applicable even when the underlying infrastructure does not provide native health-check methods.
+
+Moreover, Consul's distributed K-V store service is the foundation for DCAE to distribute and manage component configurations, where each key is based on the unique identity of a DCAE component and the value is the configuration for the corresponding component.  The DCAE platform creates and updates the K-V pairs based on information provided as part of the control loop blueprint, or received from other ONAP components such as the Policy Framework and SDC.  Either through periodic polling or proactive pushing, the DCAE components receive configuration updates in real time and apply them.  DCAE Platform also offers dynamic template resolution for configuration parameters that are dynamic and only known by the DCAE platform, such as dynamically provisioned DMaaP topics.
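+
+As a minimal illustration of this discovery and configuration pattern, a component could use Consul's standard HTTP API as sketched below; the service name dcae-ves-collector and the Consul address are assumptions for the example, not fixed DCAE values::
+
+    # locate the API endpoint of another DCAE component by its service identity
+    curl http://consul-server:8500/v1/catalog/service/dcae-ves-collector
+
+    # fetch this component's own configuration from the Consul K-V store
+    curl http://consul-server:8500/v1/kv/dcae-ves-collector?raw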
+
+
+DCAE R2 Components
+------------------
+
+The following list shows the components included in ONAP DCAE R2.  All DCAE R2 components are offered as Docker containers.  Following the ONAP-level deployment methods, these components can be deployed either as Docker containers running on a Docker host VM that is launched by an OpenStack Heat Orchestration Template, or as Kubernetes Deployments and Services by Helm.
 
 - DCAE Platform
     - Core Platform
-        - Cloudify Manager (VM)
-        - Consul service (VM cluster)
+        - Cloudify Manager: TOSCA model executor.  Materializes TOSCA models of control loop, or Blueprints, into properly configured and managed virtual DCAE functional components.
     - Extended Platform
-        - Docker Host for containerized platform components (VM).  It runs the following DCAE platform micro services (containers).
-            - Configuration Binding Servive
-            - CDAP Broker
-            - Deployment Handler
-            - Policy Handler
-            - Service Change Handler
-            - DCAE Inventory-API
-        - CDAP Analytics Platform for CDAP analytics applications (VM cluster)
-        - Docker Host for containerized service components (VM)
-        - PostgreSQL Database (VM)
-
-note: the ONAP DCAEGEN2 CDAP blueprint deploys a 7 node CAsk Data Application Platform (CDAP) cluster (version 4.1.X), for running data analysis applications.
+        - Configuration Binding Service: Agent for service component configuration fetching; providing configuration parameter resolution.
+        - Deployment Handler: API for triggering control loop deployment based on control loop's TOSCA model.
+        - Policy Handler: Handler for fetching policy updates from Policy engine; and updating the configuration policies of KV entries in Consul cluster KV store for DCAE components.
+        - Service Change Handler: Handler for interfacing with SDC; receiving new TOSCA models; and storing them in DCAE's own inventory.
+        - DCAE Inventory-API: API for DCAE's TOSCA model store.
+    - Platform services
+        - Consul: Distributed service discovery service and KV store.
+        - Postgres Database: DCAE's TOSCA model store.
+        - Redis Database: DCAE's transactional state store, used by TCA for supporting persistence and seamless scaling.
 
 - DCAE Services
     - Collectors
-        - Virtual Event Streaming (VES) collector, containerized
-        - SNMP Trap collector, containerized
+        - Virtual Event Streaming (VES) collector
+        - SNMP Trap collector
     - Analytics
-        - Holmes correlation analytics, containerized
-        - Threshold Crosssing Analytics (TCA), CDAP analytics application
+        - Holmes correlation analytics
+        - Threshold Crossing Analytics (TCA)
+    - Microservices
+        - PNF Registration Handler
+        - Missing Heartbeat analytics
+        - Universal Data Mapper service
+
+
+The figure below shows the DCAE R2 architecture and how the components work with each other.  Among the components, blue boxes represent platform components; white boxes represent service components; purple boxes represent other ONAP components that DCAE Platform interfaces with; and orange pieces represent operators or operator-like actors.
+
+.. image:: images/architecture.gif
+ 
+
+Deployment Scenarios
+--------------------
+
+Because DCAE service components are deployed on-demand following the control loop needs for managing ONAP-deployed services, DCAE must support dynamic and on-demand deployment of service components based on ONAP control loop demands.  This is why, while all other ONAP components are launched by the ONAP-level method, DCAE only deploys a subset of its components during this ONAP deployment process; the rest of the DCAE components are deployed either when the TOSCA executor launches a series of Blueprints, by a control loop request originating from CLAMP, or even by an operator manually invoking DCAE's deployment API.
+
+For R2, ONAP supports two deployment methodologies: the Heat Orchestration Template method and the Helm Chart method.  Regardless of the method, DCAE is deployed following the same flow.  At its minimum, only the TOSCA model executor, the DCAE Cloudify Manager, needs to be deployed through the ONAP deployment process.  Once the Cloudify Manager is up and running, the rest of the DCAE platform can be deployed by a bootstrap script, which makes a number of calls into the Cloudify Manager API with Blueprints for various DCAE components: first the DCAE Platform components, then the service components that are needed for the built-in control loops, such as vFW/vDNS traffic throttling.  It is also possible that additional DCAE components are launched as part of the ONAP deployment process using the ONAP-level method instead of the TOSCA model based method.
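+
+For illustration, each bootstrap call into Cloudify Manager roughly corresponds to installing one Blueprint; a sketch using the Cloudify CLI is shown below, where the blueprint file name and IDs are examples only, not the exact names used by the bootstrap script::
+
+    # upload a component Blueprint, create a deployment from it, and run its install workflow
+    cfy blueprints upload -b config_binding_service ./k8s-config_binding_service.yaml
+    cfy deployments create -b config_binding_service config_binding_service
+    cfy executions start -d config_binding_service install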
+
+More details of the DCAE R2 deployment are covered in the Installation section.
 
 
 Usage Scenarios
 ---------------
 
-For ONAP R1 DCAE participates in all use cases.
+For ONAP R2, DCAE participates in the following use cases.
 
-vDNS/vFW:  VES collector, TCA analytics
-vCPE:  VES collector, TCA analytics
-vVoLTE:  VES collector, Holmes analytics
+- vDNS/vFW:  VES collector, TCA analytics
 
-Interactions
-------------
-DCAE is interfacing with the DMaaP(Data Movement as a Platform) message Bus
+- vCPE:  VES collector, TCA analytics
+
+- vVoLTE:  VES collector, Holmes analytics
+
+In addition, DCAE supports on-demand deployment and configuration of service components via CLAMP.  In such cases, CLAMP invokes the deployment and configuration of additional TCA instances.
+
diff --git a/docs/sections/images/architecture.gif b/docs/sections/images/architecture.gif
new file mode 100644
index 0000000..d5ade9e
--- /dev/null
+++ b/docs/sections/images/architecture.gif
Binary files differ
diff --git a/docs/sections/images/designate.gif b/docs/sections/images/designate.gif
deleted file mode 100644
index 8d6dff5..0000000
--- a/docs/sections/images/designate.gif
+++ /dev/null
Binary files differ
diff --git a/docs/sections/installation.rst b/docs/sections/installation.rst
index 4845dfe..eec69c3 100644
--- a/docs/sections/installation.rst
+++ b/docs/sections/installation.rst
@@ -1,11 +1,14 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
 DCAE Deployment (Installation)
-===============================
+==============================
 
 .. toctree::
    :maxdepth: 1
    :titlesonly:
 
    ./installation_heat.rst
-   ./installation_manual.rst
+   ./installation_oom.rst
    ./installation_test.rst
 
diff --git a/docs/sections/installation_heat.rst b/docs/sections/installation_heat.rst
index af6144f..2653d63 100644
--- a/docs/sections/installation_heat.rst
+++ b/docs/sections/installation_heat.rst
@@ -1,206 +1,91 @@
-OpenStack Heat Template Based ONAP Deployment
-=============================================
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
 
-For ONAP R1, ONAP is deployed using OpenStack Heat template.  DCAE is also deployed through this process.  This i document describes the details of the Heat template deployment process and how to configure DCAE related parameters in the Heat template and its parameter file.
+OpenStack Heat Orchestration Template Based DCAE Deployment
+===========================================================
+
+This document describes the details of the OpenStack Heat Orchestration Template deployment process and how to configure DCAE-related parameters in the Heat template and its parameter file.
 
 
 ONAP Deployment Overview
 ------------------------
 
-ONAP supports an OpenStack Heat template based system deployment.  When a new "stack" is created using the template, the following virtual resources will be launched in the target OpenStack tenant:
+ONAP R2 supports an OpenStack Heat template based system deployment.  The Heat Orchestration Template file and its parameter input file can be found under the **heat/ONAP** directory of the **demo** repo.  
+
+When a new "stack" is created using the template, the following virtual resources will be launched in the target OpenStack tenant:
 
 * A four-character alphanumerical random text string, to be used as the ID of the deployment.  It is denoted as {{RAND}} in the remainder of this document.
 * A private OAM network interconnecting all ONAP VMs, named oam_onap_{{RAND}}.
 * A virtual router interconnecting the private OAM network with the external network of the OpenStack installation.
 * A key-pair named onap_key_{{RAND}}.
 * A security group named onap_sg_{{RAND}}.
-* A list of VMs for ONAP components. Each VM has one NIC connected to the OAM network and assigned a fixed IP. Each VM is also assigned a floating IP address from the external network. The VM hostnames are name consistently across different ONAP deployments, a user defined prefix, denoted as {{PREFIX}}, followed by a descriptive string for the ONAP component this VM runs, and optionally followed by a sub-function name.  In the parameter env file supplied when running the Heat template, the {{PREFIX}} is defined by the **vm_base_name** parameter.  The VMs of the same ONAP role across different ONAP deployments will always have the same OAM network IP address. For example, the Message Router will always have the OAM network IP address of 10.0.11.1.
+* A list of VMs for ONAP components. Each VM has one NIC connected to the OAM network and assigned a fixed IP. Each VM is also assigned a floating IP address from the external network. The VM hostnames are named consistently across different ONAP deployments: a user-defined prefix, denoted as {{PREFIX}}, followed by a descriptive string for the ONAP component this VM runs, and optionally followed by a sub-function name.  In the parameter env file supplied when running the Heat template, the {{PREFIX}} is defined by the **vm_base_name** parameter.  The VMs of the same ONAP role across different ONAP deployments will always have the same OAM network IP address. For example, the Message Router will always have the OAM network IP address of 10.0.11.1.  The list below provides the IP addresses and hostnames for ONAP components
+that are relevant to DCAE.
 
     ==============     ==========================    ==========================
     ONAP Role          VM (Neutron) hostname          OAM IP address(s)
     ==============     ==========================    ==========================
     A&AI               {{PREFIX}}-aai-inst1          10.0.1.1
-    A&AI               {{PREFIX}}-aai-inst2          10.0.1.2
-    APPC               {{PREFIX}}-appc               10.0.2.1
     SDC                {{PREFIX}}-sdc                10.0.3.1
-    DCAE               {{PREFIX}}-dcae-bootstrap     10.0.4.1
-    SO                 {{PREFIX}}-so                 10.0.5.1
+    DCAE               {{PREFIX}}-dcae               10.0.4.1
     Policy             {{PREFIX}}-policy             10.0.6.1
     SD&C               {{PREFIX}}-sdnc               10.0.7.1
-    VID                {{PREFIX}}-vid                10.0.8.1
-    Portal             {{PREFIX}}-portal             10.0.9.1
     Robot TF           {{PREFIX}}-robot              10.0.10.1
     Message Router     {{PREFIX}}-message-router     10.0.11.1
     CLAMP              {{PREFIX}}-clamp              10.0.12.1
-    MultiService       {{PREFIX}}-multi-service      10.0.14.1
     Private DNS        {{PREFIX}}-dns-server         10.0.100.1
     ==============     ==========================    ==========================
 * Each of the above VMs will also be associated with a floating IP address from the external network.
-* A list of DCAE VMs, launched by the {{PREFIX}}-dcae-bootstrap VM.  These VMs are also connected to the OAM network and associated with floating IP addresses on the external network.  What's different is that their OAM IP addresses are DHCP assigned.  The table below lists the DCAE VMs that are deployed for R1 use stories.
-
-    =====================     ============================
-    DCAE Role                 VM (Neutron) hostname(s)
-    =====================     ============================
-    Cloudify Manager          {{DCAEPREFIX}}orcl{00}
-    Consul cluster            {{DCAEPREFIX}}cnsl{00-02}
-    Platform Docker Host      {{DCAEPREFIX}}dokp{00}
-    Service Docker Host       {{DCAEPREFIX}}doks{00}
-    CDAP cluster              {{DCAEPREFIX}}cdap{00-06}
-    Postgres                  {{DCAEPREFIX}}pgvm{00}
-    =====================     ============================
 
 
-DNS Configurations and Designate
---------------------------------
+DCAE Deployment
+---------------
 
-.. image:: images/designate.gif
+Within the Heat template yaml file, there is a section which specifies the DCAE VM as a "service".  The majority of the service block is the script that the VM will execute after being launched.  This is known as the "cloud-init" script.  This script writes configuration parameters to VM disk files under the /opt/config directory of the VM file system, one parameter per file, with the file names matching the parameter names.  At the end, the cloud-init script invokes DCAE's installation script dcae2_install.sh and DCAE's deployment script dcae2_vm_init.sh.  While the dcae2_install.sh script installs the necessary software packages, the dcae2_vm_init.sh script actually deploys the DCAE Docker containers to the DCAE VM.
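+
+The pattern described above can be sketched as follows; the parameter name and value are examples only, and the actual cloud-init script writes many more parameters::
+
+    # cloud-init writes one parameter per file under /opt/config ...
+    mkdir -p /opt/config
+    echo "10.0.4.1" > /opt/config/dcae_ip_addr.txt
+
+    # ... and then hands off to the DCAE install and deployment scripts
+    cd /opt
+    ./dcae2_install.sh
+    ./dcae2_vm_init.sh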
 
-When DCAE VMs are launched by the dcae-bootstrap VM, they obtain their OAM IP addresses from
-the DHCP server running on the OAM network (ONAP private network).  Because these addresses 
-are dynamic, DCAE VMs rely on the OpenStack **Designate** DNSaaS API for registering their 
-IP-address-to-hostname bindings.
+Firstly, during the execution of the dcae2_vm_init.sh script, files under the **heat** directory of the **dcaegen2/deployments** repo are downloaded, and any templates in these files referencing the configuration files under the /opt/config directory are expanded with the contents of the corresponding files.  For example, a template of {{ **dcae_ip_addr** }} is replaced with the contents of the /opt/config/**dcae_ip_addr**.txt file.  The resultant files are placed under the /opt/app/config directory of the DCAE VM file system.
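+
+A minimal sketch of this expansion step is shown below; the file names are illustrative, and the real script processes every parameter file under /opt/config::
+
+    # substitute a {{ dcae_ip_addr }} template with the value written by cloud-init
+    value=$(cat /opt/config/dcae_ip_addr.txt)
+    sed "s|{{ *dcae_ip_addr *}}|${value}|g" docker-compose-1.yaml > /opt/app/config/docker-compose-1.yaml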
 
-DCAE VMs register their hostnames under a DNS zone.  This zone can be a zone that is exposed 
-to the global DNS hierarchy, or a zone that is only known to the ONAP deployment.  The actual
-zone name is configurable, by the blueprint input files of the DCAE VMs.  By default they are 
-set to be {{DCAE_ZONE}}.{{DOMAIN_NAME}}, where {{DCAE_ZONE}} is set to the {{RANDID}} and the 
-domain name is set to "dcaeg2.onap.org".  If DCAE VMs are required to be routable in operator organization or on the Internet, it is expected that the DNS domain is already configured in 
-the organizational or global DNS hierarchy.
+In addition, the dcae2_vm_init.sh script also calls scripts that register the components' health-check APIs and their default configurations with Consul.
 
-For OpenStack installations without Designate, there is an alternative "Proxyed-Designate"
-solution.  That is, a second OpenStack installation with Designate support is used for 
-providing Designate API and DNSaaS for the first OpenStack installation where ONAP is 
-deployed.  The Designate API calls from DCAE VMs are proxyed through the MultiCloud 
-service running in the same ONAP installation.  The diagram above illustrates the solution.
-Such a solution can be utilized by operators who have difficulties enhancing their existing 
-OpenStack infrastructure with Designate.  The ONAP Pod25 lab is configured using this 
-approach.
+Next, the dcae2_vm_init.sh script deploys the resources defined in the docker-compose-1.yaml and docker-compose-2.yaml files, with proper waiting in between to make sure the resources in the docker-compose-1.yaml file have entered their ready state before the docker-compose-2.yaml file is deployed, because the former are dependencies of the latter.  These resources are a number of service components and their minimum supporting platform components (i.e. the Consul server and the Config Binding Service).  With these resources, DCAE is able to provide a minimum configuration that supports the ONAP R2 use cases, namely the vFW/vDNS, vCPE, and vVoLTE use cases.  However, lacking the full DCAE platform, this configuration does not support CLAMP or policy updates from the Policy Framework.  Changing the configuration of a service component (e.g. publishing to a different DMaaP topic) can only be accomplished by changing the value of the component's KV entry in the Consul KV store, using the Consul GUI or API.
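+
+For example, such a manual configuration change could be made against Consul's KV API as sketched below; the key name and JSON payload are illustrative assumptions, not the actual R2 defaults::
+
+    # overwrite a service component's configuration value in the Consul KV store
+    curl -X PUT -d '{"stream_publishes": {"dmaap_topic": "unauthenticated.SEC_OTHER_OUTPUT"}}' \
+         http://<dcae_vm_ip>:8500/v1/kv/dcae-ves-collector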
 
-To prepare for using the proxyed-Designate solution for an ONAP deployment, a surrogate 
-tenant needs to be set up in the Designate-providing OpenStack.  The name of the surrogate
-tenant must match with the name of the tenant where the ONAP is deployed.  At DCAE bootstrap
-time, the dcae2_vm_init.sh script will first register two records into A&AI, which contain
-parameters describing the two OpenStack installations, parameters that are needed by the MultiCloud service when performing the Designate and other API call proxying.
+For a more complete deployment, the dcae2_vm_init.sh script further deploys the docker-compose-3.yaml file, which deploys the rest of the DCAE platform components, and, if so configured, the docker-compose-4.yaml file, which deploys the DCAE R2 stretch-goal service components such as PRH, Missing Heartbeat, etc.
 
-When DCAE VMs make OpenStack API calls, these calls are made to the MultiCloud service
-node instead, not to the underlying OpenStack cloud infrastructure.  For non-Designate 
-calls, the MultiCloud node proxys them to the same OpenStack installation and the project 
-where the ONAP is installed.  For Designate 
-calls, the MultiCloud node proxys to the Designate-providing OpenStack installation as if 
-such calls are for the surrogate tenant.  The result is that the Designate providing 
-OpenStack's backend DNS server will have the new records for DCAE VMs and their IP 
-addresses.  
+After all DCAE components are deployed, the dcae2_vm_init.sh script starts to provide health check results.  Due to the complexity of the DCAE system, a proxy is set up to return a single binary result for the DCAE health check instead of having each individual DCAE component report its health status.  To accomplish this, the dcae2_vm_init.sh script deploys an Nginx reverse proxy and then enters an infinite health check loop.
 
-ONAP VMs deployed by Heat template are all registered with the private DNS server under the domain name of **simpledemo.onap.org**.  This domain can not be exposed to any where outside of the ONAP deployment because all ONAP deployments use the same domain name and same address space. Hence these host names remain only resolvable within the same ONAP deployment.  On the
-other hand DCAE VMs have host names under the DNS zone of **{{DCAE_ZONE}}.{{DOMAIN_NAME}}**, 
-which can be set up to be resolvable within organizational network or even global Internet.
+During each iteration of the loop, the script checks Consul's service health status API and compares the received list of healthy services with a pre-determined list to assess whether the DCAE system is healthy.  The list of services that must be healthy for the DCAE system to be assessed as healthy depends on the deployment profile, which will be covered in the next subsection.  For example, if the deployment profile only calls for a minimum configuration for passing use case data, whether DCAE platform components such as the Deployment Handler are healthy does not affect the result.
 
-To make the hostnames of ONAP VMs and external servers (e.g. onap.org) resolvable, the 
-following DNS related configurations are needed.  
+If the DCAE system is considered healthy, the dcae2_vm_init.sh script will generate a file that lists all the healthy components, and Nginx will return this file as the body of a 200 response for any DCAE health check.  Otherwise, Nginx will return a 404 response.
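+
+A simplified sketch of one loop iteration is shown below; the required service names are assumptions chosen to match a minimum profile, and the real script uses its own list and output file::
+
+    # compare Consul's passing services against the services required for a healthy DCAE
+    REQUIRED="consul config_binding_service ves-collector tca holmes-engine holmes-rules"
+    PASSING=$(curl -s http://localhost:8500/v1/health/state/passing \
+              | python -c 'import json,sys; print(" ".join(sorted({e["ServiceName"] for e in json.load(sys.stdin)})))')
+    HEALTHY=true
+    for svc in ${REQUIRED}; do
+        echo "${PASSING}" | grep -qw "${svc}" || HEALTHY=false
+    done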
 
-* The ONAP deployment's private DNS server, 10.0.100.1, is the default resolver for all the VMS.  This is necessary to make the **simpledemo.onap.org** hostnames resolvable.
-* The ONAP deployment's private DNS server, 10.0.100.1, must have the Designate backend DNS server as the forwarder.  This is necessary to make the **{{DCAE_ZONE}}.{{DOMAIN_NAME}}** hostnames resolvable.
-* The Designate backend DNS server needs to be configured so it can resolve all global hostnames.  One exemplary configuration for achieving this is to have an external DNS server such as an organizational or global DNS server, e.g. Google's 8.8.8.8, as the forwarder.
-
-As the result of such configurations, below lists how different hostnames are resolved, as illustrated in the figure above:
-
-* For hostnames within the **simpledemo.onap.org** domain, the private DNS server at 10.0.100.1 has the bindings;
-* For hostnames within the **{{DCAE_ZONE}}.{{DOMAIN_NAME}}** domain, the private DNS server forwards to the Designate backend DNS server, which has the bindings;
-* For all other hostnames, e.g. ubuntu.org, the private DNS server forwards to the Designate backend DNS server, which then forwards to an external DNS server that has or is able to further forward request to a DNS server that has the bindings.
-
-We wil go over the details of related Heat template env parameters in the next section.
 
 Heat Template Parameters
 ------------------------
 
-Here we list Heat template parameters that are related to DCAE operation.  Bold values are the default values that should be used "as-is".
+In DCAE R2, the configuration for DCAE deployment in Heat is greatly simplified.  In addition to parameters such as Docker container image tags, the only parameter that configures DCAE deployment behavior is dcae_deployment_profile.
 
-* public_net_id: the UUID of the external network where floating IPs are assigned from.  For example: 971040b2-7059-49dc-b220-4fab50cb2ad4.
-* public_net_name: the name of the external network where floating IPs are assigned from.  For example: external.
-* openstack_tenant_id: the ID of the OpenStack tenant/project that will host the ONAP deployment.  For example: dd327af0542e47d7853e0470fe9ad625.
-* openstack_tenant_name: the name of the OpenStack tenant/project that will host the ONAP deployment.  For example: Integration-SB-01.
-* openstack_username: the username for accessing the OpenStack tenant specified by openstack_tenant_id/openstack_tenant_name.
-* openstack_api_key: the password for accessing the OpenStack tenant specified by openstack_tenant_id/openstack_tenant_name.
-* openstack_auth_method: '**password**'.
-* openstack_region: '**RegionOne**'.
-* cloud_env: '**openstack**'.
-* dns_list: This is the list of DNS servers to be configured into DHCP server of the ONAP OAM network.  As mentioned above it needs to have the ONAP private DNS server as the first item, then one or more external DNS servers next, for example:  **["10.0.100.1", "8.8.8.8"]**.  For installations where the private DNS server VM takes too long to be set up, the solution is to use the Designate backend DNS server as the first entry in this list.  Fot example  **["10.12.25.5", "8.8.8.8"]**. 
-* external_dns: This is the first external DNS server in the list above.  For example, **"8.8.8.8"**
-* dns_forwarder:  This is the DNS forwarder for the ONAP private DNS server.  It must point to the IP address of the Designate backend DNS. For example **'10.12.25.5'** for the Integration Pod25 lab.
-* dcae_ip_addr: The static IP address on the OAM network that is assigned to the DCAE bootstraping VM.  **10.0.4.1**.  
-* dnsaas_config_enabled: Whether a proxy-ed Designate solution is used. For example: **true**.
-* dnsaas_region: The OpenStack region of the Designate-providing OpenStack installation. For example: **RegionOne**.
-* dnsaas_tenant_name: The surrogate tenant/project name of the Designate-providing OpenStack. It must match with the *openstack_tenant_name* parameter.  For example Integration-SB-01.  
-* dnsaas_keystone_url: The keystone URL of the Designate providing OpenStack.  For example **http://10.12.25.5:5000/v3**.
-* dnsaas_username: The username for accessing the surrogate tenant/project in Designate providing OpenStack.  For Pod25 Integration lab, this value is set to **demo**.
-* dnsaas_password: The password for accessing surrogate tenant/project in the Designate providing OpenStack.  For Pod25 Integration lab, this value is set to **onapdemo**.
-* dcae_keystone_url: This is the keystone API endpoint used by DCAE VMs.  If MultiCloud proxying is used, this parameter needs to provide the service endpoint of the MltiCloud service node: **"http://10.0.14.1/api/multicloud-titanium_cloud/v0/pod25_RegionOne/identity/v2.0"**. Otherwise it shall point to the keystone 2.0 API endpoint of the under-lying OpenStack installation.  
-* dcae_centos_7_image: The name of the CentOS-7 image.
-* dcae_domain: The domain under which DCAE VMs register their zone. For example: **'dcaeg2.onap.org'**.
-* dcae_public_key: the public key of the onap_key_{{RAND}} key-pair.
-* dcae_private_key: The private key of the onap_key_{{RAND}} key-pair (with the additions of  literal \n at the end of each line of text). 
+* dcae_deployment_profile: this parameter determines which DCAE components (containers) will be deployed (see the example after the list below).  The following profiles are supported for R2:
+    * R2MVP: This profile includes a minimum set of DCAE components that will support the vFW/vDNS, vCPE, and vVoLTE use cases.  It will deploy the following components:
+        * Consul server,
+        * Config Binding Service,
+        * Postgres database,
+        * VES collector
+        * TCA analytics
+        * Holmes rule management
+        * Holmes engine management.
+    * R2: This profile also deploys the rest of the DCAE platform.  With R2 deployment profile, DCAE supports CLAMP and full control loop functionalities.  These additional components are:
+        * Cloudify Manager,
+        * Deployment Handler,   
+        * Policy Handler,
+        * Service Change Handler,
+        * Inventory API.
+    * R2PLUS: This profile deploys the DCAE R2 stretch goal service components, namely:
+        * PNF Registration Handler,
+        * SNMP Trap collector,
+        * Missing Heartbeat Detection analytics,
+        * Universal Mapper.
 
 
 
-Heat Deployment
----------------
-
-Heat template can be deployed using the OpenStack CLI.  For more details, please visit the demo project of ONAP.  All files references in this secton can be found under the **demo** project.
-
-In the Heat template file **heat/ONAP/onap_openstack.yaml** file, there is one block of sepcification towrads the end of the file defines the dcae_bootstrap VM.  This block follows the same approach as other VMs defined in the same template.  That is, a number of parameters within the Heat context, such as the floating IP addresses of the VMs and parameters provided in the user defined parameter env file, are written to disk files under the /opt/config directory of the VM during cloud init time.  Then a script, found under the **boot** directory of the **demo** project, **{{VMNAME}}_install.sh**, is called to prepare the VM.  At the end of running this script, another script **{VMNAME}}_vm_init.sh** is called.
-
-For DCAE bootstrap VM, the dcae2_vm_init.sh script completes the following steps:
-
-* If we use proxy-ed Designate solution, runs:
-    * Wait for A&AI to become ready
-    * Register MultiCloud proxy information into A&AI
-    * Wait for MultiCloud proxy node ready
-    * Register the DNS zone for the ONAP installation, **{{RAND}}.dcaeg2.onap.org**
-* Runs DCAE bootstrap docker container
-    * Install Cloudify locally
-    * Launch the Cloudify Manager VM
-    * Launch the Consul cluster
-    * Launch the platform component Docker host
-    * Launch the service component Docker host
-    * Launch the CDAP cluster
-    * Install Config Binding Service onto platform component Docker host
-    * Launch the Postgres VM
-    * Install Platform Inventory onto platform component Docker host
-    * Install Deployment Handler onto platform component Docker host
-    * Install Policy Handler onto platform component Docker host
-    * Install CDAP Broker onto platform component Docker host
-    * Install VES collector onto service component Docker host
-    * Install TCA analytics onto CDAP cluster
-    * Install Holmes Engine onto service component Docker host
-    * Install Holmes Rules onto service component Docker host
-* Starts a Nginx docker container to proxy the healthcheck API to Consul
-* Enters a infinite sleep loop to keep the bootstrap container up
-
-
-Removing Deployed ONAP Deployment
----------------------------------
-
-Because DACE VMs are not deployed directly from Heat template, they need to be deleted using
-a separate method.
-
-* Ssh into the dcae-bootstrap VM
-* Enter the dcae-bootstrap container by executing: 
-    * **sudo docker exec -it boot /bin/bash**
-* Inside of the bootstrap container, execute:
-    * **bash ./teardown**
-    * All DCAE assets deployed by the bootstrap container will be uninstalled in the reverse order that they are installed.
-* Exit from the bootstrap container.
-
-After all DCAE assets are deleted, the next step is to delete the ONAP stack, using either the
-dashboard GUI or openstack CLI.
-
-When VMs are not terminated in a graceful fashion, certain resources such as ports and floating
-IP addresses may not be released promptly by OpenStack.  One "quick-nad-dirty" way to release 
-these resources is to use the openstack CLI with the following commands::
-
-    openstack port list |grep 'DOWN' |cut -b 3-38 |xargs openstack port delete
-    openstack floating ip list |grep 'None' |cut -b 3-38 |xargs openstack floating ip delete
-
 
 Tips for Manual Interventions
 -----------------------------
@@ -209,13 +94,20 @@
 
 * Running dcae2_install.sh
 * Running dcae2_vm_init.sh
-* Running the dcae bootstrap docker.
+* Redeploying individual docker-compose-?.yaml files
 
-All these require ssh-ing into the dcae-botstrap VM, then change directory or /opt and sudo.  
+All these require ssh-ing into the dcae VM, then changing directory to /opt and using sudo.
 Configurations injected from the Heat template and cloud init can be found under /opt/config.
 DCAE run time configuration values can be found under /opt/app/config.  After any parameters are changed, the dcae2_vm_init.sh script needs to be rerun.
 
-Some manual interventions also require interaction with the OpenStack environment.  This can be 
+Redeploying or updating the resources defined in the docker-compose-?.yaml files can be achieved by running the following::
+
+   $ cd /opt/app/config
+   $ /opt/docker/docker-compose -f ./docker-compose-4.yaml down
+   $ /opt/docker/docker-compose -f ./docker-compose-4.yaml up -d
+
+
+Some manual interventions may also require interaction with the OpenStack environment.  This can be 
 done by using the OpenStack CLI tool.  OpenStack CLI tool comes very handy for various uses in deployment and maintenance of ONAP/DCAE.  
 
 It is usually most convenient to install OpenStack CLI tool in a Python virtual environment.  Here are the steps and commands::
@@ -236,27 +128,6 @@
     # list all tenants
     (openstackcli) $ openstack project list
 
-Designate/DNS related operations::
-
-    # DNS/Designate related commands
-    # list all DNS zones
-    (openstackcli) $ openstack zone list
-    # create a new zone
-    (openstackcli) $ openstack zone create ${ZONENAME} --email dcae@onap.org
-    # delete an existing zone
-    (openstackcli) $ openstack zone delete ${ZONENAME}
-
-Note that depending on OpenStack configuration, there may be a quota for how many zones can be created
-under each tenant.  If such limit is reached, further zone creation request will be rejected.  In this case manual deletions for zones no longer needed is one of the ways to reduce outstanding zones.
-
-When VMs are not terminated in a graceful fashion, certain resources such as ports and floating
-IP addresses may not be released properly by OpenStack.  One "quick-nad-dirty" way to release
-these resources is to use the openstack CLI with the following commands::
-
-    (openstackcli) $ openstack port list |grep 'DOWN' |cut -b 3-38 |xargs openstack port delete
-    (openstackcli) $ openstack floating ip list |grep 'None' |cut -b 3-38 |xargs openstack floating ip delete
-   
-
 Finally to deactivate from the virtual environment, run::
 
     (openstackcli) $ deactivate
diff --git a/docs/sections/installation_oom.rst b/docs/sections/installation_oom.rst
new file mode 100644
index 0000000..878214e
--- /dev/null
+++ b/docs/sections/installation_oom.rst
@@ -0,0 +1,145 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Helm Chart Based DCAE Deployment
+================================
+
+This document describes the details of the Helm Chart based deployment process for R2 ONAP and how DCAE is deployed through this process.
+
+
+ONAP Deployment Overview
+------------------------
+
+ONAP R2 supports Kubernetes deployment.  Kubernetes is a container orchestration technology that organizes containers into composites of various patterns for easy deployment, management, and scaling.  R2 ONAP utilizes Kubernetes as the foundation for fulfilling its platform maturity promises.
+
+Further, R2 ONAP manages Kubernetes specifications using Helm Charts, under which all Kubernetes yaml-formatted resource specifications and additional files are organized into a hierarchy of charts, sub-charts, and resources.  These yaml files are further augmented with Helm's templating, which makes dependencies and cross-references of parameters and parameter derivatives among resources manageable for a large and complex Kubernetes system such as ONAP.
+
+At deployment time, with a single **helm install** command, Helm resolves all the templates and compiles the chart hierarchy into Kubernetes resource definitions, and invokes Kubernetes deployment operation for all the resources.
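+
+As an illustrative sketch (assuming Helm v2 syntax and the release name "dev" used elsewhere in this document), such a deployment might be invoked as follows::
+
+    # from the kubernetes directory of the OOM project
+    helm install local/onap --name dev --namespace onap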
+
+All ONAP Helm Charts are organized under the **kubernetes** directory of the **OOM** project, where each ONAP component roughly occupies a subdirectory.  DCAE charts are placed under the **dcaegen2** directory.  The DCAE Kubernetes deployment is based on the same set of Docker containers that the Heat based deployment uses, with the exception that the bootstrap container and the health check container are only used in the Kubernetes deployment.
+
+
+DCAE Chart Organization
+-----------------------
+
+Following Helm conventions, each Helm chart directory usually consists of the following files and subdirectories:
+
+* Chart.yaml: metadata;
+* requirements.yaml: dependency charts;
+* values.yaml: values for Helm templating engine to expand templates;
+* resources: subdirectory for additional resource definitions such as configuration, scripts, etc;
+* templates: subdirectory for Kubernetes resource definition templates;
+* charts: subdirectory for sub-charts.
+
+The dcaegen2 chart has the following sub-charts:
+
+* dcae-bootstrap: a Kubernetes job that deploys additional DCAE components;
+* dcae-cloudify-manager: a Kubernetes deployment of a Cloudify Manager;
+* dcae-healthcheck: a Kubernetes deployment that provides a DCAE health check API;
+* dcae-redis: a Kubernetes deployment of a Redis cluster.
+
+
+DCAE Deployment
+---------------
+
+At deployment time, when the **helm install** command is executed, all DCAE resources defined within charts under the OOM Chart hierarchy are deployed.  They are the 1st order components, namely the Cloudify Manager deployment, the Health Check deployment, the Redis cluster deployment, and the Bootstrap job.  In addition, a Postgres database deployment is also launched, which is specified as a dependency of the DCAE Bootstrap job.  These resources will show up as the following (and can be listed with kubectl as sketched after this list), where the name before the / indicates the resource type and the term "dev" is the tag that the **helm install** command uses as the "release name":
+
+  * deploy/dev-dcae-cloudify-manager;
+  * deploy/dev-dcae-healthcheck;
+  * statefulsets/dev-dcae-redis;
+  * statefulsets/dev-dcae-db;
+  * job/dev-dcae-bootstrap.
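+
+For instance, once the release is installed, these resources could be listed with kubectl; the namespace "onap" is an assumption consistent with a typical OOM deployment::
+
+    kubectl get deployments,statefulsets,jobs -n onap | grep dcae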
+
+In addition, DCAE operation depends on a Consul server cluster.  For the ONAP OOM deployment, since the Consul cluster is provided as a shared resource, its charts are defined under the consul directory, not as part of the DCAE charts.
+
+The dcae-bootstrap job has a number of prerequisites, because the subsequently deployed DCAE components depend on a number of resources having entered their normal operation state.  The DCAE bootstrap job will not start before these resources are ready.  They are:
+
+  * dcae-cloudify-manager;
+  * consul-server;
+  * msb-discovery;
+  * kube2msb.
+
+Once started, the DCAE bootstrap job will call Cloudify Manager to deploy a series of Blueprints which specify the additional DCAE R2 components.  These Blueprints are almost identical to the Docker container Blueprints used by DCAE R1 and by the Heat based R2 deployment, except that they use the k8splugin instead of the dockerplugin.  The k8splugin is a major contribution of DCAE R2.  It is a Cloudify Manager plugin that is capable of expanding a Docker container node definition into a Kubernetes deployment definition, with enhancements such as replica scaling, ONAP logging sidecar, MSB registration, etc.
+
+The additional DCAE components launched into the ONAP deployment are:
+
+  * deploy/dep-config-binding-service;
+  * deploy/dep-dcae-tca-analytics;
+  * deploy/dep-dcae-ves-collector;
+  * deploy/dep-deployment-handler;
+  * deploy/dep-holmes-engine-mgmt;
+  * deploy/dep-holmes-rule-mgmt;
+  * deploy/dep-inventory;
+  * deploy/dep-policy-handler;
+  * deploy/dep-pstg-write;
+  * deploy/dep-service-change-handler.
+
+
+DCAE Configuration
+------------------
+
+The deployment time configuration of DCAE components is defined in several places.
+
+  * Helm Chart templates:
+     * Helm/Kubernetes template files can contain static values for configuration parameters;
+  * Helm Chart resources:
+     * Helm/Kubernetes resources files can contain static values for configuration parameters;
+  * Helm values.yaml files:
+     * The values.yaml files supply the values that the Helm templating engine uses to expand any templates defined in Helm templates;
+     * In a Helm chart hierarchy, values defined in values.yaml files at a higher level supersede values defined in values.yaml files at a lower level;
+     * Values supplied on the Helm command line supersede values defined in any values.yaml files (see the sketch after this list).
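+
+For illustration, a command-line override (the highest precedence level above) could look like the following; the exact value key path is an assumption for the example::
+
+    helm install local/onap --name dev --namespace onap \
+        --set dcaegen2.dcae-bootstrap.componentImages.policy_handler=onap/org.onap.dcaegen2.platform.policy-handler:2.4.5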
+
+In addition, for DCAE components deployed through Cloudify Manager Blueprints, the configuration parameters are defined in the following places:
+
+     * The Blueprint files can contain static values for configuration parameters;
+        * The Blueprint files are defined under the blueprints directory of the dcaegen2/platform/blueprints repo, named with "k8s" prefix.
+     * The Blueprint files can specify input parameters and the values of these parameters will be used for configuring parameters in Blueprints.  The values for these input parameters can be supplied in several ways as listed below in the order of precedence (low to high):
+        * The Blueprint files can define default values for the input parameters;
+        * The Blueprint input files can contain static values for input parameters of Blueprints.  These input files are provided as config resources under the dcae-bootstrap chart;
+        * The Blueprint input files may contain Helm templates, which are resolved into actual deployment time values following the rules for Helm values.
+
+
+Now we walk through an example of how to configure the Docker image for the Policy Handler, which is deployed by Cloudify Manager.
+
+In the k8s-policy_handler.yaml Blueprint, the Docker image to use is defined as an input parameter with a default value::
+
+  policy_handler_image:
+    description: Docker image for policy_handler
+    default: 'nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.policy-handler:2.4.3'
+
+Then in the input file, oom/kubernetes/dcaegen2/charts/dcae-bootstrap/resources/inputs/k8s-policy_handler-inputs.yaml, it is defined again as::
+
+  policy_handler_image: {{ include "common.repository" . }}/{{ .Values.componentImages.policy_handler }}
+
+Thus, when common.repository and componentImages.policy_handler are defined in the values.yaml files, their values will be plugged in here, and the composed policy_handler_image will be passed to the Policy Handler Blueprint as the Docker image tag to use, instead of the default value in the Blueprint.
+
+Indeed, the componentImages.policy_handler value is provided in the oom/kubernetes/dcaegen2/charts/dcae-bootstrap/values.yaml file::
+
+  componentImages:
+    policy_handler: onap/org.onap.dcaegen2.platform.policy-handler:2.4.5
+
+The final result is that when DCAE bootstrap calls Cloudify Manager to deploy Policy Handler, the 2.4.5 image will be deployed.
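+
+To confirm which image ended up being used, one could inspect the resulting Kubernetes deployment; the deployment name matches the list earlier in this document and the namespace "onap" is an assumption::
+
+    kubectl -n onap get deploy dep-policy-handler \
+        -o jsonpath='{.spec.template.spec.containers[*].image}'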
+
+DCAE Service Endpoints
+----------------------
+
+Below is a table of default hostnames and ports for DCAE component service endpoints in a Kubernetes deployment:
+
+    ==================   ============================      ================================
+    Component            Cluster Internal (host:port)      Cluster external (svc_name:port)
+    ==================   ============================      ================================
+    VES                  dcae-ves-collector:8080           xdcae-ves-collector.onap:30235
+    TCA                  dcae-tca-analytics:11011          xdcae-tca-analytics.onap:32010
+    Policy Handler       policy-handler:25577              NA
+    Deployment Handler   deployment-handler:8443           NA
+    Inventory            inventory:8080                    NA
+    Config binding       config-binding-service:10000      NA
+    DCAE Healthcheck     dcae-healthcheck:80               NA
+    Cloudify Manager     dcae-cloudify-manager:80          NA
+    ==================   ============================      ================================
+
+In addition, a number of ONAP service endpoints that are used by DCAE components are listed as follows for reference by DCAE developers and testers:
+
+    ==================   ============================      ================================
+    Component            Cluster Internal (host:port)      Cluster external (svc_name:port)
+    ==================   ============================      ================================
+    Consul Server        consul-server:8500                consul-server:30270
+    Robot                robot:88                          robot:30209 TCP
+    Message router       message-router:3904               message-router:30227
+    Message router       message-router:3905               message-router:30226
+    MSB Discovery        msb-discovery:10081               msb-discovery:30281
+    Logging              log-kibana:5601                   log-kibana:30253
+    AAI                  aai:8080                          aai:30232
+    AAI                  aai:8443                          aai:30233
+    ==================   ============================      ================================
+
diff --git a/docs/sections/release-notes.rst b/docs/sections/release-notes.rst
index 87808e4..dadfbaa 100644
--- a/docs/sections/release-notes.rst
+++ b/docs/sections/release-notes.rst
@@ -9,11 +9,12 @@
 :Release Date: 2018-06-07
 
 **New Features**
+
 DCAE R2 improves upon previous release with the following new features:
 
-- Kubernetes deployment support for DCAE.  In R2 all DCAE components can be deployed using Kubernetes into a Kubernetes cluster.  The list of a R2 DCAE include the following components.
+- All DCAE R2 components are delivered as Docker container images.  The list of components is as follows. 
     - Platform components
-        - Cloudify Manager (Community Version 18.3.23)
+        - Cloudify Manager
         - Bootstrap container
         - Configuration Binding Service
         - Deployment Handler
@@ -23,21 +24,27 @@
     - Service components
         - VES Collector
         - SNMP Collector
-        - Mapper Microservice
-        - PNF Registration Handler Microservice
-        - Missing Heartbeat Microservice
         - Threshold Crossing Analytics
-        - Holmes Rule Management*
-        - Holmes Engine Management*
+        - Holmes Rule Management *
+        - Holmes Engine Management *
+    - Additional resources that DCAE utilizes:
+        - Postgres Database
+        - Redis Cluster Database
+        - Consul Cluster
+    Notes:
+        \*  These components are delivered by the Holmes project and used as DCAE analytics components in R2.
 
-(*) Note: This component is delivered under the Holmes project and used as a DCAE analytics component in R2.
+- DCAE R2 supports both OpenStack Heat Orchestration Template based deployment and Helm Chart based deployment. 
 
-In addition, DCAE R2 utilizes the following shared resources that are provided by OOM ONAP deployment:
-    - Postgres Database
-    - Redis Cluster Database
-    - Consul Cluster
+    - Under Heat based deployment all DCAE component containers are deployed onto a single Docker host VM that is launched from an OpenStack Heat Orchestration Template as part of "stack creation".
+    - Under Helm/Kubernetes deployment all DCAE component containers are deployed as Kubernetes Pods/Deployments/Services into Kubernetes cluster.
 
-All DCAE components are designed to support platform maturity requirements.
+- DCAE R2 includes a new Cloudify Manager plugin (k8splugin) that is capable of expanding a Blueprint node specification written for a Docker container into a full Kubernetes specification, with additional enhancements such as replica scaling, a sidecar for logging to the ONAP ELK stack, registering services to MSB, etc.
+
+- All DCAE components are designed to support platform maturity requirements.
+
+
+**Source Code**
 
 Source code of DCAE components are released under the following repositories on gerrit.onap.org:
     - dcaegen2
@@ -65,19 +72,50 @@
 
 **Known Issues**
 
+- DCAE utilizes Cloudify Manager as its declarative, model-based resource deployment engine.  Cloudify Manager is an open-source upstream technology provided by Cloudify Inc. as a Docker image.  DCAE R2 does not provide additional enhancements towards Cloudify Manager's platform maturity.
+
 **Security Notes**
 
 DCAE code has been formally scanned during build time using NexusIQ and all Critical vulnerabilities have been addressed, items that remain open have been assessed for risk and determined to be false positive. The DCAE open Critical security vulnerabilities and their risk assessment have been documented as part of the `project <https://wiki.onap.org/pages/viewpage.action?pageId=28377647>`_.
 
 Quick Links:
- 	- `DCAE project page <https://wiki.onap.org/display/DW/Data+Collection+Analytics+and+Events+Project>`_
- 	
- 	- `Passing Badge information for DCAE <https://bestpractices.coreinfrastructure.org/en/projects/1718>`_
- 	
- 	- `Project Vulnerability Review Table for DCAE <https://wiki.onap.org/pages/viewpage.action?pageId=28377647>`_
+        - `DCAE project page <https://wiki.onap.org/display/DW/Data+Collection+Analytics+and+Events+Project>`_
+
+        - `Passing Badge information for DCAE <https://bestpractices.coreinfrastructure.org/en/projects/1718>`_
+
+        - `Project Vulnerability Review Table for DCAE <https://wiki.onap.org/pages/viewpage.action?pageId=28377647>`_
+
+
 
 **Upgrade Notes**
 
+The following components are upgraded from R1:
+    - Cloudify Manager:
+       - Docker container tag: onap/org.onap.dcaegen2.deployments.cm-container:1.3.0
+       - Description: R2 DCAE's Cloudify Manager container is based on Cloudify Manager Community Version 18.2.28, which is based on Cloudify Manager 4.3.
+    - Bootstrap container: 
+       - Docker container tag: onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:1.1.11
+       - Description: R2 DCAE no longer uses the bootstrap container for the Heat based deployment; deployment is done through cloud-init scripts and docker-compose specifications.  The bootstrap container is used for the Helm/Kubernetes based deployment.
+    - Configuration Binding Service: 
+       - Docker container tag: onap/org.onap.dcaegen2.platform.configbinding:2.1.5
+       - Description: Configuration Binding Service now supports the new configuration policy format.
+    - Deployment Handler
+       - Docker container image tag: onap/org.onap.dcaegen2.platform.deployment-handler:2.1.5
+    - Policy Handler
+       - Docker container image tag: onap/org.onap.dcaegen2.platform.policy-handler:2.4.5
+       - Description: Policy Handler now supports the new configuration policy format.
+    - Service Change Handler
+       - Docker container image tag: onap/org.onap.dcaegen2.platform.servicechange-handler:1.1.4
+       - Description: Refactoring.
+    - Inventory API
+       - Docker container image tag: onap/org.onap.dcaegen2.platform.inventory-api:3.0.1
+       - Description: Refactoring.
+    - VES Collector
+       - Docker container image tag: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.2.0
+    - Threshold Crossing Analytics
+       - Docker container image tag: onap/org.onap.dcaegen2.deployments.tca-cdap-container:1.1.0
+       - Description: Replaced the Hadoop VM Cluster based file system with the regular host file system; repackaged the full TCA-CDAP stack into a Docker container; moved transactional state from TCA in-memory storage to an off-node Redis cluster to support horizontal scaling.
+
 
 
 Version: 1.0.0