Release 3.3.4 bootstrap

universalvesadaptor:1.3.0
datalake.exposure.service:1.1.1
datalakefeeder:1.1.1
prh-app-server:1.7.1
pm-mapper:1.7.1
Resource limit added for tca deployment

Change-Id: Icf1ff280a97ac8260521eecad00180e059edeb37
Signed-off-by: vv770d <vv770d@att.com>
Issue-ID: DCAEGEN2-2904
1 file changed
tree: efce00a9b09e6cec9bc8969f2caae55bc4d754a5
  1. blueprints/
  2. reference_templates/
  3. releases/
  4. scripts/
  5. .gitignore
  6. .gitreview
  7. Changelog.md
  8. Dockerfile
  9. INFO.yaml
  10. LICENSE.txt
  11. mvn-phase-script.sh
  12. pom.xml
  13. README.md
  14. version.properties
README.md

DCAE Blueprints and Bootstrap Container

This repository holds the source code needed to build the Docker image for the DCAE bootstrap container. The bootstrap container runs at DCAE deployment time (via a Helm chart) and does initial setup of the DCAE environment.

This repository also holds Cloudify blueprints for service components. The Docker build process copies these blueprints into the Docker image for the bootstrap container.

Note: Prior to the Frankfurt release (R6), this repository held blueprint templates for components deployed using Cloudify Manager. The build process for this repository expanded the templates and pushed them to the Nexus raw repository. At that time the DCAE bootstrap container was hosted in the dcaegen2.deployments repository, and the Docker build process for the bootstrap container image pulled the blueprints it needed from the Nexus raw repository.

DCAE Bootstrap Container

This container is responsible for loading blueprints onto the DCAE inventory component. It also provides an environment for debugging any issues related to Cloudify deployments, since it has the Cloudify "cfy" command line tool available.
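For example, once inside the container, the cfy CLI can be pointed at the Cloudify Manager identified by CMADDR and used to inspect what has been deployed through it. This is a minimal sketch; the "admin" user and "default_tenant" tenant are assumed defaults, not taken from this repository.

    # Point the cfy CLI at the target Cloudify Manager
    # (the "admin" user and "default_tenant" tenant are assumed defaults)
    cfy profiles use "$CMADDR" -u admin -p "$CMPASS" -t default_tenant

    # Inspect the manager and anything deployed through it
    cfy status
    cfy blueprints list
    cfy deployments list
    cfy executions list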

The Docker image build process loads blueprints into the image's file system. The blueprints are copied from the blueprints directory in this repository. At run time, the main script in the container (bootstrap.sh) uploads the blueprints to the DCAE inventory component.
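A simplified sketch of that upload step is shown below. The inventory address, the /dcae-service-types path, the request fields, and the use of jq are assumptions made for illustration only; they are not taken from bootstrap.sh itself.

    #!/bin/sh
    # Hypothetical sketch: push every blueprint baked into the image to the DCAE inventory.
    # INVENTORY_URL and the payload format are assumptions for illustration only.
    INVENTORY_URL="http://inventory:8080/dcae-service-types"

    for bp in /blueprints/*.yaml; do
      name=$(basename "$bp" .yaml)
      # Wrap the blueprint text in a JSON request body (jq is assumed to be available)
      jq -n --arg name "$name" --rawfile blueprint "$bp" \
         '{typeName: $name, typeVersion: 1, blueprintTemplate: $blueprint, owner: "dcae-bootstrap"}' \
        | curl -sf -X POST -H "Content-Type: application/json" -d @- "$INVENTORY_URL" \
        || echo "upload of $name failed" >&2
    done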

The container expects to be started with two environment variables:

  • CMADDR -- the address of the target Cloudify Manager
  • CMPASS -- the password for Cloudify Manager
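For local experimentation outside of Kubernetes, the container could be started along the following lines; the image name, manager address, and password are placeholders.

    # Run the bootstrap image by hand; image name, address, and password are placeholders.
    docker run --rm \
      -e CMADDR=10.0.0.10 \
      -e CMPASS=admin_password \
      -v "$(pwd)/inputs:/inputs" \
      example.registry.local/dcae-bootstrap:latest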

The container expects to find the input files used when deploying the blueprints in /inputs. The normal method for launching the container is via a Helm chart launched by OOM. That chart creates a Kubernetes ConfigMap containing the input files and mounts it as a volume at /inputs.
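Outside of an OOM installation, a comparable setup could be approximated by hand; the namespace and ConfigMap name below are illustrative and are not taken from the OOM chart.

    # Build a ConfigMap from local input files; names are illustrative only.
    kubectl -n onap create configmap dcae-bootstrap-inputs --from-file=./inputs/

    # The Helm chart then mounts this ConfigMap as a volume at /inputs in the
    # bootstrap pod, so bootstrap.sh finds one input file per blueprint it deploys.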