This repository holds the source code needed to build the Docker image for the DCAE bootstrap container. The bootstrap container runs at DCAE deployment time (via a Helm chart) and does initial setup of the DCAE environment. This includes deploying several service components using Cloudify Manager.
This repository also holds Cloudify blueprints for service components. The Docker build process copies these blueprints into the Docker image for the bootstrap container.
Note: Prior to the Frankfurt release (R6), this repository held blueprint templates for components deployed using Cloudify Manager. The build process for this repository expanded the templates and pushed them to the Nexus raw repository. The DCAE bootstrap container was hosted in the dcaegen2.deployments repository, and the Docker build process for the bootstrap container image pulled the blueprints it needed from the Nexus raw repository.
This container is responsible for loading blueprints onto the DCAE Cloudify Manager instance and for launching DCAE components.
The Docker image build process loads blueprints into the image's file system, copying them from the blueprints directory in this repository. At run time, the main script in the container (bootstrap.sh) installs components using the blueprints.
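To make the run-time flow concrete, the sketch below shows the kind of Cloudify CLI sequence such a script performs. The component name, blueprint and input file paths, and the admin user are illustrative assumptions, not the literal contents of bootstrap.sh; the cfy subcommands themselves are standard Cloudify CLI verbs.

```sh
#!/bin/sh
# Sketch only: component names and file paths are illustrative,
# not taken from bootstrap.sh.

# Point the Cloudify CLI at the target manager, using the environment
# variables the container is started with (described below).
cfy profiles use "${CMADDR}" -u admin -p "${CMPASS}" -t default_tenant

# Upload a blueprint baked into the image, create a deployment from it
# using an input file mounted at /inputs, then run the install workflow.
cfy blueprints upload -b example-component /blueprints/k8s-example-component.yaml
cfy deployments create example-component -b example-component \
    -i /inputs/example-component-inputs.yaml
cfy executions start install -d example-component
```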
The container expects to be started with two environment variables:

- CMADDR -- the address of the target Cloudify Manager
- CMPASS -- the password for Cloudify Manager

The container also expects input files to use when deploying the blueprints. It expects to find them in /inputs. The normal method for launching the container is via a Helm chart launched by OOM. That chart creates a Kubernetes ConfigMap containing the input files, and the ConfigMap is mounted as a volume at /inputs.
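For ad-hoc testing outside of OOM, the same contract can be exercised with a plain docker run. The image name and tag, the manager address, and the password below are placeholders, and the example assumes the input files sit in a local inputs/ directory.

```sh
# Illustrative only: image name/tag, address, and password are placeholders.
docker run --rm \
  -e CMADDR=cloudify-manager.onap.svc.cluster.local \
  -e CMPASS=admin_password \
  -v "$(pwd)/inputs:/inputs" \
  onap/dcae-bootstrap:latest
```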