Last update: 2020-03-09
Note for Frankfurt Release (R6): The proxy server for remote sites relies on having access from the remote site to the config-binding-service server at the central site. Prior to R6, we accomplished this by configuring a NodePort service on the central site exposing the config-binding-service http port (10000) and https port (10443). In R6, by default, we configure a ClusterIP service for config-binding-service. This prevents these ports from being exposed outside the central site Kubernetes cluster.
In addition, R6 changed how components get certificates for TLS. In prior releases, components that needed a certificate (a server certificate, or just a CA certificate used to validate servers) got the certificate from an init container (org.onap.dcaegen2.deployments.tls-init-container, version 1.0.3) that has the certificates "baked in" to the container image. In R6, the init container (org.onap.dcaegen2.deployments.tls-init-container, version 2.1.0) executes code that pulls a certificate from AAF. This will not work from a remote site because the necessary AAF services are not exposed there. We expect that work will be done in R7 to remedy this.
In the meantime, to use a remote site, it will be necessary to deploy DCAE at the central site with these changes:
1. Override dcaegen2.dcae-config-binding-service.service.type, setting it to "NodePort" instead of the default "ClusterIP".
2. Override global.tlsImage, setting it to "onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.3". This uses the container with the "baked in" certificates (see the override sketch after this list).
3. Make sure all blueprints import "https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/R6/k8splugin/1.7.2/k8splugin_types.yaml", i.e., they need to use version 1.7.2 of the k8s plugin. (The blueprints loaded into inventory at deployment time currently meet this requirement.)
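As an illustration only, items 1 and 2 can be collected into a single Helm override file that is passed (with -f) when deploying ONAP/DCAE at the central site. This is a minimal sketch, assuming the key paths above match your deployment's chart structure; the file name r6-remote-overrides.yaml is arbitrary:

# r6-remote-overrides.yaml (hypothetical name) -- apply when deploying at the central site
dcaegen2:
  dcae-config-binding-service:
    service:
      type: NodePort          # expose config-binding-service outside the central cluster
global:
  tlsImage: "onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.3"   # container with "baked in" certificates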
We expect significant changes to multi-site support in R7.
Note that as of this update, there has been no testing of multi-site support in R6.
Beginning with the ONAP Dublin release, DCAE allows for deploying data collection and analytics components into a remote site--specifically, into a Kubernetes cluster other than the central site cluster where the main ONAP and DCAE platform components are deployed. A proxy server is deployed into each remote cluster to allow components running in the remote cluster to access DCAE platform components in the central site. DCAE components running in a remote site can address platform components at the central site as if the platform components were running in the remote site.
A presentation describing DCAE support for remote sites in the Dublin release can be found on the ONAP Developer Wiki.
This repository contains a Helm chart that deploys and configures a proxy server into a Kubernetes cluster and creates Kubernetes Services that route traffic through the proxy to DCAE platform components (specifically, the Consul server, the config binding service, the logstash service, the DMaaP message router server, and the DMaaP data router server). The exact set of services and the port mappings are controlled by the values.yaml file in this chart.
In order to use the chart in this repo to deploy a proxy server into a remote Kubernetes cluster:

- The remote cluster must have the Helm server (tiller) installed on it. Nothing else should be installed on the remote cluster.
- The helm client command line must be installed on a machine (the "installation machine") that can connect to the new remote cluster, with the helm client configured to use the cluster information for the new remote cluster.
- The ONAP common chart, version 4.x, must be available on a local Helm repository running on the installation machine from which the chart will be deployed. (It is possible to change the requirements.yaml file to specify a different source for the common chart.)
- Run helm dep up in the chart's directory on the local file system of the installation machine (see the sketch after the note below).

Note: These instructions assume that the user is familiar with deploying systems using Helm. Users who have deployed ONAP using Helm will be familiar with these procedures. Using Helm (and Kubernetes) with multiple clusters might be unfamiliar to some users. These instructions assume that users rely on contexts defined in their local kubeconfig file to manage access to multiple Kubernetes clusters. See this overview for more information.
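A minimal sketch of the repository and dependency steps, assuming Helm 2 (implied by the tiller requirement), that the ONAP common chart has already been packaged into the local repository (for example, by following the OOM chart build instructions), and that this chart has been copied into a local directory named onap-dcae-remote-site (as in the deployment example below):

helm serve &                                # local chart repository at http://127.0.0.1:8879 (Helm 2)
helm repo add local http://127.0.0.1:8879
cd onap-dcae-remote-site                    # directory containing this chart
helm dep up                                 # fetches the common chart into charts/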
The values.yaml file delivered in this repository provides:

1. The list of central site services to be proxied.
2. The external port assignments used to reach those services at the central site.
3. The IP addresses of the central site Kubernetes nodes.
The values.yaml file provides sensible default values for items (1) and (2) above, but for some applications there may be reasons to change them. (For instance, it might be necessary to proxy additional central site services. Or perhaps the external port assignments for central services are different from the standard ONAP assignments.)
The values.yaml file does not provide sensible defaults for item (3), the IP addresses for the central site Kubernetes nodes, because they're different for every ONAP installation. values.yaml supplies a single node with the local loopback address (127.0.0.1), which almost certainly won't work. It's necessary to override the default value when deploying this chart. The property to override is nodeAddresses, which is an array of IP addresses or fully-qualified domain names (if containers at the remote site are configured to use a DNS server that can resolve them). Users of this chart can either modify values.yaml directly or (better) provide an override file. For instance, for an installation where the central site cluster has three nodes at 10.10.10.100, 10.10.10.101, and 10.10.10.102, an override file would look like this:
nodeAddresses:
  - 10.10.10.100
  - 10.10.10.101
  - 10.10.10.102
It's important to remember that nodeAddresses are the addresses of the nodes in the central site cluster, not the addresses of the nodes in the remote cluster being installed. The nodeAddresses are used to configure the proxy so that it can reach the central cluster. If more than one address is supplied, the proxy will distribute requests at random across the addresses.
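One way to gather these addresses is to query the central cluster directly. A sketch, assuming a kubeconfig context named central-site for the central cluster (use ExternalIP instead of InternalIP if the remote site reaches the central nodes via their external addresses):

kubectl --context central-site get nodes \
  -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'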
The helm install command can be used to deploy this chart. The exact form of the command depends on the environment.
For example, assuming:

- the chart is in a local directory named onap-dcae-remote-site,
- the override file with the central site node addresses is named node-addresses.yaml and is in the current working directory,
- access to the remote cluster is configured in the local kubeconfig under the context named site-00,
- the target namespace is onap,
- the Helm release name is dcae-remote-00,

then the following command would deploy the proxy into the remote site:
helm --kube-context site-00 install \
  -n dcae-remote-00 --namespace onap \
  -f ./node-addresses.yaml ./onap-dcae-remote-site
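To confirm that Helm created the release, a quick check (again assuming Helm 2 and the same context and release name):

helm --kube-context site-00 ls dcae-remote-00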
When the chart is installed, it creates the following Kubernetes entities:
A Kubernetes ConfigMap holding the nginx configuration file. The content of the file is sourced from resources/config/nginx.conf in this repository. The ConfigMap will be named helm-release-dcae-remote-site-proxy-configmap, where helm-release is the Helm release name specified in the helm install command.
A Kubernetes ConfigMap holding two proxy configuration files:

- one for proxying http connections, sourced from resources/config/proxies/http-proxies.conf.
- one for proxying stream (tcp and https) connections, sourced from resources/config/proxies/stream-proxies.conf.

The ConfigMap will be named helm-release-dcae-remote-site-proxy-proxies-configmap, where helm-release is the Helm release name specified in the helm install command.
A Kubernetes Deployment, with a single Kubernetes Pod running a container with the nginx proxy server. The Pod mounts the files from the ConfigMaps into the container file system at the locations expected by nginx. The Deployment will be named helm-release-dcae-remote-site-proxy, where helm-release is the Helm release name specified in the helm install command.
A Kubernetes ClusterIP Service for the nginx proxy. The Service will be named dcae-remote-site-proxy.
A collection of Kubernetes ClusterIP Services, one for each service name listed in the proxiedServices array in the values.yaml file. (To allow for service name aliases, each proxied service can specify an array of service names. In the default set of services in values.yaml, consul has two names: consul and consul-server, because, for historical reasons, some components use one name and some use the other.) These services all route to the nginx proxy.
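To illustrate the aliasing described above, a proxied service entry conceptually pairs a list of service names with the port mappings the proxy should use. The sketch below is purely hypothetical: the field names are not taken from this chart, and the proxiedServices section of values.yaml is the authoritative reference for the actual structure:

# hypothetical shape of one proxiedServices entry -- see values.yaml for the real schema
proxiedServices:
  - serviceNames:           # aliases; all of these names resolve to the nginx proxy
      - consul
      - consul-server
    ports:
      - local: 8500         # port exposed by the Service in the remote cluster
        remote: 30270       # illustrative external port on the central site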
The first step in verifying that the remote proxy has been installed correctly is to verify that the expected Kubernetes entities have been created. The kubectl get command can be used to do this. For example, using the same assumptions about the environment as we did for the example deployment command above:

- kubectl --context site-00 -n onap get deployments should show the Kubernetes deployment for the nginx proxy server.
- kubectl --context site-00 -n onap get pods should show a pod running the nginx container.
- kubectl --context site-00 -n onap get configmaps should show the two ConfigMaps.
- kubectl --context site-00 -n onap get services should show the proxy service as well as all of the services from the proxiedServices list in values.yaml.

To check that the proxy is properly relaying requests to services running on the central site, use kubectl exec to launch a shell in the nginx container. Using the same example assumptions as above:
1. Use kubectl --context site-00 -n onap get pods to get the name of the pod running the nginx proxy.
2. Use kubectl --context site-00 -n onap exec -it nginx_pod_name /bin/bash to enter a shell on the nginx container.
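If a single command is preferred, the two steps can be combined. A sketch, assuming the default Deployment name shown above (so the pod name contains dcae-remote-site-proxy):

POD=$(kubectl --context site-00 -n onap get pods -o name | grep dcae-remote-site-proxy | head -1)
kubectl --context site-00 -n onap exec -it "${POD#pod/}" -- /bin/bash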
The container doesn't have the curl and nc commands that we need to check connectivity. To install them, run the following commands from the shell in the container:
apt-get update
apt-get install curl
apt-get install netcat
Check the HTTP and HTTPS services by attempting to access them using curl. Assuming the deployment used the default list of services from values.yaml, use the following commands:
curl -v http://consul:8500/v1/agent/members
curl -v http://consul-server:8500/v1/agent/members
curl -v http://config-binding-service:10000/service_component/k8s-plugin
curl -v -H "X-DMAAP-DR-ON-BEHALF-OF: test" http://dmaap-dr-prov:8080/
curl -vk -H "X-DMAAP-DR-ON-BEHALF-OF: test" https://dmaap-dr-prov:8443/
curl -v http://dmaap-bc:8080/webapi/feeds
curl -vk https://dmaap-bc:8443/webapi/feeds
For all of the above commands, you should see an HTTP response from the server with a status of 200. The exact contents of the response aren't important for the purposes of this test. A response with a status of 502 or 504 indicates a problem with the proxy configuration (check the IP addresses of the cluster nodes) or with network connectivity from the remote site to the central site.
curl -v http://message-router:3904/events/xyz
curl -kv https://message-router:3905/events/xyz
curl -v http://dmaap-dr-node:8080/
curl -kv https://dmaap-dr-node:8443/
These commands should result in an HTTP response from the server with a status code of 404. The body of the response will have an error message. This error is expected. The fact that this type of error is returned indicates that there is connectivity to the central site DMaaP message router servers. A response with a status of 502 or 504 indicates a problem with the proxy configuration or with network connectivity.
Check the non-HTTP/HTTPS services by attempting to establish TCP connections to them using the nc command. Assuming the deployment used the default list of services from values.yaml, there is only one such service to check, log-ls, the logstash service. Use the following command:
nc -v log-ls 5044
The command should give a response like this:
log-ls.onap.svc.cluster.local [private_ip_address] 5044 (?) open

where private_ip_address is the local cluster IP address assigned to the log-ls service.
The command will appear to hang, but it is just waiting for some TCP traffic to be passed across the connection. To exit the command, type control-C.
Error responses may indicate problems with the proxy configuration or with network connectivity from the remote site to the central site.
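When a check fails, the proxy's own logs are often the quickest way to distinguish a configuration problem from a connectivity problem. A sketch, assuming the nginx image follows the common practice of sending access and error logs to stdout/stderr, and using the same context and pod name as above:

kubectl --context site-00 -n onap logs nginx_pod_name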