You have generated an MD-SAL module. Run mvn clean install on this project if you haven't already; this will generate some code from the YANG models.

The purpose of this ODL feature is to support geo-redundancy through a series of ODL-integrated health checks and tools.
On initialization, gr-toolkit expects to find a file named gr-toolkit.properties located in the SDNC_CONFIG directory. The properties file should contain:
akka.conf.location
adm.useSsl
adm.fqdn
adm.healthcheck
adm.port.http
adm.port.ssl
controller.credentials
controller.useSsl
controller.port.http
controller.port.ssl
controller.port.akka
mbean.cluster
mbean.shardManager
mbean.shard.config
The mbean.shard.config property is expected to follow a template like:

    /jolokia/read/org.opendaylight.controller:Category=Shards,name=%s,type=DistributedConfigDatastore

GR Toolkit will use this template with information pulled from the Akka ShardManager MBean.
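A hypothetical gr-toolkit.properties is sketched below. Every value is a deployment-specific placeholder assumption (hosts, ports, credentials, and Jolokia paths), not a set of shipped defaults; only the mbean.shard.config template comes from this document.

    akka.conf.location=/opt/opendaylight/configuration/initial/akka.conf
    adm.useSsl=true
    adm.fqdn=your.admin.portal
    adm.healthcheck=/healthcheck
    adm.port.http=8080
    adm.port.ssl=8443
    controller.credentials=admin:admin
    controller.useSsl=false
    controller.port.http=8181
    controller.port.ssl=8443
    controller.port.akka=2550
    mbean.cluster=/jolokia/read/akka:type=Cluster
    mbean.shardManager=/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore
    mbean.shard.config=/jolokia/read/org.opendaylight.controller:Category=Shards,name=%s,type=DistributedConfigDatastore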
site.identifier

Returns a unique identifier for the site the ODL node resides on.
Input: None
Output:

    {
      "output": {
        "id": "UNIQUE_IDENTIFIER_HERE",
        "status": "200"
      }
    }
admin.health

Returns HEALTHY/FAULTY based on whether a 200 response is received from the Admin Portal's health check page.
Input: None
Output:

    {
      "output": {
        "status": "200",
        "health": "HEALTHY"
      }
    }
database.health

Returns HEALTHY/FAULTY based on whether DbLib can obtain a writable connection from its pool.
Input: None
Output:

    {
      "output": {
        "status": "200",
        "health": "HEALTHY"
      }
    }
cluster.health

Uses Jolokia queries to determine shard health and voting status. In a 3 ODL node configuration, 2 FAULTY nodes constitute a FAULTY site. In a 6 node configuration it is assumed that there are 2 sites consisting of 3 nodes each.
Input: None
Output:

    {
      "output": {
        "site1-health": "HEALTHY",
        "members": [
          {
            "address": "member-3.node",
            "role": "member-3",
            "unreachable": false,
            "voting": true,
            "up": true,
            "replicas": [
              { "shard": "member-3-shard-default-config" }
            ]
          },
          {
            "address": "member-1.node",
            "role": "member-1",
            "unreachable": false,
            "voting": true,
            "up": true,
            "replicas": [
              { "shard": "member-1-shard-default-config" }
            ]
          },
          {
            "address": "member-5.node",
            "role": "member-5",
            "unreachable": false,
            "voting": false,
            "up": true,
            "replicas": [
              { "shard": "member-5-shard-default-config" }
            ]
          },
          {
            "address": "member-2.node",
            "role": "member-2",
            "unreachable": false,
            "leader": [
              { "shard": "member-2-shard-default-config" }
            ],
            "commit-status": [
              { "shard": "member-5-shard-default-config", "delta": 148727 },
              { "shard": "member-4-shard-default-config", "delta": 148869 }
            ],
            "voting": true,
            "up": true,
            "replicas": [
              { "shard": "member-2-shard-default-config" }
            ]
          },
          {
            "address": "member-4.node",
            "role": "member-4",
            "unreachable": false,
            "voting": false,
            "up": true,
            "replicas": [
              { "shard": "member-4-shard-default-config" }
            ]
          },
          {
            "address": "member-6.node",
            "role": "member-6",
            "unreachable": false,
            "voting": false,
            "up": true,
            "replicas": [
              { "shard": "member-6-shard-default-config" }
            ]
          }
        ],
        "status": "200",
        "site2-health": "HEALTHY"
      }
    }
site.health

Aggregates data from Admin Health, Database Health, and Cluster Health and returns a simplified payload containing the health of a site. If any portion of the health check registers as FAULTY (a FAULTY Admin Portal or database, or 2 FAULTY nodes in a 3 ODL node configuration), the entire site is designated FAULTY. In a 6 node configuration these health checks are performed cross-site as well.
Input: None
Output:

    {
      "output": {
        "sites": [
          {
            "id": "SITE_1",
            "role": "ACTIVE",
            "health": "HEALTHY"
          },
          {
            "id": "SITE_2",
            "role": "STANDBY",
            "health": "FAULTY"
          }
        ],
        "status": "200"
      }
    }
halt.akka.traffic

Adds iptables rules to block Akka traffic to/from a specific node on a specified port.
Input:

    {
      "input": {
        "node-info": [
          {
            "node": "your.odl.node",
            "port": "2550"
          }
        ]
      }
    }

Output:

    {
      "output": {
        "status": "200"
      }
    }
resume.akka.traffic

Removes the iptables rules, allowing Akka traffic to/from a specific node on a specified port.
Input:

    {
      "input": {
        "node-info": [
          {
            "node": "your.odl.node",
            "port": "2550"
          }
        ]
      }
    }

Output:

    {
      "output": {
        "status": "200"
      }
    }
failover

Only usable in a 6 ODL node configuration. Determines which site is active/standby, switches voting to the standby site, and isolates the old active site. If backupData=true, an MD-SAL export will be scheduled and backed up to a Nexus server (requires the ccsdk.sli.northbound.daexim-offsite-backup feature).
Input:

    {
      "input": {
        "backupData": "true"
      }
    }

Output:

    {
      "output": {
        "status": "200",
        "message": "Failover complete."
      }
    }