ONAP Blueprint Enrichment
-------------------------

The ONAP Beijing release includes four functional enhancements in the
areas of manually triggered scaling, change management, and hardware
platform awareness (HPA). These features required significant community
collaboration as they impact multiple ONAP projects. These features are
applicable to any use case; however, to showcase them in a concrete
manner, they have been incorporated into VoLTE and vCPE blueprints.

Manually Triggered Scaling
~~~~~~~~~~~~~~~~~~~~~~~~~~

Scale-out and scale-in are two primary benefits of NFV. Scaling can be
triggered manually (e.g., by a user or OSS/BSS) or automatically via a
policy-driven closed loop. An automatic trigger allows real-time action
without human intervention, reducing costs and improving customer
experience. A manual trigger, on the other hand, is useful to schedule
capacity in anticipation of events such as holiday shopping. An ideal
scaling operation can scale granularly at the virtual function (VF)
level, automate VF configuration tasks, and manage the load-balancer
that may be in front of the VF instances. In addition to run-time, this
capability also affects service design, as VNF descriptors need to be
granular up to the VF level.

The Beijing release provides the initial support for these capabilities.
The community has implemented manually triggered scale-out and scale-in
in combination with a specific VNF manager (sVNFM) and demonstrated this
with the VoLTE blueprint. An operator uses the Usecase UI (UUI) project
to trigger a scaling operation. UUI communicates with the Service
Orchestrator (SO). SO uses the VF-C controller, which in turn instructs
a vendor-provided sVNFM to implement the scale-out action.

We have also demonstrated a manual process to scale out VNFs that uses
the Virtual Infrastructure Deployment (VID) interface, the Service
Orchestrator (SO), and the Application Controller (APPC) acting as a
generic VNF Manager. Currently, the operator triggers the scale-out
action using VID, which requests SO to spin up a new component of the
VNF. SO then builds a ConfigScaleOut request and sends it to APPC over
DMaaP; APPC picks up the request and executes the configuration
scale-out action on the requested VNF.
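
For illustration, the sketch below builds a ConfigScaleOut-style request
and publishes it to a DMaaP topic, roughly as SO does for APPC in this
flow. The DMaaP host, topic name, and message fields are assumptions
that follow the general shape of an APPC LCM request; consult the APPC
LCM API documentation for the authoritative schema.

.. code-block:: python

   # Illustrative only: the DMaaP host, topic, and field names below are
   # assumptions, not the authoritative APPC LCM schema.
   import json
   import uuid
   from datetime import datetime, timezone

   import requests

   DMAAP_URL = "http://dmaap-mr:3904/events/APPC-LCM-READ"  # assumed topic

   def publish_config_scale_out(vnf_id, payload):
       """Publish a ConfigScaleOut-style LCM request onto DMaaP."""
       request_id = str(uuid.uuid4())
       message = {
           "version": "2.0",
           "rpc-name": "config-scale-out",
           "correlation-id": request_id,
           "type": "request",
           "body": {
               "input": {
                   "common-header": {
                       "api-ver": "2.00",
                       "request-id": request_id,
                       "originator-id": "SO",
                       "timestamp": datetime.now(timezone.utc).isoformat(),
                   },
                   "action": "ConfigScaleOut",
                   "action-identifiers": {"vnf-id": vnf_id},
                   # Action-specific payload, carried here as a JSON string.
                   "payload": json.dumps(payload),
               }
           },
       }
       requests.post(DMAAP_URL, json=message, timeout=10).raise_for_status()
       return request_id

   # Example: ask APPC to configure the newly spun-up VF module.
   publish_config_scale_out("vgw-0001", {"vf-module-id": "vfmod-0002"})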

Change Management
~~~~~~~~~~~~~~~~~

NFV will bring with it an era of continuous, incremental changes instead
of periodic step-function software upgrades, in addition to a constant
stream of both PNF and VNF updates and configuration changes. To
automatically deliver these to existing network services, the ONAP
community is creating a framework to implement change management
functionality that is independent of any particular network service or
use case. Ideally, change management provides a consistent interface and
mechanisms to manage complex dependencies, different upgrade mechanisms
(in-place vs. scale-out and replace), A/B testing, conflict checking,
pre- and post-change testing, change scheduling, rollbacks, and traffic
draining, redirection, and load-balancing. These capabilities impact both
design-time and run-time environments.

Over the next several releases, the community will enhance change
management capabilities in ONAP, culminating in a full CI/CD flow.
These capabilities can be applied to any use case; however, specifically
for the Beijing release, the vCPE blueprint has been enriched to execute
a predefined workflow that upgrades the virtual gateway VNF by using
Ansible. An operator invokes an upgrade operation through the VID
interface. VID drives SO, which initiates a sequence of steps such as
VNF lock, pre-check, software upgrade, post-check, and unlock. Since the
virtual gateway is an L3 VNF, the specific operations are carried out by
the SDN-C controller, which runs the pre-check, post-check, and upgrade
steps through Ansible playbooks.
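
As a rough illustration of how such a workflow chains these steps, the
sketch below runs a set of hypothetical Ansible playbooks in the lock,
pre-check, upgrade, post-check, unlock order. The playbook names and the
driver script are assumptions, not the actual SO or SDN-C implementation.

.. code-block:: python

   # Illustrative upgrade sequence; playbook names are assumptions.
   import subprocess
   import sys

   UPGRADE_STEPS = [
       "vnf_lock.yml",        # take the virtual gateway out of service
       "vnf_precheck.yml",    # verify health before the upgrade
       "vnf_upgrade.yml",     # perform the software upgrade
       "vnf_postcheck.yml",   # verify health after the upgrade
       "vnf_unlock.yml",      # return the virtual gateway to service
   ]

   def upgrade_vgw(inventory="hosts.ini"):
       """Run each step as an Ansible playbook, stopping on failure."""
       for playbook in UPGRADE_STEPS:
           result = subprocess.run(
               ["ansible-playbook", "-i", inventory, playbook]
           )
           if result.returncode != 0:
               # A production workflow would trigger a rollback here.
               sys.exit(f"Step {playbook} failed; aborting upgrade")

   if __name__ == "__main__":
       upgrade_vgw()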

Hardware Platform Awareness (HPA)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Many VNFs have specific hardware requirements to achieve their
performance and security goals. These hardware requirements may range
from basic requirements such as number of cores, memory size, and
ephemeral disk size to advanced requirements such as CPU policy (e.g.,
dedicated or shared), NUMA, hugepages (size and number), accelerated
vSwitch (e.g., DPDK), crypto/compression acceleration, SRIOV-NIC, TPM,
SGX, and so on. The Beijing release provides three HPA-related
capabilities:

1. Specification of the VNF hardware platform requirements as a set of
   policies (see the sketch after this list).

2. Discovery of hardware and other platform features supported by cloud
   regions.

3. Selection of the right cloud region and NFV infrastructure flavor by
   matching VNF HPA requirements with the discovered platform
   capabilities.

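
The sketch below (referenced from item 1 above) shows one illustrative
way such requirements could be captured, written as a Python structure
for readability. In ONAP the requirements are expressed as policies
managed by the Policy framework and consumed by OOF; the field names
here are simplified assumptions, not the exact policy schema.

.. code-block:: python

   # Illustrative shape of a VNF HPA requirement set (capability 1).
   # Field names are simplified assumptions, not the exact policy schema.
   VGW_HPA_REQUIREMENTS = {
       "policy-id": "hpa-policy-vgw",
       "resource": "vgw",
       "flavorFeatures": [
           {
               "hpa-feature": "basicCapabilities",
               "mandatory": True,
               "attributes": {"numVirtualCpu": 4, "virtualMemSize": "8 GB"},
           },
           {
               "hpa-feature": "hugePages",
               "mandatory": False,
               "score": 3,  # preferred, but not required
               "attributes": {"memoryPageSize": "2 MB"},
           },
           {
               "hpa-feature": "ovsDpdk",
               "mandatory": True,
               "attributes": {"dataProcessingAccelerationLibrary": "dpdk"},
           },
       ],
   }
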
While this functionality is independent of any particular use case, in
the Beijing release the vCPE use case has been enriched with HPA. An
operator can specify engineering rules for performance-sensitive VNFs
through a set of policies. At run-time, SO relies on the ONAP
Optimization Framework (OOF) to enforce these policies via a
placement/scheduling decision. OOF determines the right compute node
flavors for the VNF by querying the above-defined policies. Once a
homing decision is conveyed to SO, SO executes the appropriate workflow
via the appropriate controller.
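
To make the matching step concrete, here is a minimal sketch under
assumed data shapes: mandatory HPA features act as hard filters, while
optional features contribute to a score used to pick the best remaining
cloud region and flavor. The real matching logic lives inside OOF; this
is an illustration only.

.. code-block:: python

   # Simplified sketch of capability 3: match VNF HPA requirements
   # against the capabilities discovered for each cloud region flavor.
   def flavor_matches(requirements, flavor_capabilities):
       """Return a match score, or None if a mandatory feature is missing."""
       score = 0
       for req in requirements:
           offered = flavor_capabilities.get(req["hpa-feature"])
           if offered is None or not all(
               offered.get(k) == v for k, v in req["attributes"].items()
           ):
               if req["mandatory"]:
                   return None           # hard requirement not satisfied
               continue                  # optional feature not offered
           score += req.get("score", 1)  # reward each satisfied feature
       return score

   def select_flavor(requirements, cloud_regions):
       """Return the (region, flavor) pair with the best match score."""
       best = None
       for region, flavors in cloud_regions.items():
           for flavor, capabilities in flavors.items():
               score = flavor_matches(requirements, capabilities)
               if score is not None and (best is None or score > best[0]):
                   best = (score, region, flavor)
       return best[1:] if best else None

   # Usage with the earlier sketch:
   # select_flavor(VGW_HPA_REQUIREMENTS["flavorFeatures"], discovered_regions)

Mandatory features filter candidates while optional features only rank
them, mirroring the mandatory/score distinction in the requirement
sketch above.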