DPDK HQoS: Enable Hierarchical Scheduler in VPP

This commit extends the vpp framework with a new thread type, "hqos-threads", that
runs the Hierarchical Quality of Service (HQoS) scheduler associated with an output
interface.  The HQoS scheduler prioritizes the packets from different users and
ensures sufficient bandwidth for the more important traffic.

At a high level, the HQoS scheduler is a buffer that can temporarily store a
large number of packets. In other words, it is a large collection of queues
organized into a hierarchy of five levels: the port (i.e. the physical
interface) is at the root of the hierarchy, followed by the subport (a set
of users), the pipes (individual users), the traffic classes (each with a
strict priority) and, at the leaves, the queues.
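
This five-level hierarchy maps directly onto DPDK's rte_sched library. The
fragment below is only an illustrative sketch of how the root of such a
hierarchy is described; the concrete values (rates, queue sizes, pipe counts)
are assumptions picked for a 10 GbE port, not the defaults installed by this
commit.

#include <rte_sched.h>

/* One pipe profile: per-user token bucket plus per-traffic-class rates. */
static struct rte_sched_pipe_params pipe_profiles[] = {
	{
		.tb_rate = 305175, .tb_size = 1000000,
		.tc_rate = {305175, 305175, 305175, 305175},
		.tc_period = 40,
		.wrr_weights = {1, 1, 1, 1,  1, 1, 1, 1,
				1, 1, 1, 1,  1, 1, 1, 1},
	},
};

/* Port (root of the hierarchy): 1 subport, 4096 pipes per subport,
 * 4 traffic classes x 4 queues = 16 queues per pipe. */
static struct rte_sched_port_params port_params = {
	.name = "hqos_port_0",
	.socket = 0,
	.rate = 1250000000,		/* 10 GbE line rate, bytes/second */
	.mtu = 1522,
	.frame_overhead = 24,
	.n_subports_per_port = 1,
	.n_pipes_per_subport = 4096,
	.qsize = {64, 64, 64, 64},	/* queue size per traffic class */
	.pipe_profiles = pipe_profiles,
	.n_pipe_profiles = 1,
};

static struct rte_sched_port *
hqos_port_create(void)
{
	/* Subports and pipes are attached afterwards with
	 * rte_sched_subport_config() and rte_sched_pipe_config(). */
	return rte_sched_port_config(&port_params);
}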

In each HQoS scheduler, three operations are performed: classification
(setting the HQoS port, subport, pipe, traffic class and queue within the
traffic class from packet fields), enqueue (selecting the HQoS queue for the
packet, and dropping the packet if that queue is full) and dequeue (scheduling
the packet based on its length and the available credits, and handing the
scheduled packet over to the output interface).
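
Expressed against the rte_sched API of the DPDK releases current at the time
of this change, the three operations look roughly like the sketch below. The
way subport, pipe, traffic class and queue are derived from packet fields is
a placeholder assumption here, not the classifier added by this commit.

#include <rte_mbuf.h>
#include <rte_sched.h>

/* Classification: stamp the position in the hierarchy into the mbuf.
 * Fixed values are used purely for illustration. */
static void
hqos_classify(struct rte_mbuf *pkt)
{
	uint32_t subport = 0, pipe = 0, tc = 0, queue = 0;

	rte_sched_port_pkt_write(pkt, subport, pipe, tc, queue,
				 e_RTE_METER_GREEN);
}

/* Enqueue a classified burst, then dequeue whatever the scheduler allows
 * to be sent now.  Packets whose queue is full are dropped by
 * rte_sched_port_enqueue() itself. */
static int
hqos_run_burst(struct rte_sched_port *port,
	       struct rte_mbuf **pkts_in, uint32_t n_in,
	       struct rte_mbuf **pkts_out, uint32_t n_out_max)
{
	uint32_t i;

	for (i = 0; i < n_in; i++)
		hqos_classify(pkts_in[i]);

	rte_sched_port_enqueue(port, pkts_in, n_in);

	/* Dequeue is driven by packet length and the credits available
	 * at each level of the hierarchy. */
	return rte_sched_port_dequeue(port, pkts_out, n_out_max);
}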

In vpp, the number of hqos threads is equal to the number of CPU cores
specified in the corelist-hqos-threads parameter of the cpu section of the
vpp configuration file. One hqos thread can run HQoS for multiple output
interfaces. A particular HQoS instance is initialised with the default
parameters required to configure the hqos port, subport, pipe and queues.
Some of these can be re-configured at run time through CLI commands as well
as binary APIs.
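
Conceptually, each hqos thread sits between the worker threads and the NIC TX
path: workers hand off packets through per-interface software rings, and the
hqos thread runs the scheduler and transmits its output. The loop below is
only a simplified sketch of that flow (the ring and structure names are
invented for illustration), not the code added by this commit.

#include <rte_mbuf.h>
#include <rte_ring.h>
#include <rte_sched.h>
#include <rte_ethdev.h>

#define BURST_SIZE 32

/* Hypothetical per-interface state owned by one hqos thread. */
struct hqos_if {
	struct rte_ring *swq;		/* filled by the worker threads */
	struct rte_sched_port *sched;	/* HQoS hierarchy for this port */
	uint8_t port_id;		/* DPDK device to transmit on */
	uint16_t queue_id;		/* TX queue reserved for HQoS */
};

static void
hqos_thread_loop(struct hqos_if *ifs, int n_ifs)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	int i;

	for (;;) {
		for (i = 0; i < n_ifs; i++) {
			struct hqos_if *hif = &ifs[i];

			/* Pull a burst handed over by the workers and push
			 * it into the scheduler (enqueue side). */
			unsigned n = rte_ring_sc_dequeue_burst(
				hif->swq, (void **)pkts, BURST_SIZE);
			if (n > 0)
				rte_sched_port_enqueue(hif->sched, pkts, n);

			/* Ask the scheduler what may be sent now (dequeue
			 * side) and transmit it; real code must also retry
			 * or free packets that rte_eth_tx_burst() rejects. */
			int n_tx = rte_sched_port_dequeue(hif->sched, pkts,
							  BURST_SIZE);
			if (n_tx > 0)
				rte_eth_tx_burst(hif->port_id, hif->queue_id,
						 pkts, (uint16_t)n_tx);
		}
	}
}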

The following illustrates a sample startup configuration file with 4x worker
threads feeding 2x hqos threads, each handling HQoS for 1x output interface.
For more details on HQoS configuration please refer to the DPDK Programmer's
Guide.

dpdk {
	socket-mem 16384,16384

	dev 0000:02:00.0 {
		num-rx-queues 2
		hqos
	}
	dev 0000:06:00.0 {
		num-rx-queues 2
		hqos
	}

	num-mbufs 1000000
}

cpu {
  main-core 0
  corelist-workers  1, 2, 3, 4
  corelist-hqos-threads  5, 6
}

Change-Id: I635c3395a7c4ddf0a239ef77b0b0a31a6dfc4767
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>

README.md

Vector Packet Processing

Introduction

The VPP platform is an extensible framework that provides out-of-the-box production quality switch/router functionality. It is the open source version of Cisco's Vector Packet Processing (VPP) technology: a high performance, packet-processing stack that can run on commodity CPUs.

The benefits of this implementation of VPP are its high performance, proven technology, its modularity and flexibility, and its rich feature set.

For more information on VPP and its features please visit the FD.io website and What is VPP? pages.

Changes

Details of the changes leading up to this version of VPP can be found under @ref release_notes.

Directory layout

| Directory name | Description |
| -------------- | ----------- |
| build-data | Build metadata |
| build-root | Build output directory |
| doxygen | Documentation generator configuration |
| dpdk | DPDK patches and build infrastructure |
| g2 | Event log visualization tool |
| perftool | Performance tool |
| @ref plugins | VPP bundled plugins directory |
| @ref svm | Shared virtual memory allocation library |
| test | Unit tests |
| @ref vlib | VPP application library source |
| @ref vlib-api | VPP API library source |
| @ref vnet | VPP networking source |
| @ref vpp | VPP application source |
| @ref vpp-api | VPP application API source |
| vppapigen | VPP API generator source |
| vpp-api-test | VPP API test program source |
| @ref vppinfra | VPP core library source |

(If the page you are viewing is not generated by Doxygen then ignore any @ref labels in the above table.)

Getting started

In general anyone interested in building, developing or running VPP should consult the VPP wiki for more complete documentation.

In particular, readers are recommended to take a look at [Pulling, Building, Running, Hacking, Pushing](https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code) which provides extensive step-by-step coverage of the topic.

For the impatient, some salient information is distilled below.

Quick-start: On an existing Linux host

To install system dependencies, build VPP and then install it, simply run the build script. This should be performed as a non-privileged user with sudo access from the project base directory:

./build-root/vagrant/build.sh

If you want a more fine-grained approach because you intend to do some development work, the Makefile in the root directory of the source tree provides several convenience shortcuts as make targets that may be of interest. To see the available targets run:

make

Quick-start: Vagrant

The directory build-root/vagrant contains a Vagrantfile and supporting scripts to bootstrap a working VPP inside a Vagrant-managed virtual machine. This VM can then be used to test concepts with VPP or as a development platform to extend VPP. Some obvious caveats apply when using a VM for VPP since its performance will never match that of bare metal; if your work is timing or performance sensitive, consider using bare metal in addition to, or instead of, the VM.

For this to work you will need a working installation of Vagrant. Instructions for this can be found [on the Setting up Vagrant wiki page](https://wiki.fd.io/view/DEV/Setting_Up_Vagrant).

More information

Several modules provide documentation, see @subpage user_doc for more information.

Visit the VPP wiki for details on more advanced building strategies and development notes.