commit 85ecc810ca98550a250c74f32244760e459e3f87
author:    Jasvinder Singh <jasvinder.singh@intel.com>  Thu Jul 21 17:02:19 2016 +0100
committer: Damjan Marion <dmarion.lists@gmail.com>      Wed Sep 28 16:37:28 2016 +0000
tree:      5c26a0c4f7f382a89e747696c3bd0130f2ebb81f
parent:    ac8146caf1f474f2c440f2316bbcc2d41245ff35
DPDK HQoS: Enable Hierarchical Scheduler in VPP

This commit extends the VPP framework with a new thread type, "hqos-threads", that runs the Hierarchical Quality of Service (HQoS) scheduler associated with an output interface. The HQoS scheduler prioritizes packets from different users and ensures that the more important traffic receives sufficient bandwidth.

At a high level, the HQoS scheduler is a buffer that can temporarily store a large number of packets. In other words, it is a collection of a large number of queues organized into a hierarchy of 5 levels: the port (i.e. the physical interface) is at the root of the hierarchy, followed by the subport (a set of users), the pipes (individual users), the traffic classes (each with a strict priority) and, at the leaves, the queues.

Each HQoS scheduler performs three operations: classification (setting the HQoS port, subport, pipe, traffic class and queue within the traffic class from packet fields), enqueue (selecting the HQoS queue for the packet, and dropping the packet if the queue is full) and dequeue (scheduling the packet based on its length and available credits, and handing the scheduled packet over to the output interface).

In VPP, the number of hqos threads is equal to the number of CPU cores specified in the corelist-hqos-threads parameter in the cpu section of the VPP configuration file. One hqos thread can run HQoS for multiple output interfaces. A particular HQoS instance is initialised with the default parameters required to configure the HQoS port, subport, pipe and queues. Some of them can be re-configured at run time through CLI commands as well as binary APIs.

The following sample startup configuration file has 4x worker threads feeding 2x hqos threads, each handling HQoS for 1x output interface. For more details on HQoS configuration please refer to the DPDK Programmer's Guide.

    dpdk {
      socket-mem 16384,16384
      dev 0000:02:00.0 {
        num-rx-queues 2
        hqos
      }
      dev 0000:06:00.0 {
        num-rx-queues 2
        hqos
      }
      num-mbufs 1000000
    }

    cpu {
      main-core 0
      corelist-workers 1, 2, 3, 4
      corelist-hqos-threads 5, 6
    }

Change-Id: I635c3395a7c4ddf0a239ef77b0b0a31a6dfc4767
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
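To make the three per-packet operations concrete, the sketch below shows how they map onto DPDK's librte_sched API (rte_sched_port_pkt_write, rte_sched_port_enqueue, rte_sched_port_dequeue), the library underneath this scheduler. It is an illustrative sketch, not code from this commit: the function name and the hard-coded classification values are hypothetical, and the signatures shown match DPDK releases of this era (later releases changed them).

    #include <rte_mbuf.h>
    #include <rte_meter.h>
    #include <rte_sched.h>

    /*
     * Per-burst processing for one HQoS-enabled output interface.
     * 'port' is an rte_sched port already set up via rte_sched_port_config(),
     * rte_sched_subport_config() and rte_sched_pipe_config(); 'pkts' is a
     * burst handed over by the worker threads.  The subport/pipe/tc/queue
     * values below are placeholders; VPP derives them from packet fields.
     */
    static int
    hqos_process_burst (struct rte_sched_port *port,
                        struct rte_mbuf **pkts, uint32_t n_pkts,
                        struct rte_mbuf **tx_pkts, uint32_t tx_burst)
    {
      uint32_t i;

      /* 1. Classification: record subport, pipe, traffic class and queue
         (plus a meter color) in each packet's mbuf. */
      for (i = 0; i < n_pkts; i++)
        rte_sched_port_pkt_write (pkts[i], 0 /* subport */, 0 /* pipe */,
                                  0 /* traffic class */, 0 /* queue */,
                                  e_RTE_METER_GREEN);

      /* 2. Enqueue: move the packets into their HQoS queues; packets whose
         queue is full are dropped by the library. */
      rte_sched_port_enqueue (port, pkts, n_pkts);

      /* 3. Dequeue: the scheduler selects packets according to strict
         traffic-class priority and the credits available to each pipe and
         subport; the result is handed to the interface's TX path. */
      return rte_sched_port_dequeue (port, tx_pkts, tx_burst);
    }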
The VPP platform is an extensible framework that provides out-of-the-box production quality switch/router functionality. It is the open source version of Cisco's Vector Packet Processing (VPP) technology: a high performance, packet-processing stack that can run on commodity CPUs.
The benefits of this implementation of VPP are its high performance, proven technology, modularity, flexibility, and rich feature set.
For more information on VPP and its features, please visit the FD.io website and its "What is VPP?" pages.
Details of the changes leading up to this version of VPP can be found under @ref release_notes.
Directory name | Description |
---|---|
build-data | Build metadata |
build-root | Build output directory |
doxygen | Documentation generator configuration |
dpdk | DPDK patches and build infrastructure |
g2 | Event log visualization tool |
perftool | Performance tool |
@ref plugins | VPP bundled plugins directory |
@ref svm | Shared virtual memory allocation library |
test | Unit tests |
@ref vlib | VPP application library source |
@ref vlib-api | VPP API library source |
@ref vnet | VPP networking source |
@ref vpp | VPP application source |
@ref vpp-api | VPP application API source |
vppapigen | VPP API generator source |
vpp-api-test | VPP API test program source |
@ref vppinfra | VPP core library source |
(If the page you are viewing is not generated by Doxygen then ignore any @@ref labels in the above table.)
In general anyone interested in building, developing or running VPP should consult the VPP wiki for more complete documentation.
In particular, readers are recommended to take a look at [Pulling, Building, Running, Hacking, Pushing](https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code), which provides extensive step-by-step coverage of the topic.
For the impatient, some salient information is distilled below.
To install system dependencies, build VPP and then install it, simply run the build script. This should be performed as a non-privileged user with sudo access from the project base directory:

    ./build-root/vagrant/build.sh
If you want a more fine-grained approach because you intend to do some development work, the Makefile in the root directory of the source tree provides several convenience shortcuts as make targets that may be of interest. To see the available targets, run:

    make
The directory build-root/vagrant contains a Vagrantfile and supporting scripts to bootstrap a working VPP inside a Vagrant-managed virtual machine. This VM can then be used to test concepts with VPP or as a development platform to extend VPP. Some obvious caveats apply when using a VM for VPP since its performance will never match that of bare metal; if your work is timing- or performance-sensitive, consider using bare metal in addition to, or instead of, the VM.

For this to work you will need a working installation of Vagrant. Instructions for this can be found [on the Setting up Vagrant wiki page](https://wiki.fd.io/view/DEV/Setting_Up_Vagrant).
Several modules provide documentation; see @subpage user_doc for more information.
Visit the VPP wiki for details on more advanced building strategies and development notes.