vlib: introduce DMA infrastructure

This patch introduces DMA infrastructure into vlib. It is well known
that large memory copies drain core resources, and more and more
hardware accelerators are being designed to free cores from this
burden. Some restrictions remain when utilizing hardware accelerators,
however: cross-NUMA throughput drops significantly compared to
same-node transfers, the number of accelerator instances is usually
smaller than the number of cores (and the number of applications may
even exceed the number of cores), and some hardware can share a
virtual address space with the cores while other hardware cannot.

Here we introduce a new DMA infrastructure that can fulfill the
requirements of VPP applications such as session and memif while
dealing with these hardware limitations.

Here is some design background:

  A backend is the abstraction of a resource allocated from a DMA
  device; it supports basic operations such as configuration, DMA copy
  and result query, as sketched below.
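
  A minimal sketch, in C, of what a backend's operation table might
  look like; every name below is illustrative, not the patch's actual
  declaration.

    /* Hypothetical backend operation table; names are assumptions. */
    #include <vlib/vlib.h>

    typedef struct dma_batch dma_batch_t; /* defined by the infrastructure */

    typedef struct
    {
      char *name;                                       /* e.g. "dsa", "sw" */
      int (*config_fn) (vlib_main_t *vm, void *args);   /* configuration */
      int (*copy_fn) (vlib_main_t *vm, dma_batch_t *b); /* DMA copy */
      int (*poll_fn) (vlib_main_t *vm, u32 backend_id); /* result query */
    } dma_backend_ops_t;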

  A config is the abstraction of an application's DMA requirements.
  An application needs to request a unique config index from the DMA
  infrastructure; this unique config index is associated with backend
  resources. Two options, CPU fallback and barrier-before-last, can be
  specified in a config. With CPU fallback enabled, the DMA transfer
  is performed by the CPU when the backend is busy. With
  barrier-before-last enabled, DMA transfer callbacks are delivered in
  order. A possible shape for such a request is sketched below.
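
  A hedged sketch of a config request; the struct layout and the
  function name dma_config_add are assumptions for illustration only.

    /* Hypothetical config request; field and function names assumed. */
    #include <vlib/vlib.h>

    typedef struct dma_batch dma_batch_t;
    typedef void (dma_batch_callback_fn) (vlib_main_t *vm, dma_batch_t *b);

    typedef struct
    {
      u16 max_transfers;          /* descriptors per batch */
      u32 max_transfer_size;      /* bytes per descriptor */
      u8 cpu_fallback : 1;        /* copy with the CPU when backend busy */
      u8 barrier_before_last : 1; /* keep completion callbacks in order */
      dma_batch_callback_fn *callback_fn;
    } dma_config_args_t;

    /* Returns a unique config index bound to backend resources,
       or a negative value if no backend can satisfy the request. */
    int dma_config_add (vlib_main_t *vm, dma_config_args_t *args);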

  Everything a DMA transfer request needs is packed into a DMA batch,
  which contains the pattern of the DMA descriptors and the function
  pointers for submission and callback. One DMA transfer request
  consists of multiple batch updates followed by a single batch
  submission, as in the sketch below.
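
  A sketch of the "multiple updates, one submission" flow; the
  functions dma_batch_new, dma_batch_add and dma_batch_submit are
  hypothetical stand-ins for the real API.

    /* Hypothetical batch API; prototypes shown only for the sketch. */
    #include <vlib/vlib.h>

    typedef struct dma_batch dma_batch_t;

    dma_batch_t *dma_batch_new (vlib_main_t *vm, int config_index);
    void dma_batch_add (vlib_main_t *vm, dma_batch_t *b, void *dst,
                        void *src, u32 size);
    int dma_batch_submit (vlib_main_t *vm, dma_batch_t *b);

    static void
    copy_n_segments (vlib_main_t *vm, int config_index, void **dst,
                     void **src, u32 *len, int n)
    {
      dma_batch_t *b = dma_batch_new (vm, config_index);
      for (int i = 0; i < n; i++)
        dma_batch_add (vm, b, dst[i], src[i], len[i]); /* batch updates */
      dma_batch_submit (vm, b); /* one submission per transfer request */
    }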

  DMA backends are assigned evenly across a config's worker threads.
  A lock is used for thread safety when the same backend is assigned
  to multiple threads. The backend node checks all pending requests in
  the worker thread and, once a transfer completes, invokes the
  callback with a pointer to the DMA batch. Applications can use the
  cookie in the DMA batch for their own purposes, as sketched below.
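
  A sketch of a completion callback run from the backend polling node;
  the struct layout and the cookie field's width are assumptions.

    /* Hypothetical batch layout; only the cookie matters here. */
    #include <vlib/vlib.h>

    typedef struct dma_batch
    {
      u64 cookie; /* opaque value owned by the application */
      /* descriptor pattern, submission/callback function pointers, ... */
    } dma_batch_t;

    static void
    my_dma_done (vlib_main_t *vm, dma_batch_t *b)
    {
      /* Transfer completed; recover application state via the cookie,
         e.g. an index into a per-thread table of pending work. */
      u32 my_index = (u32) b->cookie;
      (void) my_index; /* application-specific processing goes here */
    }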

DMA architecture:

   +----------+   +----------+           +----------+   +----------+
   | Config1  |   | Config2  |           | Config1  |   | Config2  |
   +----------+   +----------+           +----------+   +----------+
        ||             ||                     ||             ||
   +-------------------------+           +-------------------------+
   |  DMA polling thread A   |           |  DMA polling thread B   |
   +-------------------------+           +-------------------------+
               ||                                     ||
           +----------+                          +----------+
           | Backend1 |                          | Backend2 |
           +----------+                          +----------+

Type: feature

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Change-Id: I1725e0c26687985aac29618c9abe4f5e0de08ebf
README.md

Vector Packet Processing

Introduction

The VPP platform is an extensible framework that provides out-of-the-box production quality switch/router functionality. It is the open source version of Cisco's Vector Packet Processing (VPP) technology: a high performance, packet-processing stack that can run on commodity CPUs.

The benefits of this implementation of VPP are its high performance, proven technology, modularity, flexibility, and rich feature set.

For more information on VPP and its features please visit the FD.io website and What is VPP? pages.

Changes

Details of the changes leading up to this version of VPP can be found under doc/releasenotes.

Directory layout

| Directory name  | Description                                 |
|-----------------|---------------------------------------------|
| build-data      | Build metadata                              |
| build-root      | Build output directory                      |
| docs            | Sphinx Documentation                        |
| dpdk            | DPDK patches and build infrastructure       |
| extras/libmemif | Client library for memif                    |
| src/examples    | VPP example code                            |
| src/plugins     | VPP bundled plugins directory               |
| src/svm         | Shared virtual memory allocation library    |
| src/tests       | Standalone tests (not part of test harness) |
| src/vat         | VPP API test program                        |
| src/vlib        | VPP application library                     |
| src/vlibapi     | VPP API library                             |
| src/vlibmemory  | VPP Memory management                       |
| src/vnet        | VPP networking                              |
| src/vpp         | VPP application                             |
| src/vpp-api     | VPP application API bindings                |
| src/vppinfra    | VPP core library                            |
| src/vpp/api     | Not-yet-relocated API bindings              |
| test            | Unit tests and Python test harness          |

Getting started

In general anyone interested in building, developing or running VPP should consult the VPP wiki for more complete documentation.

In particular, readers are recommended to take a look at [Pulling, Building, Running, Hacking, Pushing](https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code) which provides extensive step-by-step coverage of the topic.

For the impatient, some salient information is distilled below.

Quick-start: On an existing Linux host

To install system dependencies, build VPP and then install it, simply run the build script. This should be performed by a non-privileged user with sudo access from the project base directory:

./extras/vagrant/build.sh

If you want a more fine-grained approach because you intend to do some development work, the Makefile in the root directory of the source tree provides several convenience shortcuts as make targets that may be of interest. To see the available targets run:

make

Quick-start: Vagrant

The directory extras/vagrant contains a Vagrantfile and supporting scripts to bootstrap a working VPP inside a Vagrant-managed virtual machine. This VM can then be used to test concepts with VPP or as a development platform to extend VPP. Some obvious caveats apply when using a VM for VPP since its performance will never match that of bare metal; if your work is timing or performance sensitive, consider using bare metal in addition to, or instead of, the VM.

For this to work you will need a working installation of Vagrant. Instructions for this can be found [on the Setting up Vagrant wiki page](https://wiki.fd.io/view/DEV/Setting_Up_Vagrant).

More information

Several modules provide documentation, see @subpage user_doc for more end-user-oriented information. Also see @subpage dev_doc for developer notes.

Visit the VPP wiki for details on more advanced building strategies and other development notes.