commit | bc0d9ff6727d77668e216aba1c6d6cb753fa2ac3 | |
---|---|---|
author | Steven Luong <sluong@cisco.com> | Mon Mar 23 09:34:59 2020 -0700 |
committer | Steven Luong <sluong@cisco.com> | Mon Apr 27 09:25:32 2020 -0700 |
tree | dc2af469cf255d3d819dd52f1cc6703d708f9728 | |
parent | ba6deb96e923f71aa9387c06000412c3fb1362fa | |
virtio: support virtio 1.1 packed ring in vhost

Virtio 1.1 defines a number of new features. Packed ring is among the most notable and important ones: it combines the used, available, and descriptor rings into one. This patch provides experimental support for packed ring. To avoid regression, when packed ring is configured for the interface, it is branched to a separate RX and TX driver; non-packed ring should continue to perform as it did before. Packed ring is tested using qemu 4.2 and Ubuntu Focal Fossa (kernel 5.4.0-12) on a guest VM which supports packed ring.

To configure VPP with packed ring, just add the optional keyword "packed" when creating the vhost interface. To bring up the guest VM with packed ring, add "packed=on" in the qemu launch command. To facilitate troubleshooting, a "verbose" option was also added to the "show vhost desc" CLI to include displaying the indirect descriptors.

Known qemu reconnect issue - if VPP is restarted, guest VMs also need to be restarted. The problem is that the kernel virtio-net-pci driver keeps track of the previous available and used indices. For virtio 1.0, these indices are in shared memory and qemu can easily copy them to pass to the backend for reconnect. For virtio 1.1, these indices are no longer in shared memory; qemu needs a new mechanism to retrieve them, and it is not currently implemented. So when the protocol reconnects, qemu does not have the correct available and used indices to pass to the backend. As a result, after the reconnect, virtio-net-pci reads the TX ring from the wrong position in the ring, not the same position which the backend is writing. A similar problem also exists on the RX side.

Type: feature
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I5afc50b0bafab5a1de7a6dd10f399db3fafd144c
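For concreteness, a minimal sketch of the workflow follows. The socket path, the "server" flag, and the qemu device options other than packed=on are illustrative assumptions; only the "packed" keyword on the VPP side, "packed=on" on the qemu side, and the "verbose" option of "show vhost desc" come from the commit message above.

    # VPP side (vppctl) - create a vhost-user interface with packed ring;
    # /tmp/sock0 and the "server" flag are assumed for this example.
    create vhost-user socket /tmp/sock0 server packed
    show vhost desc verbose     # "verbose" also displays indirect descriptors

    # Guest side - qemu (>= 4.2) launch fragment; everything except
    # packed=on is a typical, assumed vhost-user setup.
    # (shared guest memory, e.g. memory-backend-file with share=on, omitted)
    qemu-system-x86_64 ... \
      -chardev socket,id=char0,path=/tmp/sock0 \
      -netdev type=vhost-user,id=net0,chardev=char0 \
      -device virtio-net-pci,netdev=net0,packed=on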
The VPP platform is an extensible framework that provides out-of-the-box production quality switch/router functionality. It is the open source version of Cisco's Vector Packet Processing (VPP) technology: a high performance, packet-processing stack that can run on commodity CPUs.
The benefits of this implementation of VPP are its high performance, proven technology, modularity, flexibility, and rich feature set.
For more information on VPP and its features please visit the FD.io website and What is VPP? pages.
Details of the changes leading up to this version of VPP can be found under @ref release_notes.
Directory name | Description |
---|---|
build-data | Build metadata |
build-root | Build output directory |
doxygen | Documentation generator configuration |
dpdk | DPDK patches and build infrastructure |
@ref extras/libmemif | Client library for memif |
@ref src/examples | VPP example code |
@ref src/plugins | VPP bundled plugins directory |
@ref src/svm | Shared virtual memory allocation library |
src/tests | Standalone tests (not part of test harness) |
src/vat | VPP API test program |
@ref src/vlib | VPP application library |
@ref src/vlibapi | VPP API library |
@ref src/vlibmemory | VPP Memory management |
@ref src/vnet | VPP networking |
@ref src/vpp | VPP application |
@ref src/vpp-api | VPP application API bindings |
@ref src/vppinfra | VPP core library |
@ref src/vpp/api | Not-yet-relocated API bindings |
test | Unit tests and Python test harness |
In general, anyone interested in building, developing, or running VPP should consult the VPP wiki for more complete documentation.
In particular, readers are recommended to take a look at [Pulling, Building, Running, Hacking, Pushing](https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code) which provides extensive step-by-step coverage of the topic.
For the impatient, some salient information is distilled below.
To install system dependencies, build VPP and then install it, simply run the build script. This should be performed as a non-privileged user with sudo access from the project base directory:
./extras/vagrant/build.sh
If you want a more fine-grained approach because you intend to do some development work, the Makefile in the root directory of the source tree provides several convenience shortcuts as make targets that may be of interest. To see the available targets run:
make
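For illustration, a few targets that are commonly useful are shown below; the list printed by a bare make in your checkout is authoritative, and individual targets may vary between VPP releases.

    make install-dep      # install build dependencies via the system package manager
    make build            # build a debug image
    make build-release    # build a release image
    make run              # run the debug image interactively
    make test             # run the test harness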
The directory extras/vagrant contains a Vagrantfile and supporting scripts to bootstrap a working VPP inside a Vagrant-managed virtual machine. This VM can then be used to test concepts with VPP or as a development platform to extend VPP. Some obvious caveats apply when using a VM for VPP since its performance will never match that of bare metal; if your work is timing or performance sensitive, consider using bare metal in addition to or instead of the VM.
For this to work you will need a working installation of Vagrant. Instructions for this can be found [on the Setting up Vagrant wiki page](https://wiki.fd.io/view/DEV/Setting_Up_Vagrant).
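As a sketch, the usual workflow looks like the following; these are standard Vagrant commands rather than anything VPP-specific, and provisioning details depend on your local Vagrant provider.

    cd extras/vagrant
    vagrant up        # create and provision the VM using the bootstrap scripts
    vagrant ssh       # log in to the VM to build and run VPP
    vagrant destroy   # tear the VM down when finished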
Several modules provide documentation; see @subpage user_doc for more end-user-oriented information and @subpage dev_doc for developer notes.
Visit the VPP wiki for details on more advanced building strategies and other development notes.
There is PyDoc-generated documentation available for the VPP test framework. See @ref test_framework_doc for details.