commit 67f935ec6eb9ec37b7d73029c5afa89cbf4a9aa2
author:    Steven Luong <sluong@cisco.com>   Fri Feb 01 10:23:56 2019 -0800
committer: Damjan Marion <dmarion@me.com>    Thu Feb 21 22:03:40 2019 +0000
tree:      f32c614f93d27697115ffd412eeb1c28819ee7ea
parent:    3be3cd60bc41d095174ea413aa753a7bd9eff73e
vhost: VPP stalls with vhost performing control plane actions

Symptom
-------
With NDR traffic blasting at VPP, bringing up a new VM with a vhost connection to VPP causes packet drops. I am able to recreate this problem easily using a simple setup like this:

    TREX -------- switch -------- VPP

Cause
-----
The packet drops are caused by vhost holding the worker barrier lock for too long in vhost_user_socket_read(). There are quite a few system calls inside that routine. At the end of the routine, it unconditionally calls vhost_user_update_iface_state() for all message types, and vhost_user_update_iface_state() in turn unconditionally calls vhost_user_rx_thread_placement() and vhost_user_tx_thread_placement(). vhost_user_rx_thread_placement() scraps all existing cpu/queue mappings for the interface and creates brand new ones. This process is very disruptive and very expensive. In my opinion, this area of code needs a makeover.

Fixes
-----
* vhost_user_socket_read() is rewritten so that it does not hold the worker barrier lock across system calls, or at least minimizes the need to do so.
* Remove the unconditional call to vhost_user_update_iface_state() at the end of vhost_user_socket_read(). Only a couple of message types really need it, so the call is made just for those message types.
* Remove vhost_user_rx_thread_placement() and vhost_user_tx_thread_placement() from vhost_user_update_iface_state(). There is no need to repeatedly change the cpu/queue mappings.
* vhost_user_rx_thread_placement() is actually quite expensive. It should be called only once per queue for the interface. There is no need to scrap the existing cpu/queue mappings and create new ones when additional queues become active/enabled.
* Create the cpu/queue mappings for the first RX queue when the interface is created. Don't remove the cpu/queue mappings when the interface is disconnected; remove them only when the interface is deleted.

The create vhost user interface CLI also makes some very expensive system calls when the command is entered with the optional keyword "server". As a bonus, this patch makes the create vhost user interface binary API and CLI thread safe by protecting the small amount of code that is thread unsafe.

Change-Id: I4a19cbf7e9cc37ea01286169882e5603e6d7eb77
Signed-off-by: Steven Luong <sluong@cisco.com>
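The first fix above is a general locking pattern rather than anything vhost-specific: perform the blocking socket I/O and other system calls before stopping the workers, then hold the worker barrier only for the brief shared-state update. Below is a minimal C sketch of that pattern, not the actual patch; it builds only inside the VPP source tree, and `vhost_msg_t`, `parse_vhost_msg()`, and `apply_iface_state()` are hypothetical stand-ins for the real message structures and handlers.

```c
#include <unistd.h>	/* read() */
#include <vlib/vlib.h>	/* vlib_worker_thread_barrier_sync/release */

typedef struct
{
  int type;	/* hypothetical message header */
} vhost_msg_t;

/* Hypothetical: read one control-plane message off the socket.
 * The system call runs with NO barrier held, so worker threads
 * keep forwarding packets while we block here. */
static int
parse_vhost_msg (int fd, vhost_msg_t * msg)
{
  return read (fd, msg, sizeof (*msg)) == sizeof (*msg) ? 0 : -1;
}

/* Hypothetical: mutate per-interface state shared with the workers. */
static void
apply_iface_state (vhost_msg_t * msg)
{
  (void) msg;
}

void
vhost_socket_read_sketch (int fd)
{
  vlib_main_t *vm = vlib_get_main ();
  vhost_msg_t msg;

  /* 1. Do the expensive, blocking work without the barrier. */
  if (parse_vhost_msg (fd, &msg) < 0)
    return;

  /* 2. Stop the workers only for the short critical section. */
  vlib_worker_thread_barrier_sync (vm);
  apply_iface_state (&msg);
  vlib_worker_thread_barrier_release (vm);
}
```

With the barrier held only across `apply_iface_state()`, the workers stall for the duration of a small state update rather than for a chain of system calls.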
The VPP platform is an extensible framework that provides out-of-the-box production quality switch/router functionality. It is the open source version of Cisco's Vector Packet Processing (VPP) technology: a high performance, packet-processing stack that can run on commodity CPUs.
The benefits of this implementation of VPP are its high performance, proven technology, modularity, flexibility, and rich feature set.
For more information on VPP and its features please visit the FD.io website and What is VPP? pages.
Details of the changes leading up to this version of VPP can be found under @ref release_notes.
Directory name | Description |
---|---|
build-data | Build metadata |
build-root | Build output directory |
doxygen | Documentation generator configuration |
dpdk | DPDK patches and build infrastructure |
@ref extras/libmemif | Client library for memif |
@ref src/examples | VPP example code |
@ref src/plugins | VPP bundled plugins directory |
@ref src/svm | Shared virtual memory allocation library |
src/tests | Standalone tests (not part of test harness) |
src/vat | VPP API test program |
@ref src/vlib | VPP application library |
@ref src/vlibapi | VPP API library |
@ref src/vlibmemory | VPP Memory management |
@ref src/vnet | VPP networking |
@ref src/vpp | VPP application |
@ref src/vpp-api | VPP application API bindings |
@ref src/vppinfra | VPP core library |
@ref src/vpp/api | Not-yet-relocated API bindings |
test | Unit tests and Python test harness |
In general anyone interested in building, developing or running VPP should consult the VPP wiki for more complete documentation.
In particular, readers are recommended to take a look at [Pulling, Building, Running, Hacking, Pushing](https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code), which provides extensive step-by-step coverage of the topic.
For the impatient, some salient information is distilled below.
To install system dependencies, build VPP and then install it, simply run the build script. This should be performed by a non-privileged user with `sudo` access from the project base directory:
    ./extras/vagrant/build.sh
If you want a more fine-grained approach because you intend to do some development work, the `Makefile` in the root directory of the source tree provides several convenience shortcuts as `make` targets that may be of interest. To see the available targets run:

    make
The directory `extras/vagrant` contains a `Vagrantfile` and supporting scripts to bootstrap a working VPP inside a Vagrant-managed virtual machine. This VM can then be used to test concepts with VPP or as a development platform to extend VPP. Some obvious caveats apply when using a VM for VPP since its performance will never match that of bare metal; if your work is timing- or performance-sensitive, consider using bare metal in addition to, or instead of, the VM.
For this to work you will need a working installation of Vagrant. Instructions for this can be found on the [Setting up Vagrant](https://wiki.fd.io/view/DEV/Setting_Up_Vagrant) wiki page.
Several modules provide documentation, see @subpage user_doc for more end-user-oriented information. Also see @subpage dev_doc for developer notes.
Visit the VPP wiki for details on more advanced building strategies and other development notes.
There is PyDoc-generated documentation available for the VPP test framework. See @ref test_framework_doc for details.