ipsec: perf improvement of ipsec4_input_node using flow cache

Add flow cache support to improve inbound IPv4/IPsec Security Policy
Database (SPD) lookup performance. When the flow cache is enabled in
startup.conf, the linear O(N) SPD search is replaced with an O(1) hash
table lookup.

This patch is the ipsec4_input_node counterpart to
https://gerrit.fd.io/r/c/vpp/+/31694, and shares much of the same code,
theory and mechanism of action.

Details about the flow cache:
  Mechanism:
  1. The first packet of a flow undergoes a linear search in the SPD
     table. Once a policy match is found, a new entry is added to the
     flow cache. From the 2nd packet onwards, the policy lookup happens
     in the flow cache.
  2. The flow cache is implemented as a hash table without collision
     handling. This avoids any logic to age out or recycle old flows
     in the flow cache: whenever a collision occurs, the old entry is
     simply overwritten by the new one. The worst case is when all 256
     packets in a batch collide and fall back to linear search; the
     average and best cases are O(1).
  3. The size of the flow cache is fixed and determined by the number
     of flows to be supported. The default is 1 million flows, but it
     is configurable through a startup.conf option.
  4. Whenever an SPD rule is added/deleted by the control plane, all
     current flow cache entries are invalidated. As the SPD API is
     not mp-safe, the data plane waits for the control plane
     operation to complete.
     Cache invalidation is via an epoch counter that is incremented on
     policy add/del and stored with each entry in the flow cache. If an
     entry's epoch counter does not match the current count, the entry
     is considered stale and we fall back to linear search. A sketch of
     this lookup path is shown below.
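
  The lookup path can be sketched in C as below. This is a minimal
  illustration of the mechanism described above, not the actual VPP
  code: the entry layout, the hash mix and the spd_linear_search()
  helper are assumptions made for the example.

    #include <stdint.h>

    #define FLOW_CACHE_BUCKETS (1 << 22) /* default: 4,194,304 */

    /* Hypothetical cache entry: flow key, matched policy and the
       epoch count captured when the entry was written. */
    typedef struct
    {
      uint32_t src, dst;     /* IPv4 addresses */
      uint16_t sport, dport; /* L4 ports */
      uint8_t proto;
      uint32_t policy_index;
      uint32_t epoch;
    } flow_cache_entry_t;

    static flow_cache_entry_t flow_cache[FLOW_CACHE_BUCKETS];
    static uint32_t spd_epoch; /* bumped on every policy add/del */

    /* The existing O(N) SPD walk, assumed defined elsewhere. */
    extern uint32_t spd_linear_search (uint32_t src, uint32_t dst,
                                       uint16_t sport, uint16_t dport,
                                       uint8_t proto);

    static uint32_t
    flow_hash (uint32_t src, uint32_t dst, uint16_t sport,
               uint16_t dport, uint8_t proto)
    {
      /* Illustrative mix only; not the hash used by VPP. */
      uint64_t h = (uint64_t) src * 0x9e3779b97f4a7c15ULL;
      h ^= ((uint64_t) dst << 16) ^ ((uint64_t) sport << 8);
      h ^= dport ^ proto;
      h ^= h >> 29;
      return (uint32_t) h & (FLOW_CACHE_BUCKETS - 1);
    }

    uint32_t
    spd_lookup (uint32_t src, uint32_t dst, uint16_t sport,
                uint16_t dport, uint8_t proto)
    {
      flow_cache_entry_t *e;
      uint32_t pi;

      e = &flow_cache[flow_hash (src, dst, sport, dport, proto)];

      /* Hit only if the key matches AND the entry belongs to the
         current epoch; a stale epoch means policies have changed
         since the entry was written. */
      if (e->epoch == spd_epoch && e->src == src && e->dst == dst
          && e->sport == sport && e->dport == dport
          && e->proto == proto)
        return e->policy_index;

      /* Miss, collision or stale entry: fall back to the O(N)
         search, then overwrite whatever occupied the slot (no
         chaining, no aging). */
      pi = spd_linear_search (src, dst, sport, dport, proto);
      *e = (flow_cache_entry_t) { src, dst, sport, dport, proto,
                                  pi, spd_epoch };
      return pi;
    }

  Overwriting the slot unconditionally is what keeps the design free
  of aging/recycling logic, at the cost of an occasional re-search
  after a collision.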

  The following configurable options are available through startup.conf
  under the ipsec{} entry (see the example below):
  1. ipv4-inbound-spd-flow-cache on/off - enable the SPD flow cache
     (default off)
  2. ipv4-inbound-spd-hash-buckets %d - set the number of hash buckets
     (default 4,194,304: ~1 million flows with a 25% load factor)
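
  For example, a startup.conf stanza enabling the inbound flow cache
  with the default table size (values shown are only illustrative)
  would look like:

    ipsec {
      ipv4-inbound-spd-flow-cache on
      ipv4-inbound-spd-hash-buckets 4194304
    }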

  Performance with 1 core, 1 ESP tunnel, null-decrypt then bypass,
  94B null-encrypted packets, for different SPD policy match indices:

  SPD Policy index    : 2          10         100        1000
  Throughput          : Mbps/Mbps  Mbps/Mbps  Mbps/Mbps  Mbps/Mbps
  (Baseline/Optimized)
  ARM TX2             : 300/290    230/290    70/290     8.5/290

Type: improvement
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Signed-off-by: mgovind <govindarajan.Mohandoss@arm.com>
Tested-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I8be2ad4715accbb335c38cd933904119db75827b
diff --git a/src/vnet/ipsec/ipsec_spd.h b/src/vnet/ipsec/ipsec_spd.h
index 5bfc6ae..757a1b7 100644
--- a/src/vnet/ipsec/ipsec_spd.h
+++ b/src/vnet/ipsec/ipsec_spd.h
@@ -64,7 +64,8 @@
 
 extern u8 *format_ipsec_spd (u8 * s, va_list * args);
 
-extern u8 *format_ipsec_spd_flow_cache (u8 *s, va_list *args);
+extern u8 *format_ipsec_out_spd_flow_cache (u8 *s, va_list *args);
+extern u8 *format_ipsec_in_spd_flow_cache (u8 *s, va_list *args);
 
 #endif /* __IPSEC_SPD_H__ */