.. _missing:

Missing Functionality
---------------------

A list of functionality that the FIB does not currently provide.


PIC Edge Backup Paths
^^^^^^^^^^^^^^^^^^^^^

FIB supports the concept of path 'preference'. Only paths that have
the best preference contribute to forwarding. Only once all the paths with
the best preference go down do the paths with the next best preference
contribute.

In BGP PIC edge, BGP installs both the primary paths and the backup
paths, with the expectation that the backups are used only once all
the primaries fail; this is the same behaviour that FIB's preference
sets provide.

However, in order to get prefix-independent convergence, one must be
able to modify only the path-list's load-balance map (LBM) to choose
the paths to use. Hence the paths must already be in the map and,
conversely, must be in the fib_entry's load-balance (LB). In other
words, to use backup paths with PIC, the fib_entry's LB must include
the backup paths, and the path-list's LBM must map from the backups to
the primaries.

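As a hedged sketch of the mapping described above (illustrative Python; the structures and names are hypothetical, not VPP's), buckets for paths that are down, or not at the best live preference, are remapped to the live best-preference paths, so the backups sit in the LB but map to the primaries until all the primaries fail:

```python
# Illustrative sketch of a PIC load-balance map (LBM) with backup
# paths. All names and structures here are hypothetical, not VPP's.

def rebuild_lbm(paths, is_up):
    """paths: list of (path_id, preference); lower preference is better.
    Returns a bucket -> path_id map with one bucket per path. Buckets
    for paths that are down, or not at the best live preference, are
    remapped to the live best-preference paths."""
    best = min(pref for pid, pref in paths if is_up[pid])
    live = [pid for pid, pref in paths if is_up[pid] and pref == best]
    lbm = []
    for i, (pid, pref) in enumerate(paths):
        if is_up[pid] and pref == best:
            lbm.append(pid)                  # bucket uses its own path
        else:
            lbm.append(live[i % len(live)])  # remapped; no route rewrite
    return lbm
```

With all paths up, the backup's bucket maps to a primary; once every primary fails, all buckets map to the backup, and only the map was modified.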
This is a change that is reasonably easy with respect to knowing what
to change, but hard to get right and hard to test.


Loop Free Alternate Paths
^^^^^^^^^^^^^^^^^^^^^^^^^

Contrary to the BGP approach for path backups, an IGP could install a
loop free alternate (LFA) path to achieve fast re-route (FRR).

Because of the way LFA paths are calculated by the IGP, an LFA backup
path is always paired with a primary. VPP FIB does not support this
primary-backup pair relationship.

The intent of LFA FRR is/was to get below the magic 50ms mark. To do
this, the expectation is/was that the forwarding graph would need an
object that represents a path's state. This object would be checked
for each packet being sent: if the path is up, the graph for the
primary path (an adjacency, since it's the IGP) is taken; if it's
down, the graph for the backup is taken. When a path goes down, only
this indirection object needs to be updated to affect all routes.
Naturally, the indirection would incur a performance cost, but we know
that there are many performance-convergence trade-offs in a FIB
design.

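The indirection object just described could be sketched as follows (illustrative Python, not VPP code; `PathState` and its fields are hypothetical names):

```python
# Illustrative sketch of the per-path indirection object: many routes
# stack on one shared PathState, which is checked for every packet.

class PathState:
    def __init__(self, primary, backup):
        self.up = True          # flipped when the interface goes down
        self.primary = primary  # forwarding function for the primary adjacency
        self.backup = backup    # forwarding function for the LFA backup

    def forward(self, packet):
        # The per-packet check: this branch is the performance cost
        # that buys O(1) convergence for every route sharing this object.
        return (self.primary if self.up else self.backup)(packet)
```

A single update to the shared state converges every route at once:

```python
state = PathState(lambda p: ("primary", p), lambda p: ("backup", p))
state.forward("pkt")   # taken via the primary adjacency
state.up = False       # one write; all stacked routes now use the backup
state.forward("pkt")   # taken via the backup adjacency
```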
Should VPP's FIB support this feature? It all depends on the
50ms. LFA FRR comes from the era when routers ran on lower-performance
CPUs and an interface going down was an interrupt. VPP typically has
plenty of gas but runs as a user-space process. So, can it update all
routes in under 50ms on a meaty CPU, and can the OS deliver the
interface-down event within the time requirements? I don't have the
answers to either question.


Extranets for Multicast
^^^^^^^^^^^^^^^^^^^^^^^

When a unicast prefix is present in two different tables, it refers
to two different sets of devices. When the prefix is imported, it
refers to the same set of devices. If the set of paths to reach the
prefix is different in the import and export tables, it doesn't
matter, since both refer to the same devices, so either set can be
used. Therefore, FIB's usual source-preference rules can apply: the
'import' source is lower priority.

When a multicast prefix is present in two different tables, it
represents two different flows, referring to two different sets of
receivers. When the prefix is imported, it refers to the same flow and
two different sets of receivers. In other words, the receiver set in
the import table needs to be the superset of the receivers.

There are two ways one might consider doing this: merging the
path-lists, or replicating the packet first into each table.
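
The two options could be sketched like this (illustrative Python with hypothetical structures; a table here is just a mapping from prefix to its receiver list):

```python
# Illustrative sketch of the two multicast-extranet options; the table
# representation is hypothetical, not VPP's.

def merge_path_lists(tables, prefix):
    """Option 1: merge the path-lists into one replication list that
    is the superset of the receivers in every table."""
    receivers = set()
    for table in tables:
        receivers |= set(table[prefix])
    return sorted(receivers)

def replicate_into_tables(tables, prefix, packet):
    """Option 2: replicate the packet first into each table, then let
    each table's own replication list fan it out."""
    copies = []
    for table in tables:
        for receiver in table[prefix]:
            copies.append((receiver, packet))
    return copies
```

Note that option 2, as sketched, delivers duplicate copies to a receiver present in both tables, whereas the merged superset does not; a real implementation would need to account for that.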


Collapsing
^^^^^^^^^^

Read :ref:`fastconvergence`

Collapsing the DPO graph for recursive routes doesn't have to be an
all-or-nothing affair. The easy cases:


- A recursive prefix with only one path and a path-list that is not
  popular could stack directly on the LB of the via entry.
- A recursive prefix with multiple paths and a path-list that is not
  popular could construct a new load-balance using the choices
  present in each bucket of its via entries. The choices in the new
  LB, though, would need to reflect the relative weighting.


The condition of a non-popular path-list means that the LB doesn't
have an LB map, and hence it needs to be updated for convergence to
occur.

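The multi-path easy case could be sketched like this (illustrative Python, hypothetical structures; scaling each path's bucket count by its weight is one possible scheme, not necessarily how VPP would normalise it):

```python
from math import lcm

# Illustrative sketch of collapsing a multi-path recursive load-balance:
# the new LB's buckets come straight from the via entries' buckets, and
# each path's bucket count is scaled so the relative weighting holds.

def collapse_lb(via_lbs, weights):
    """via_lbs: one bucket (choice) list per recursive path.
    weights: the paths' relative weights."""
    size = lcm(*(len(lb) for lb in via_lbs))
    buckets = []
    for lb, weight in zip(via_lbs, weights):
        reps = weight * size // len(lb)  # even spread, weight preserved
        for choice in lb:
            buckets.extend([choice] * reps)
    return buckets
```

For example, a path of weight 1 whose via entry balances over two adjacencies, combined with a path of weight 2 over one adjacency, yields a collapsed LB in which the second path owns two thirds of the buckets.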
The more difficult cases come when the recursive prefix has labels
which need to be stacked on the via entries' choices.

You might also envision a global configuration that always collapses all
chains, which could be used in deployments where convergence is not a
priority.