.. _binary_api_support:

.. toctree::

Binary API Support
==================

VPP provides a binary API scheme to allow a wide variety of client
code to program data-plane tables. As of this writing, there are
hundreds of binary APIs.

Messages are defined in \*.api files. Today, there are about 80 API
files, with more arriving as folks add programmable features. The API
file compiler sources reside in src/tools/vppapigen.

From `src/vnet/interface.api
<https://docs.fd.io/vpp/18.11/de/d75/interface_8api.html>`_, here's a
typical request/response message definition:

.. code-block:: console

    autoreply define sw_interface_set_flags
    {
      u32 client_index;
      u32 context;
      u32 sw_if_index;
      /* 1 = up, 0 = down */
      u8 admin_up_down;
    };

To a first approximation, the API compiler renders this definition
into
*vpp/build-root/install-vpp_debug-native/vpp/include/vnet/interface.api.h*
as follows:

.. code-block:: C

    /****** Message ID / handler enum ******/

    #ifdef vl_msg_id
    vl_msg_id(VL_API_SW_INTERFACE_SET_FLAGS, vl_api_sw_interface_set_flags_t_handler)
    vl_msg_id(VL_API_SW_INTERFACE_SET_FLAGS_REPLY, vl_api_sw_interface_set_flags_reply_t_handler)
    #endif

    /****** Message names ******/

    #ifdef vl_msg_name
    vl_msg_name(vl_api_sw_interface_set_flags_t, 1)
    vl_msg_name(vl_api_sw_interface_set_flags_reply_t, 1)
    #endif

    /****** Message name, crc list ******/

    #ifdef vl_msg_name_crc_list
    #define foreach_vl_msg_name_crc_interface \
    _(VL_API_SW_INTERFACE_SET_FLAGS, sw_interface_set_flags, f890584a) \
    _(VL_API_SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply, dfbf3afa) \
    #endif

    /****** Typedefs *****/

    #ifdef vl_typedefs
    #ifndef defined_sw_interface_set_flags
    #define defined_sw_interface_set_flags
    typedef VL_API_PACKED(struct _vl_api_sw_interface_set_flags {
      u16 _vl_msg_id;
      u32 client_index;
      u32 context;
      u32 sw_if_index;
      u8 admin_up_down;
    }) vl_api_sw_interface_set_flags_t;
    #endif

    #ifndef defined_sw_interface_set_flags_reply
    #define defined_sw_interface_set_flags_reply
    typedef VL_API_PACKED(struct _vl_api_sw_interface_set_flags_reply {
      u16 _vl_msg_id;
      u32 context;
      i32 retval;
    }) vl_api_sw_interface_set_flags_reply_t;
    #endif
    ...
    #endif /* vl_typedefs */

To change the admin state of an interface, a binary API client sends a
`vl_api_sw_interface_set_flags_t
<https://docs.fd.io/vpp/18.11/dc/da3/structvl__api__sw__interface__set__flags__t.html>`_
message to VPP, which responds with a
vl_api_sw_interface_set_flags_reply_t message.

Multiple layers of software, transport types, and shared libraries
implement a variety of features:

* API message allocation, tracing, pretty-printing, and replay.
* Message transport via global shared memory, pairwise/private shared memory, and sockets.
* Barrier synchronization of worker threads across thread-unsafe message handlers.

Correctly-coded message handlers know nothing about the transport used
to deliver messages to/from VPP. It's reasonably straightforward to use
multiple API message transport types simultaneously.

For historical reasons, binary API messages are (putatively) sent in
network byte order. As of this writing, we're seriously considering
whether that choice makes sense.

Message Allocation
__________________

Since binary API messages are always processed in order, we allocate
messages using a ring allocator whenever possible. This scheme is
extremely fast when compared with a traditional memory allocator, and
doesn't cause heap fragmentation. See `vl_msg_api_alloc_internal()
<https://docs.fd.io/vpp/18.11/dd/d0d/memory__shared_8c.html#ac6b6797850e1a53bc68b206e6b8413fb>`_
in `src/vlibmemory/memory_shared.c
<https://docs.fd.io/vpp/18.11/dd/d0d/memory__shared_8c.html>`_.

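A minimal allocation sketch, using the message type shown earlier
(whether the buffer is then handed to VPP or freed locally is up to
the caller):

.. code-block:: C

    /* Allocate a message buffer from the shared-memory message ring */
    vl_api_sw_interface_set_flags_t *mp = vl_msg_api_alloc (sizeof (*mp));

    /* ... fill in the message ... */

    /* A message which is not handed off to VPP must be freed explicitly */
    vl_msg_api_free (mp);
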
Regardless of transport, binary API messages always follow a `msgbuf_t <https://docs.fd.io/vpp/18.11/d9/d65/structmsgbuf__.html>`_ header:

.. code-block:: C

    /** Message header structure */
    typedef struct msgbuf_
    {
      svm_queue_t *q; /**< message allocated in this shmem ring */
      u32 data_len; /**< message length not including header */
      u32 gc_mark_timestamp; /**< message garbage collector mark TS */
      u8 data[0]; /**< actual message begins here */
    } msgbuf_t;

This structure makes it easy to trace messages without having to
decode them - simply save data_len bytes - and allows
`vl_msg_api_free()
<https://docs.fd.io/vpp/18.11/d6/d1b/api__common_8h.html#aff61e777fe5df789121d8e78134867e6>`_
to rapidly dispose of message buffers:

.. code-block:: C

    void
    vl_msg_api_free (void *a)
    {
      msgbuf_t *rv;
      void *oldheap;
      api_main_t *am = &api_main;

      rv = (msgbuf_t *) (((u8 *) a) - offsetof (msgbuf_t, data));

      /*
       * Here's the beauty of the scheme. Only one proc/thread has
       * control of a given message buffer. To free a buffer, we just clear the
       * queue field, and leave. No locks, no hits, no errors...
       */
      if (rv->q)
        {
          rv->q = 0;
          rv->gc_mark_timestamp = 0;
          <more code...>
          return;
        }
      <more code...>
    }

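As an illustration, here is a hypothetical helper (not part of the VPP
tree) which copies one message into a caller-provided buffer using
nothing but the msgbuf_t header:

.. code-block:: C

    /* Hypothetical helper: 'msg' points at the message body, i.e. at
     * msgbuf_t.data[0], exactly as handlers and vl_msg_api_free() see it. */
    static u32
    save_msg_copy (u8 * dst, void *msg)
    {
      msgbuf_t *header = (msgbuf_t *) (((u8 *) msg) - offsetof (msgbuf_t, data));
      /* data_len is kept in network byte order */
      u32 nbytes = clib_net_to_host_u32 (header->data_len);

      /* No decoding required: simply save data_len bytes */
      clib_memcpy (dst, msg, nbytes);
      return nbytes;
    }
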
Message Tracing and Replay
__________________________

It's extremely important that VPP can capture and replay sizeable
binary API traces. System-level issues involving hundreds of thousands
of API transactions can be re-run in a second or less. Partial replay
allows one to binary-search for the point where the wheels fall
off. One can add scaffolding to the data plane, to trigger when
complex conditions obtain.

With binary API trace, print, and replay, system-level bug reports of
the form "after 300,000 API transactions, the VPP data-plane stopped
forwarding traffic, FIX IT!" can be solved offline.

More often than not, one discovers that a control-plane client
misprograms the data plane after a long time or under complex
circumstances. Without direct evidence, "it's a data-plane problem!"

See `vl_msg_api_process_file()
<https://docs.fd.io/vpp/18.11/d0/d5b/vlib__api__cli_8c.html#a60194e3e91c0dc6a75906ea06f4ec113>`_
in `src/vlibmemory/memory_vlib.c
<https://docs.fd.io/vpp/18.11/dd/d3e/vpp__get__metrics_8c.html#a7c3855ed3c45b48ff92a7e881bfede73>`_,
and `src/vlibapi/api_shared.c
<https://docs.fd.io/vpp/18.11/d6/dd1/api__shared_8c.html>`_. See also
the debug CLI command "api trace".

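For example, a capture/replay session from the debug CLI looks roughly
like this (subcommand spellings may vary between VPP versions; "save"
writes under /tmp):

.. code-block:: console

    DBGvpp# api trace save my_api_trace
    DBGvpp# api trace custom-dump /tmp/my_api_trace
    DBGvpp# api trace replay /tmp/my_api_trace
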
API trace replay caveats
________________________

The vpp instance which replays a binary API trace must have the same
message-ID numbering space as the vpp instance which captured the
trace. The replay instance **must** load the same set of plugins as
the capture instance. Otherwise, API messages will be processed by the
**wrong** API message handlers!

Always start vpp with command-line arguments which include an
"api-trace on" stanza, so vpp will start tracing binary API messages
from the beginning:

.. code-block:: console

    api-trace {
      on
    }

Given a binary API trace in /tmp/api_trace, do the following to work
out the set of plugins:

.. code-block:: console

    DBGvpp# api trace custom-dump /tmp/api_trace
    vl_api_trace_plugin_msg_ids: abf_54307ba2 first 846 last 855
    vl_api_trace_plugin_msg_ids: acl_0d7265b0 first 856 last 893
    vl_api_trace_plugin_msg_ids: cdp_8f707b96 first 894 last 895
    vl_api_trace_plugin_msg_ids: flowprobe_f2f0286c first 898 last 901
    <etc>

Here, we see the "abf," "acl," "cdp," and "flowprobe" plugins. Use the
list of plugins to construct a matching "plugins" command-line argument
stanza:

.. code-block:: console

    plugins {
      ## Disable all plugins, selectively enable specific plugins
      plugin default { disable }
      plugin abf_plugin.so { enable }
      plugin acl_plugin.so { enable }
      plugin cdp_plugin.so { enable }
      plugin flowprobe_plugin.so { enable }
    }

To begin with, use the same vpp image that captured a trace to replay
it. It's perfectly fair to rebuild the vpp replay instance, to add
scaffolding to facilitate setting gdb breakpoints on complex
conditions or similar.

API trace interface issues
__________________________

Along the same lines, it may be necessary to manufacture [simulated]
physical interfaces so that an API trace will replay correctly. "show
interface" on the trace origin system can help. An API trace
"custom-dump" as shown above may make it obvious how many loopback
interfaces to create. If you see vhost interfaces being created and
then configured, the first such configuration message in the trace
will tell you how many physical interfaces were involved.

.. code-block:: console

    SCRIPT: create_vhost_user_if socket /tmp/foosock server
    SCRIPT: sw_interface_set_flags sw_if_index 3 admin-up

In this case, it's fair to guess that one needs to create two loopback
interfaces to "help" the trace replay correctly.

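For example, the placeholder interfaces can be created from the debug
CLI before replaying the trace (repeat the command once per missing
interface):

.. code-block:: console

    DBGvpp# create loopback interface
    DBGvpp# create loopback interface
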
These issues can be mitigated to a certain extent by replaying the
trace on the system which created it, but in a field debug case that's
not realistic.

Client connection details
_________________________

Establishing a binary API connection to VPP from a C-language client is easy:

.. code-block:: C

    int
    connect_to_vpe (char *client_name, int client_message_queue_length)
    {
      vat_main_t *vam = &vat_main;
      api_main_t *am = &api_main;
      if (vl_client_connect_to_vlib ("/vpe-api", client_name,
                                     client_message_queue_length) < 0)
        return -1;
      /* Memorize vpp's binary API message input queue address */
      vam->vl_input_queue = am->shmem_hdr->vl_input_queue;
      /* And our client index */
      vam->my_client_index = am->my_client_index;
      return 0;
    }

32 is a typical value for client_message_queue_length. VPP *cannot*
block when it needs to send an API message to a binary API client. The
VPP-side binary API message handlers are very fast. So, when sending
asynchronous messages, make sure to scrape the binary API rx ring with
some enthusiasm!

**Binary API message RX pthread**

Calling `vl_client_connect_to_vlib
<https://docs.fd.io/vpp/18.11/da/d25/memory__client_8h.html#a6654b42c91be33bfb6a4b4bfd2327920>`_
spins up a binary API message RX pthread:

.. code-block:: C

    static void *
    rx_thread_fn (void *arg)
    {
      svm_queue_t *q;
      memory_client_main_t *mm = &memory_client_main;
      api_main_t *am = &api_main;
      int i;

      q = am->vl_input_queue;

      /* So we can make the rx thread terminate cleanly */
      if (setjmp (mm->rx_thread_jmpbuf) == 0)
        {
          mm->rx_thread_jmpbuf_valid = 1;
          /*
           * Find an unused slot in the per-cpu-mheaps array,
           * and grab it for this thread. We need to be able to
           * push/pop the thread heap without affecting other thread(s).
           */
          if (__os_thread_index == 0)
            {
              for (i = 0; i < ARRAY_LEN (clib_per_cpu_mheaps); i++)
                {
                  if (clib_per_cpu_mheaps[i] == 0)
                    {
                      /* Copy the main thread mheap pointer */
                      clib_per_cpu_mheaps[i] = clib_per_cpu_mheaps[0];
                      __os_thread_index = i;
                      break;
                    }
                }
              ASSERT (__os_thread_index > 0);
            }
          while (1)
            vl_msg_api_queue_handler (q);
        }
      pthread_exit (0);
    }

To handle the binary API message queue yourself, use
`vl_client_connect_to_vlib_no_rx_pthread
<https://docs.fd.io/vpp/18.11/da/d25/memory__client_8h.html#a11b9577297106c57c0783b96ab190c36>`_.

**Queue non-empty signalling**

vl_msg_api_queue_handler(...) uses mutex/condvar signalling to wake
up, process VPP -> client traffic, then sleep. VPP supplies a condvar
broadcast when the VPP -> client API message queue transitions from
empty to nonempty.

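Conceptually, the client-side wait is the familiar mutex/condvar
pattern. A simplified sketch (illustrative only; the real dequeue
logic lives in VPP's svm queue code, svm_queue_sub()):

.. code-block:: C

    /* Simplified sketch of the wake-up discipline; the parameters stand in
     * for the queue's mutex, condvar, and element count. */
    static void
    wait_for_queue_non_empty (pthread_mutex_t * mutex,
                              pthread_cond_t * condvar, volatile int *cursize)
    {
      pthread_mutex_lock (mutex);
      while (*cursize == 0)
        /* Releases the mutex while sleeping; VPP's broadcast wakes us up */
        pthread_cond_wait (condvar, mutex);
      /* ... dequeue one message here, then ... */
      pthread_mutex_unlock (mutex);
    }
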
VPP checks its own binary API input queue at a very high rate. VPP
invokes message handlers in "process" context [aka cooperative
multitasking thread context] at a variable rate, depending on
data-plane packet processing requirements.

Client disconnection details
____________________________

To disconnect from VPP, call `vl_client_disconnect_from_vlib
<https://docs.fd.io/vpp/18.11/da/d25/memory__client_8h.html#a82c9ba6e7ead8362ae2175eefcf2fd12>`_. Please
arrange to call this function if the client application terminates
abnormally. VPP makes every effort to hold a decent funeral for dead
clients, but VPP can't guarantee to free leaked memory in the shared
binary API segment.

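One simple arrangement in a C client (a sketch; handling fatal signals
is left to the application) is to register an exit hook right after
connecting:

.. code-block:: C

    #include <stdlib.h>

    static void
    disconnect_on_exit (void)
    {
      /* Tear down the shared-memory API connection */
      vl_client_disconnect_from_vlib ();
    }

    /* ... after vl_client_connect_to_vlib() succeeds ... */
    atexit (disconnect_on_exit);
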
Sending binary API messages to VPP
__________________________________

The point of the exercise is to send binary API messages to VPP, and
to receive replies from VPP. Many VPP binary APIs comprise a client
request message, and a simple status reply. For example, to set the
admin status of an interface:

.. code-block:: C

    vl_api_sw_interface_set_flags_t *mp;
    mp = vl_msg_api_alloc (sizeof (*mp));
    memset (mp, 0, sizeof (*mp));
    mp->_vl_msg_id = clib_host_to_net_u16 (VL_API_SW_INTERFACE_SET_FLAGS);
    mp->client_index = api_main.my_client_index;
    mp->sw_if_index = clib_host_to_net_u32 (<interface-sw-if-index>);
    mp->admin_up_down = 1;    /* 1 = up, 0 = down */
    vl_msg_api_send (api_main.shmem_hdr->vl_input_queue, (u8 *)mp);

Key points:

* Use `vl_msg_api_alloc <https://docs.fd.io/vpp/18.11/dc/d5a/memory__shared_8h.html#a109ff1e95ebb2c968d43c100c4a1c55a>`_ to allocate message buffers.
* Allocated message buffers are not initialized, and must be presumed to contain trash.
* Don't forget to set the _vl_msg_id field!
* As of this writing, binary API message IDs and data are sent in network byte order.
* The client-library global data structure `api_main <https://docs.fd.io/vpp/18.11/d6/dd1/api__shared_8c.html#af58e3e46b569573e9622b826b2f47a22>`_ keeps track of the pointers and handles needed to communicate with VPP.

Receiving binary API messages from VPP
______________________________________

Unless you've made other arrangements (see
`vl_client_connect_to_vlib_no_rx_pthread
<https://docs.fd.io/vpp/18.11/da/d25/memory__client_8h.html#a11b9577297106c57c0783b96ab190c36>`_),
*messages are received on a separate rx pthread*. Synchronization with
the client application main thread is the responsibility of the
application!

Set up message handlers about as follows:

.. code-block:: C

    #define vl_typedefs /* define message structures */
    #include <vpp/api/vpe_all_api_h.h>
    #undef vl_typedefs

    /* declare endian-swap functions for each api message */
    #define vl_endianfun /* define endian-swap functions */
    #include <vpp/api/vpe_all_api_h.h>
    #undef vl_endianfun

    /* instantiate all the print functions we know about */
    #define vl_print(handle, ...)
    #define vl_printfun
    #include <vpp/api/vpe_all_api_h.h>
    #undef vl_printfun

    /* Define a list of all messages that the client handles */
    #define foreach_vpe_api_reply_msg                            \
      _(SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply)

    static clib_error_t *
    my_api_hookup (vlib_main_t * vm)
    {
      api_main_t *am = &api_main;
    #define _(N,n)                                               \
        vl_msg_api_set_handlers(VL_API_##N, #n,                  \
                               vl_api_##n##_t_handler,           \
                               vl_noop_handler,                  \
                               vl_api_##n##_t_endian,            \
                               vl_api_##n##_t_print,             \
                               sizeof(vl_api_##n##_t), 1);
      foreach_vpe_api_reply_msg;
    #undef _
      return 0;
    }

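A matching reply handler might look like this (a minimal sketch; it
runs on the client rx pthread, and what to do with retval is
application-specific):

.. code-block:: C

    static void
    vl_api_sw_interface_set_flags_reply_t_handler
      (vl_api_sw_interface_set_flags_reply_t * mp)
    {
      i32 retval = clib_net_to_host_u32 (mp->retval);

      if (retval != 0)
        clib_warning ("sw_interface_set_flags failed: %d", retval);
    }
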
The key API used to establish message handlers is
`vl_msg_api_set_handlers
<https://docs.fd.io/vpp/18.11/d6/dd1/api__shared_8c.html#aa8a8e1f3876ec1a02f283c1862ecdb7a>`_,
which sets values in multiple parallel vectors in the `api_main_t
<https://docs.fd.io/vpp/18.11/dd/db2/structapi__main__t.html>`_
structure. As of this writing, not all vector element values can be
set through the API. You'll see sporadic API message registrations
followed by minor adjustments of this form:

.. code-block:: C

    /*
     * Thread-safe API messages
     */
    am->is_mp_safe[VL_API_IP_ADD_DEL_ROUTE] = 1;
    am->is_mp_safe[VL_API_GET_NODE_GRAPH] = 1;

API message numbering in plugins
________________________________

Binary API message numbering in plugins relies on vpp to issue a block
of message IDs for the plugin to use:

.. code-block:: C

    static clib_error_t *
    my_init (vlib_main_t * vm)
    {
      my_main_t *mm = &my_main;
      u8 *name;

      name = format (0, "myplugin_%08x%c", api_version, 0);

      /* Ask for a correctly-sized block of API message decode slots */
      mm->msg_id_base = vl_msg_api_get_msg_ids
        ((char *) name, VL_MSG_FIRST_AVAILABLE);

      return 0;
    }

Control-plane code uses the vl_client_get_first_plugin_msg_id(...) API
to recover the message-ID block base:

.. code-block:: C

    /* Ask the vpp engine for the first assigned message-id */
    name = format (0, "myplugin_%08x%c", api_version, 0);
    sm->msg_id_base = vl_client_get_first_plugin_msg_id ((char *) name);

It's a fairly common error to forget to add msg_id_base when
registering message handlers, or when sending messages. Using macros
from .../src/vlibapi/api_helper_macros.h can automate the process, but
remember to #define REPLY_MSG_ID_BASE before #including the file:

.. code-block:: C

    #define REPLY_MSG_ID_BASE mm->msg_id_base
    #include <vlibapi/api_helper_macros.h>
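
For example, when a plugin's control-plane client builds a request, the
block base must be added to the plugin-relative message ID. A sketch,
with a hypothetical message name and main structure:

.. code-block:: C

    /* VL_API_MYPLUGIN_ENABLE_DISABLE is a hypothetical plugin message;
     * mm->msg_id_base holds the block base recovered above */
    vl_api_myplugin_enable_disable_t *mp;
    mp = vl_msg_api_alloc (sizeof (*mp));
    memset (mp, 0, sizeof (*mp));
    mp->_vl_msg_id =
      clib_host_to_net_u16 (VL_API_MYPLUGIN_ENABLE_DISABLE + mm->msg_id_base);
    mp->client_index = api_main.my_client_index;
    vl_msg_api_send (api_main.shmem_hdr->vl_input_queue, (u8 *) mp);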