.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
.. Copyright (C) 2019 Wind River Systems, Inc.


Installation Guide
==================

.. contents::
   :depth: 3
   :local:

Abstract
--------

This document describes how to install the O-RAN INF image, gives an example
configuration for better real-time performance, and walks through an example
deployment of a Kubernetes cluster and its plugins.

The audience of this document is assumed to have basic knowledge of
Yocto/OpenEmbedded Linux and container technology.

Version history

+--------------------+--------------------+--------------------+--------------------+
| **Date**           | **Ver.**           | **Author**         | **Comment**        |
+--------------------+--------------------+--------------------+--------------------+
| 2019-11-02         | 1.0.0              | Jackie Huang       | Initial version    |
+--------------------+--------------------+--------------------+--------------------+


Preface
-------

Before starting the installation and deployment of O-RAN INF, download the ISO
image or build it from source as described in the developer guide.


Hardware Requirements
---------------------

The following minimum hardware requirements must be met to install the O-RAN
INF image:

+--------------------+----------------------------------------------------+
| **HW Aspect**      | **Requirement**                                    |
+--------------------+----------------------------------------------------+
| **# of servers**   | 1                                                  |
+--------------------+----------------------------------------------------+
| **CPU**            | 2 cores                                            |
+--------------------+----------------------------------------------------+
| **RAM**            | 4 GB                                               |
+--------------------+----------------------------------------------------+
| **Disk**           | 20 GB                                              |
+--------------------+----------------------------------------------------+
| **NICs**           | 1                                                  |
+--------------------+----------------------------------------------------+


Software Installation and Deployment
------------------------------------

1. Installation from the O-RAN INF ISO image
````````````````````````````````````````````

- Please see the README.md file for how to build the image.
- The image is a live ISO image with a CLI installer: oran-image-inf-host-intel-x86-64.iso

1.1 Burn the image to a USB device
''''''''''''''''''''''''''''''''''

- Assume the USB device is /dev/sdX here

::

   $ sudo dd if=/path/to/oran-image-inf-host-intel-x86-64.iso of=/dev/sdX bs=1M

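Before booting from the device, it is worth confirming that the write actually succeeded. A minimal sketch of the check, using two throwaway files in place of the real ISO and ``/dev/sdX`` (which stay as placeholders above):

```shell
# Sketch: verify a raw image write by comparing source and destination
# byte-for-byte up to the image size. The two mktemp files stand in for
# the real ISO and /dev/sdX.
iso=$(mktemp)
dev=$(mktemp)
head -c 1048576 /dev/urandom > "$iso"       # fake 1 MiB "ISO"
dd if="$iso" of="$dev" bs=64K 2>/dev/null   # the "burn" step
sync                                        # flush buffers before comparing
cmp -n "$(stat -c %s "$iso")" "$iso" "$dev" && echo "write verified"
```

The ``-n`` limit matters because a real USB stick is larger than the ISO, so only the first image-sized span of the device should be compared.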
1.2 Insert the USB device into the target to be booted
''''''''''''''''''''''''''''''''''''''''''''''''''''''

1.3 Reboot the target from the USB device
'''''''''''''''''''''''''''''''''''''''''

1.4 Select "Graphics console install" or "Serial console install" and press ENTER
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

1.5 Select the hard disk and press ENTER
''''''''''''''''''''''''''''''''''''''''

Note: the installer only lets you choose which hard disk to install to; the
whole disk will be used and partitioned automatically.

- e.g. enter "sda" and press ENTER

1.6 Remove the USB device and press ENTER to reboot
'''''''''''''''''''''''''''''''''''''''''''''''''''

2. Configuration for better real time performance
`````````````````````````````````````````````````

Note: some of the tuning options are machine specific or depend on the use case
(e.g. hugepages, isolcpus, rcu_nocbs, kthread_cpus, irqaffinity, nohz_full and
so on); please do not just copy and paste.

- Edit the grub.cfg with the following example tuning options

::

   # Note: the grub.cfg file path is different for legacy and UEFI mode
   # For legacy mode: /boot/grub/grub.cfg
   # For UEFI mode: /boot/EFI/BOOT/grub.cfg

   grub_cfg="/boot/grub/grub.cfg"
   #grub_cfg="/boot/EFI/BOOT/grub.cfg"

   # In this example, cores 1-16 are isolated for real time processes
   root@intel-x86-64:~# rt_tuning="crashkernel=auto biosdevname=0 iommu=pt usbcore.autosuspend=-1 nmi_watchdog=0 softlockup_panic=0 intel_iommu=on cgroup_enable=memory skew_tick=1 hugepagesz=1G hugepages=4 default_hugepagesz=1G isolcpus=1-16 rcu_nocbs=1-16 kthread_cpus=0 irqaffinity=0 nohz=on nohz_full=1-16 intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable nosoftlockup idle=poll mce=ignore_ce"

   # optional: also add a console setting
   root@intel-x86-64:~# console="console=ttyS0,115200"

   root@intel-x86-64:~# sed -i "/linux / s/$/ $console $rt_tuning/" $grub_cfg

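The ``sed`` one-liner above appends the options to every ``linux`` boot line in grub.cfg. To sanity-check the expression before touching the real bootloader configuration, it can be exercised against a throwaway file first (the menuentry below is a hypothetical minimal grub.cfg):

```shell
# Sketch: dry-run the kernel-argument append on a scratch copy
# before editing the real grub.cfg.
grub_cfg=$(mktemp)
cat > "$grub_cfg" <<'EOF'
menuentry 'O-RAN INF' {
    linux /bzImage root=/dev/sda2 ro rootwait
}
EOF
rt_tuning="isolcpus=1-16 rcu_nocbs=1-16 nohz_full=1-16"
# Same expression as in the guide: append to every line containing "linux "
sed -i "/linux / s/$/ $rt_tuning/" "$grub_cfg"
grep 'linux ' "$grub_cfg"
```

On the installed system, the arguments that actually took effect can be checked after reboot with ``cat /proc/cmdline``.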
- Reboot the target

::

   root@intel-x86-64:~# reboot

3. Kubernetes cluster and plugins deployment instructions (All-in-one)
``````````````````````````````````````````````````````````````````````

This section shows how to deploy a Kubernetes cluster and its plugins in an
all-in-one example scenario after the above installation.

3.1 Change the hostname (Optional)
''''''''''''''''''''''''''''''''''

::

   # Assuming the hostname is oran-aio and the ip address is <aio_host_ip>.
   # Please DO NOT copy and paste; use your actual hostname and ip address.
   root@intel-x86-64:~# echo oran-aio > /etc/hostname
   root@intel-x86-64:~# export AIO_HOST_IP="<aio_host_ip>"
   root@intel-x86-64:~# echo "$AIO_HOST_IP oran-aio" >> /etc/hosts

3.2 Disable swap for Kubernetes
'''''''''''''''''''''''''''''''

::

   root@intel-x86-64:~# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
   # Adjust the swap unit name below to match your disk layout
   root@intel-x86-64:~# systemctl mask dev-sda4.swap

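The ``sed`` expression comments out every fstab line containing `` swap ``. A quick way to convince yourself of the behavior is to run it against a scratch fstab (the two entries below are hypothetical):

```shell
# Sketch: apply the swap-disabling sed to a scratch fstab and inspect it.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/sda2  /     ext4  defaults  0 1
/dev/sda4  none  swap  sw        0 0
EOF
# Same expression as in the guide: prefix matching lines with '#'
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$fstab"
cat "$fstab"
```

The root filesystem line is left untouched; only the swap entry gains a leading ``#``. Note that the ``systemctl mask`` unit name follows the partition that holds swap, so on a different disk layout it will not be ``dev-sda4.swap``.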
3.3 Set the proxy for docker (Optional)
'''''''''''''''''''''''''''''''''''''''

- If you are behind a firewall, you may need to set a proxy for docker to pull
  images

::

   root@intel-x86-64:~# HTTP_PROXY="http://<your_proxy_server_ip>:<port>"
   root@intel-x86-64:~# mkdir /etc/systemd/system/docker.service.d/
   root@intel-x86-64:~# cat << EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
   [Service]
   Environment="HTTP_PROXY=$HTTP_PROXY" "NO_PROXY=localhost,127.0.0.1,localaddress,.localdomain.com,$AIO_HOST_IP,10.244.0.0/16"
   EOF

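Because the heredoc delimiter is unquoted, ``$HTTP_PROXY`` and ``$AIO_HOST_IP`` are expanded when the drop-in file is written, not when docker starts. A sketch with hypothetical values, writing to a scratch file instead of the real drop-in path:

```shell
# Sketch: hypothetical proxy and host IP; the scratch file stands in for
# /etc/systemd/system/docker.service.d/http-proxy.conf.
HTTP_PROXY="http://proxy.example.com:8080"
AIO_HOST_IP="192.0.2.10"
dropin=$(mktemp)
cat << EOF > "$dropin"
[Service]
Environment="HTTP_PROXY=$HTTP_PROXY" "NO_PROXY=localhost,127.0.0.1,$AIO_HOST_IP,10.244.0.0/16"
EOF
cat "$dropin"   # both variables appear already expanded
```

The guide reboots in the next step, which picks the drop-in up; without a reboot, ``systemctl daemon-reload`` followed by ``systemctl restart docker`` would be needed instead.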
3.4 Reboot the target
'''''''''''''''''''''

::

   root@intel-x86-64:~# reboot

3.5 Initialize the Kubernetes cluster master
''''''''''''''''''''''''''''''''''''''''''''

::

   root@oran-aio:~# kubeadm init --kubernetes-version v1.16.2 --pod-network-cidr=10.244.0.0/16
   root@oran-aio:~# mkdir -p $HOME/.kube
   root@oran-aio:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   root@oran-aio:~# chown $(id -u):$(id -g) $HOME/.kube/config

3.6 Make the master also work as a worker node
''''''''''''''''''''''''''''''''''''''''''''''

::

   root@oran-aio:~# kubectl taint nodes oran-aio node-role.kubernetes.io/master-

3.7 Deploy flannel
''''''''''''''''''

::

   root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/flannel/kube-flannel.yml

Check that the aio node is ready after flannel is successfully deployed and
running

::

   root@oran-aio:~# kubectl get pods --all-namespaces |grep flannel
   kube-system   kube-flannel-ds-amd64-bwt52   1/1   Running   0   3m24s

   root@oran-aio:~# kubectl get nodes
   NAME       STATUS   ROLES    AGE     VERSION
   oran-aio   Ready    master   3m17s   v1.15.2-dirty

3.8 Deploy the Kubernetes dashboard
'''''''''''''''''''''''''''''''''''

Deploy the dashboard from the bundled yaml files

::

   root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/kubernetes-dashboard/kubernetes-dashboard-admin.rbac.yaml
   root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/kubernetes-dashboard/kubernetes-dashboard.yaml

Verify that the dashboard is up and running

::

   # Check the pod for the dashboard
   root@oran-aio:~# kubectl get pods --all-namespaces |grep dashboard
   kube-system   kubernetes-dashboard-5b67bf4d5f-ghg4f   1/1   Running   0   64s

Access the dashboard UI in a web browser using the https url; the port number
is 30443.

- For detailed usage, please refer to `Doc for dashboard`_

.. _`Doc for dashboard`: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

3.9 Deploy Multus-CNI
'''''''''''''''''''''

::

   root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/multus-cni/multus-daemonset.yml

Verify that multus-cni is up and running

::

   root@oran-aio:~# kubectl get pods --all-namespaces | grep -i multus
   kube-system   kube-multus-ds-amd64-hjpk4   1/1   Running   0   7m34s

- For further validation, please refer to the `Multus-CNI quick start`_

.. _`Multus-CNI quick start`: https://github.com/intel/multus-cni/blob/master/doc/quickstart.md

3.10 Deploy NFD (node-feature-discovery)
''''''''''''''''''''''''''''''''''''''''

::

   root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/node-feature-discovery/nfd-master.yaml
   root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/node-feature-discovery/nfd-worker-daemonset.yaml

Verify that nfd-master and nfd-worker are up and running

::

   root@oran-aio:~# kubectl get pods --all-namespaces |grep nfd
   default   nfd-master-7v75k   1/1   Running   0   91s
   default   nfd-worker-xn797   1/1   Running   0   24s

Verify that the node is labeled by nfd:

::

   root@oran-aio:~# kubectl describe nodes|grep feature.node.kubernetes
   feature.node.kubernetes.io/cpu-cpuid.AESNI=true
   feature.node.kubernetes.io/cpu-cpuid.AVX=true
   feature.node.kubernetes.io/cpu-cpuid.AVX2=true
   (...snip...)

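The labels NFD publishes can then be consumed by the scheduler. As a sketch (the pod name and image below are hypothetical; the label key is taken from the output above), a pod can be pinned to nodes that advertise AVX2 support:

```yaml
# Hypothetical pod pinned to nodes where NFD reported AVX2 support
apiVersion: v1
kind: Pod
metadata:
  name: avx2-workload            # hypothetical name
spec:
  containers:
  - name: app
    image: centos/tools          # hypothetical image
    command: ["sleep", "infinity"]
  nodeSelector:
    feature.node.kubernetes.io/cpu-cpuid.AVX2: "true"
```

If no node carries the label, the pod stays Pending, which makes mislabeled nodes easy to spot.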
3.11 Deploy SRIOV CNI
'''''''''''''''''''''

Provision VF drivers and devices.

Enumerate the PF devices

::

   root@oran-aio:~# lspci -D |grep 82599
   0000:04:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
   0000:04:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

Correlate the PF devices to their eth interfaces and bring them up

::

   root@oran-aio:~# ethtool -i eth4 |grep bus-info
   bus-info: 0000:04:00.0
   root@oran-aio:~# ethtool -i eth5 |grep bus-info
   bus-info: 0000:04:00.1
   root@oran-aio:~# ifconfig eth4 up
   root@oran-aio:~# ifconfig eth5 up

Load the VF driver modules

::

   root@oran-aio:~# modprobe ixgbevf
   root@oran-aio:~# modprobe uio
   root@oran-aio:~# modprobe igb-uio
   root@oran-aio:~# modprobe vfio
   root@oran-aio:~# modprobe vfio-pci
   root@oran-aio:~# lsmod |grep ixgbevf
   ixgbevf               61440  0
   root@oran-aio:~# lsmod |grep vfio
   vfio_pci              40960  0
   vfio_virqfd           16384  1 vfio_pci
   vfio_iommu_type1      24576  0
   vfio                  24576  2 vfio_iommu_type1,vfio_pci
   irqbypass             16384  2 vfio_pci,kvm


Bind the VF drivers to the VF devices

::

   root@oran-aio:~# cat /sys/bus/pci/devices/0000\:04\:00.0/sriov_totalvfs
   root@oran-aio:~# cat /sys/bus/pci/devices/0000\:04\:00.1/sriov_totalvfs
   root@oran-aio:~# cat /sys/bus/pci/devices/0000\:04\:00.0/sriov_numvfs
   root@oran-aio:~# cat /sys/bus/pci/devices/0000\:04\:00.1/sriov_numvfs
   root@oran-aio:~# echo 8 > /sys/bus/pci/devices/0000\:04\:00.0/sriov_numvfs
   root@oran-aio:~# echo 8 > /sys/bus/pci/devices/0000\:04\:00.1/sriov_numvfs

   root@oran-aio:~# lspci -D |grep 82599
   0000:04:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
   0000:04:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
   0000:04:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:11.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:11.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:11.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:11.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:11.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:11.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:11.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
   0000:04:11.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

   root@oran-aio:~# dpdk-devbind -b vfio-pci 0000:04:11.0 0000:04:11.1 0000:04:11.2 0000:04:11.3 0000:04:11.4 0000:04:11.5 0000:04:11.6 0000:04:11.7

   root@oran-aio:~# dpdk-devbind --status-dev net

   Network devices using DPDK-compatible driver
   ============================================
   0000:04:11.0 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
   0000:04:11.1 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
   0000:04:11.2 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
   0000:04:11.3 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
   0000:04:11.4 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
   0000:04:11.5 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
   0000:04:11.6 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
   0000:04:11.7 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio

   Network devices using kernel driver
   ===================================
   0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth4 drv=ixgbe unused=igb_uio,vfio-pci
   0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth5 drv=ixgbe unused=igb_uio,vfio-pci
   0000:04:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=eth6 drv=ixgbevf unused=igb_uio,vfio-pci
   0000:04:10.1 '82599 Ethernet Controller Virtual Function 10ed' if=eth14 drv=ixgbevf unused=igb_uio,vfio-pci
   0000:04:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=eth7 drv=ixgbevf unused=igb_uio,vfio-pci
   0000:04:10.3 '82599 Ethernet Controller Virtual Function 10ed' if=eth15 drv=ixgbevf unused=igb_uio,vfio-pci
   0000:04:10.4 '82599 Ethernet Controller Virtual Function 10ed' if=eth8 drv=ixgbevf unused=igb_uio,vfio-pci
   0000:04:10.5 '82599 Ethernet Controller Virtual Function 10ed' if=eth16 drv=ixgbevf unused=igb_uio,vfio-pci
   0000:04:10.6 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=igb_uio,vfio-pci
   0000:04:10.7 '82599 Ethernet Controller Virtual Function 10ed' if=eth17 drv=ixgbevf unused=igb_uio,vfio-pci


Build the SRIOV CNI and the SRIOV network device plugin

::

   root@oran-aio:~# export HTTP_PROXY="http://<your_proxy_server_ip>:<port>"
   root@oran-aio:~# export HTTPS_PROXY="$HTTP_PROXY"

   root@oran-aio:~# wget https://dl.google.com/go/go1.14.1.linux-amd64.tar.gz
   root@oran-aio:~# tar -zxvf go1.14.1.linux-amd64.tar.gz
   root@oran-aio:~# export PATH=$PATH:/root/go/bin/
   root@oran-aio:~# git clone https://github.com/intel/sriov-cni
   root@oran-aio:~# cd sriov-cni
   root@oran-aio:~/sriov-cni# make
   root@oran-aio:~/sriov-cni# cp build/sriov /opt/cni/bin

   root@oran-aio:~/sriov-cni# cd ~/
   root@oran-aio:~# git clone https://github.com/intel/sriov-network-device-plugin
   root@oran-aio:~# cd sriov-network-device-plugin
   root@oran-aio:~/sriov-network-device-plugin# git fetch origin pull/196/head:fpgadp
   root@oran-aio:~/sriov-network-device-plugin# git checkout fpgadp
   root@oran-aio:~/sriov-network-device-plugin# make image
   root@oran-aio:~/sriov-network-device-plugin# docker images |grep sriov-device-plugin
   nfvpe/sriov-device-plugin   latest   f4e6bbefad67   5 minutes ago   25.5MB


Deploy SRIOV CNI

::

   root@oran-aio:~/sriov-network-device-plugin# cat <<EOF> deployments/sriovdp_configMap.yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: sriovdp-config
     namespace: kube-system
   data:
     config.json: |
       {
           "resourceList": [{
                   "resourceName": "intel_sriov_netdevice",
                   "selectors": {
                       "vendors": ["8086"],
                       "devices": ["154c", "10ed"],
                       "drivers": ["i40evf", "ixgbevf"]
                   }
               },
               {
                   "resourceName": "intel_sriov_dpdk",
                   "selectors": {
                       "vendors": ["8086"],
                       "devices": ["154c", "10ed"],
                       "drivers": ["vfio-pci"]
                   }
               },
               {
                   "resourceName": "mlnx_sriov_rdma",
                   "isRdma": true,
                   "selectors": {
                       "vendors": ["15b3"],
                       "devices": ["1018"],
                       "drivers": ["mlx5_ib"]
                   }
               }
           ]
       }
   EOF

   root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/sriovdp_configMap.yaml
   root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/k8s-v1.16/sriovdp-daemonset.yaml

   root@oran-aio:~/sriov-network-device-plugin# kubectl get pods --all-namespaces |grep kube-sriov-device-plugin
   kube-system   kube-sriov-device-plugin-amd64-6lm8n   1/1   Running   0   12m

   root@oran-aio:~/sriov-network-device-plugin# kubectl -n kube-system logs kube-sriov-device-plugin-amd64-6lm8n
   I0327 02:14:46.488409   14488 manager.go:115] Creating new ResourcePool: intel_sriov_netdevice
   I0327 02:14:46.488427   14488 factory.go:144] device added: [pciAddr: 0000:04:10.0, vendor: 8086, device: 10ed, driver: ixgbevf]
   I0327 02:14:46.488439   14488 factory.go:144] device added: [pciAddr: 0000:04:10.1, vendor: 8086, device: 10ed, driver: ixgbevf]
   I0327 02:14:46.488446   14488 factory.go:144] device added: [pciAddr: 0000:04:10.2, vendor: 8086, device: 10ed, driver: ixgbevf]
   I0327 02:14:46.488459   14488 factory.go:144] device added: [pciAddr: 0000:04:10.3, vendor: 8086, device: 10ed, driver: ixgbevf]
   I0327 02:14:46.488467   14488 factory.go:144] device added: [pciAddr: 0000:04:10.4, vendor: 8086, device: 10ed, driver: ixgbevf]
   I0327 02:14:46.488473   14488 factory.go:144] device added: [pciAddr: 0000:04:10.5, vendor: 8086, device: 10ed, driver: ixgbevf]
   I0327 02:14:46.488479   14488 factory.go:144] device added: [pciAddr: 0000:04:10.6, vendor: 8086, device: 10ed, driver: ixgbevf]
   I0327 02:14:46.488485   14488 factory.go:144] device added: [pciAddr: 0000:04:10.7, vendor: 8086, device: 10ed, driver: ixgbevf]
   I0327 02:14:46.488502   14488 manager.go:128] New resource server is created for intel_sriov_netdevice ResourcePool
   I0327 02:14:46.488511   14488 manager.go:114]
   I0327 02:14:46.488516   14488 manager.go:115] Creating new ResourcePool: intel_sriov_dpdk
   I0327 02:14:46.488529   14488 factory.go:144] device added: [pciAddr: 0000:04:11.0, vendor: 8086, device: 10ed, driver: vfio-pci]
   I0327 02:14:46.488538   14488 factory.go:144] device added: [pciAddr: 0000:04:11.1, vendor: 8086, device: 10ed, driver: vfio-pci]
   I0327 02:14:46.488545   14488 factory.go:144] device added: [pciAddr: 0000:04:11.2, vendor: 8086, device: 10ed, driver: vfio-pci]
   I0327 02:14:46.488551   14488 factory.go:144] device added: [pciAddr: 0000:04:11.3, vendor: 8086, device: 10ed, driver: vfio-pci]
   I0327 02:14:46.488562   14488 factory.go:144] device added: [pciAddr: 0000:04:11.4, vendor: 8086, device: 10ed, driver: vfio-pci]
   I0327 02:14:46.488569   14488 factory.go:144] device added: [pciAddr: 0000:04:11.5, vendor: 8086, device: 10ed, driver: vfio-pci]
   I0327 02:14:46.488575   14488 factory.go:144] device added: [pciAddr: 0000:04:11.6, vendor: 8086, device: 10ed, driver: vfio-pci]
   I0327 02:14:46.488581   14488 factory.go:144] device added: [pciAddr: 0000:04:11.7, vendor: 8086, device: 10ed, driver: vfio-pci]
   I0327 02:14:46.488591   14488 manager.go:128] New resource server is created for intel_sriov_dpdk ResourcePool


Test intel_sriov_netdevice

::

   root@oran-aio:~/sriov-network-device-plugin# cat <<EOF> deployments/sriov-crd.yaml
   apiVersion: "k8s.cni.cncf.io/v1"
   kind: NetworkAttachmentDefinition
   metadata:
     name: sriov-net1
     annotations:
       k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
   spec:
     config: '{
     "type": "sriov",
     "cniVersion": "0.3.1",
     "name": "sriov-network",
     "vlan": 100,
     "ipam": {
       "type": "host-local",
       "subnet": "10.56.217.0/24",
       "routes": [{
         "dst": "0.0.0.0/0"
       }],
       "gateway": "10.56.217.1"
     }
   }'
   EOF

   root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/sriov-crd.yaml
   root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/pod-tc1.yaml
   root@oran-aio:~/sriov-network-device-plugin# kubectl get pods |grep testpod1
   root@oran-aio:~/sriov-network-device-plugin# ip link |grep 'vlan 100'
   vf 3 MAC a6:01:0a:34:39:e1, vlan 100, spoof checking on, link-state auto, trust off, query_rss off

   root@oran-aio:~/sriov-network-device-plugin# kubectl exec -it testpod1 -- ip addr show |grep a6:01:0a:34:39:e1 -C 2
   valid_lft forever preferred_lft forever
   21: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
       link/ether a6:01:0a:34:39:e1 brd ff:ff:ff:ff:ff:ff
       inet 10.56.217.3/24 brd 10.56.217.255 scope global net1
          valid_lft forever preferred_lft forever

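The step above references ``deployments/pod-tc1.yaml`` without reproducing it. A minimal equivalent is sketched below (names are hypothetical and the actual file in the sriov-network-device-plugin repository may differ): the pod attaches the ``sriov-net1`` network via annotation and requests one netdevice VF.

```yaml
# Hypothetical sketch of a pod like pod-tc1.yaml: one SRIOV netdevice VF
apiVersion: v1
kind: Pod
metadata:
  name: testpod1
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net1
spec:
  containers:
  - name: appcntr1                 # hypothetical container name
    image: centos/tools
    command: ["/bin/bash", "-ec", "sleep infinity"]
    resources:
      requests:
        intel.com/intel_sriov_netdevice: '1'
      limits:
        intel.com/intel_sriov_netdevice: '1'
```

The resource name in requests/limits must match a ``resourceName`` from the sriovdp-config ConfigMap, prefixed with ``intel.com/``.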

Test intel_sriov_dpdk

::

   root@oran-aio:~/sriov-network-device-plugin# cat <<EOF> deployments/sriovdpdk-crd.yaml
   apiVersion: "k8s.cni.cncf.io/v1"
   kind: NetworkAttachmentDefinition
   metadata:
     name: sriov1-vfio
     annotations:
       k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_dpdk
   spec:
     config: '{
     "type": "sriov",
     "cniVersion": "0.3.1",
     "vlan": 101,
     "name": "sriov1-vfio"
   }'
   EOF

   root@oran-aio:~/sriov-network-device-plugin# cat <<EOF> deployments/dpdk-1g.yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: dpdk-1g
     annotations:
       k8s.v1.cni.cncf.io/networks: '[
         {"name": "sriov1-vfio"},
         {"name": "sriov1-vfio"}
       ]'
   spec:
     restartPolicy: Never
     containers:
     - name: dpdk-1g
       image: centos/tools
       imagePullPolicy: IfNotPresent
       volumeMounts:
       - name: hugepage
         mountPath: /mnt/huge-2048
       - name: lib-modules
         mountPath: /lib/modules
       - name: src
         mountPath: /usr/src
       command: ["/bin/bash", "-ec", "sleep infinity"]
       securityContext:
         privileged: true
         capabilities:
           add:
           - ALL
       resources:
         requests:
           memory: 4Gi
           hugepages-1Gi: 4Gi
           intel.com/intel_sriov_dpdk: '2'
         limits:
           memory: 4Gi
           hugepages-1Gi: 4Gi
           intel.com/intel_sriov_dpdk: '2'
     volumes:
     - name: hugepage
       emptyDir:
         medium: HugePages
     - name: lib-modules
       hostPath:
         path: /lib/modules
     - name: src
       hostPath:
         path: /usr/src
     imagePullSecrets:
     - name: admin-registry-secret
   EOF

   root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/sriovdpdk-crd.yaml
   root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/dpdk-1g.yaml

   root@oran-aio:~/sriov-network-device-plugin# kubectl get pods | grep dpdk
   dpdk-1g   1/1   Running   0   13s

   root@oran-aio:~/sriov-network-device-plugin# ip link |grep 101
   vf 7 MAC 00:00:00:00:00:00, vlan 101, spoof checking on, link-state auto, trust off, query_rss off
   vf 6 MAC 00:00:00:00:00:00, vlan 101, spoof checking on, link-state auto, trust off, query_rss off


Now test with dpdk

::

   # Build the following packages on the build host and copy them to the
   # target server: bitbake bison; bitbake kernel-devsrc
   root@oran-aio:~/sriov-network-device-plugin# rpm -ivh ~/bison-3.0.4-r0.corei7_64.rpm
   root@oran-aio:~/sriov-network-device-plugin# rpm -ivh ~/kernel-devsrc-1.0-r0.intel_x86_64.rpm

   root@oran-aio:~/sriov-network-device-plugin# kubectl exec -it $(kubectl get pods -o wide | grep dpdk | awk '{ print $1 }') -- /bin/bash
   [root@dpdk-1g /]# export |grep INTEL
   declare -x PCIDEVICE_INTEL_COM_INTEL_SRIOV_DPDK="0000:04:11.6,0000:04:11.5"

   [root@dpdk-1g /]# yum -y install wget ncurses-devel unzip libpcap-devel libedit-devel pciutils lua-devel

   [root@dpdk-1g /]# cd /opt
   [root@dpdk-1g /]# wget https://fast.dpdk.org/rel/dpdk-18.08.tar.xz
   [root@dpdk-1g /]# tar xf dpdk-18.08.tar.xz
   [root@dpdk-1g /]# cd dpdk-18.08/
   [root@dpdk-1g /]# sed -i 's/CONFIG_RTE_EAL_IGB_UIO=y/CONFIG_RTE_EAL_IGB_UIO=n/g' config/common_linuxapp
   [root@dpdk-1g /]# sed -i 's/CONFIG_RTE_LIBRTE_KNI=y/CONFIG_RTE_LIBRTE_KNI=n/g' config/common_linuxapp
   [root@dpdk-1g /]# sed -i 's/CONFIG_RTE_KNI_KMOD=y/CONFIG_RTE_KNI_KMOD=n/g' config/common_linuxapp
   [root@dpdk-1g /]# export RTE_SDK=/opt/dpdk-18.08
   [root@dpdk-1g /]# export RTE_TARGET=x86_64-native-linuxapp-gcc
   [root@dpdk-1g /]# export RTE_BIND=$RTE_SDK/usertools/dpdk-devbind.py
   [root@dpdk-1g /]# make install T=$RTE_TARGET
   [root@dpdk-1g /]# cd examples/helloworld
   [root@dpdk-1g /]# make
   [root@dpdk-1g /]# NR_hugepages=2
   [root@dpdk-1g /]# ./build/helloworld -l 1-4 -n 4 -m $NR_hugepages
   ...
   hello from core 2
   hello from core 3
   hello from core 4
   hello from core 1


3.12 Deploy CMK (CPU-Manager-for-Kubernetes)
''''''''''''''''''''''''''''''''''''''''''''

Build the CMK docker image

::

   root@oran-aio:~# cd /opt/kubernetes_plugins/cpu-manager-for-kubernetes/
   root@oran-aio:/opt/kubernetes_plugins/cpu-manager-for-kubernetes# make

Verify that the cmk docker image is built successfully

::

   root@oran-aio:/opt/kubernetes_plugins/cpu-manager-for-kubernetes# docker images|grep cmk
   cmk   v1.3.1   3fec5f753b05   44 minutes ago   765MB

Edit the template yaml file for your deployment:
   - The template file is: /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-cluster-init-pod-template.yaml
   - The options you may need to change:

::

   # You can change the value for the following env:
   env:
   - name: HOST_LIST
     # Change this to modify the host list to be initialized
     value: "oran-aio"
   - name: NUM_EXCLUSIVE_CORES
     # Change this to modify the value passed to `--num-exclusive-cores` flag
     value: "4"
   - name: NUM_SHARED_CORES
     # Change this to modify the value passed to `--num-shared-cores` flag
     value: "1"
   - name: CMK_IMG
     # Change this ONLY if you built the docker image with a different tag name
     value: "cmk:v1.3.1"

Or you can also refer to the `CMK operator manual`_

.. _`CMK operator manual`: https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md


Deploy CMK from the yaml files

::

   root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-rbac-rules.yaml
   root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-serviceaccount.yaml
   root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-cluster-init-pod-template.yaml

Verify that the cmk cluster init has completed and that the pods for the
nodereport and webhook deployments are up and running

::

   root@oran-aio:/opt/kubernetes_plugins/cpu-manager-for-kubernetes# kubectl get pods --all-namespaces |grep cmk
   default   cmk-cluster-init-pod                         0/1   Completed   0   11m
   default   cmk-init-install-discover-pod-oran-aio       0/2   Completed   0   10m
   default   cmk-reconcile-nodereport-ds-oran-aio-qbdqb   2/2   Running     0   10m
   default   cmk-webhook-deployment-6f9dd7dfb6-2lj2p      1/1   Running     0   10m

- For detailed usage, please refer to the `CMK user manual`_

.. _`CMK user manual`: https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md

References
----------

- `Flannel`_
- `Doc for dashboard`_
- `Multus-CNI quick start`_
- `CMK operator manual`_
- `CMK user manual`_

.. _`Flannel`: https://github.com/coreos/flannel/blob/master/README.md