Manual Installation
===================

This document describes how to clone the Contiv repository and then use
`kubeadm <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/>`__
to manually install Kubernetes with Contiv-VPP networking on one or more
bare-metal or VM hosts.

Clone the Contiv Repository
---------------------------

To clone the Contiv repository, enter the following command:

::

    git clone https://github.com/contiv/vpp <repository-name>

**Note:** Replace ``<repository-name>`` with the name you want assigned
to your cloned Contiv repository.

The cloned repository contains several important folders whose content
is referenced throughout this Contiv documentation:

::

    vpp-contiv2$ ls
    build       build-root  doxygen  gmod       LICENSE      Makefile   RELEASE.md   src
    build-data  docs        extras   INFO.yaml  MAINTAINERS  README.md  sphinx_venv  test

Preparing Your Hosts
--------------------

Host-specific Configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-  **VMware VMs**: the vmxnet3 driver is required on each interface
   that will be used by VPP. Please see
   `here <https://github.com/contiv/vpp/tree/master/docs/VMWARE_FUSION_HOST.md>`__
   for instructions on how to install the vmxnet3 driver on VMware
   Fusion.

Setting up Network Adapter(s)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Setting up DPDK
^^^^^^^^^^^^^^^

DPDK setup must be completed **on each node** as follows:

-  Load the PCI UIO driver:

   ::

       $ sudo modprobe uio_pci_generic

-  Verify that the PCI UIO driver has loaded successfully:

   ::

       $ lsmod | grep uio
       uio_pci_generic    16384  0
       uio                20480  1 uio_pci_generic

Please note that this driver needs to be loaded upon each server
bootup, so you may want to add ``uio_pci_generic`` into the
``/etc/modules`` file, or a file in the ``/etc/modules-load.d/``
directory. For example, the ``/etc/modules`` file could look as
follows:

::

    # /etc/modules: kernel modules to load at boot time.
    #
    # This file contains the names of kernel modules that should be loaded
    # at boot time, one per line. Lines beginning with "#" are ignored.
    uio_pci_generic
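
Alternatively, on systemd-based distributions the same can be achieved
with a drop-in file under ``/etc/modules-load.d/``; a minimal sketch
(the file name is arbitrary):

::

    echo uio_pci_generic | sudo tee /etc/modules-load.d/uio_pci_generic.conf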

.. rubric:: Determining Network Adapter PCI Addresses
   :name: determining-network-adapter-pci-addresses

You need the PCI address of the network interface that VPP will use for
the multi-node pod interconnect. On Debian-based distributions, you can
use ``lshw``:

::

    $ sudo lshw -class network -businfo
    Bus info          Device      Class          Description
    ====================================================
    pci@0000:00:03.0  ens3        network        Virtio network device
    pci@0000:00:04.0  ens4        network        Virtio network device

**Note:** On CentOS/RedHat/Fedora distributions, ``lshw`` may not be
available by default; install it by issuing the following command:
``yum -y install lshw``
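
If you only need to map a known interface name to its PCI address,
``ethtool`` or ``lspci`` (from pciutils) can also be used; a quick
sketch, assuming the interface is named ``ens3`` (the output shown is
illustrative):

::

    # bus-info is the PCI address that VPP/DPDK needs
    $ sudo ethtool -i ens3 | grep bus-info
    bus-info: 0000:00:03.0

    # or list all Ethernet-class devices, including their PCI domain
    $ lspci -D | grep -i ethernet
    0000:00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device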

Configuring vswitch to Use Network Adapters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Finally, you need to set up the vswitch to use the network adapters (a
minimal configuration sketch follows the links below):

-  `Setup on a node with a single
   NIC <https://github.com/contiv/vpp/tree/master/docs/SINGLE_NIC_SETUP.md>`__
-  `Setup on a node with multiple
   NICs <https://github.com/contiv/vpp/tree/master/docs/MULTI_NIC_SETUP.md>`__
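
In both cases the key step is pointing the DPDK section of the VPP
startup configuration at the PCI address determined above. A minimal
sketch, assuming the vswitch reads its startup configuration from
``/etc/vpp/contiv-vswitch.conf`` as described in the linked guides:

::

    unix {
        nodaemon
        cli-listen /run/vpp/cli.sock
    }
    dpdk {
        dev 0000:00:03.0
    }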

Using a Node Setup Script
~~~~~~~~~~~~~~~~~~~~~~~~~

You can perform the above steps using the `node setup
script <https://github.com/contiv/vpp/tree/master/k8s/README.md#setup-node-sh>`__.

Installing Kubernetes with Contiv-VPP CNI plugin
------------------------------------------------

After the nodes you will be using in your K8s cluster are prepared, you
can install the cluster using
`kubeadm <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/>`__.

(1/4) Installing Kubeadm on Your Hosts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For first-time installation, see `Installing
kubeadm <https://kubernetes.io/docs/setup/independent/install-kubeadm/>`__.
To update an existing installation, you should do an
``apt-get update && apt-get upgrade`` or ``yum update`` to get the
latest version of kubeadm.
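
For a first-time install, the package names are the same on both
distribution families; an abbreviated sketch (the repository setup
itself is covered by the linked guide and changes over time):

::

    # Debian/Ubuntu, after adding the Kubernetes apt repository:
    apt-get update && apt-get install -y kubelet kubeadm kubectl

    # CentOS/RHEL/Fedora, after adding the Kubernetes yum repository:
    yum install -y kubelet kubeadm kubectl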

On each host with multiple NICs where the NIC that will be used for
Kubernetes management traffic is not the one pointed to by the default
route out of the host, a `custom management
network <https://github.com/contiv/vpp/tree/master/docs/CUSTOM_MGMT_NETWORK.md>`__
for Kubernetes must be configured.
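
In practice this usually means telling the kubelet which address to use
for node traffic and advertising the API server on the same interface
(see step 2/4 below); the exact procedure is in the linked document. An
illustrative kubelet drop-in, with the address being a placeholder for
your management IP:

::

    Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.56.106"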

Using Kubernetes 1.10 and Above
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In K8s 1.10, support for huge pages in a pod has been introduced. For
now, this feature must either be disabled, or a memory limit must be
defined for the vswitch container.

To disable huge pages, perform the following steps as root:

-  Using your favorite editor, disable huge pages in the kubelet
   configuration file
   (``/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`` or
   ``/etc/default/kubelet`` for version 1.11+):

   ::

       Environment="KUBELET_EXTRA_ARGS=--feature-gates HugePages=false"

-  Restart the kubelet daemon:

   ::

       systemctl daemon-reload
       systemctl restart kubelet

To define a memory limit, append the following snippet to the vswitch
container in the deployment YAML file:

::

    resources:
      limits:
        hugepages-2Mi: 1024Mi
        memory: 1024Mi

or set ``contiv.vswitch.defineMemoryLimits`` to ``true`` in the `helm
values <https://github.com/contiv/vpp/blob/master/k8s/contiv-vpp/README.md>`__.
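
A sketch of the helm-based variant, assuming the chart under
``k8s/contiv-vpp`` in the cloned repository is rendered locally as the
linked README describes:

::

    helm template --set contiv.vswitch.defineMemoryLimits=true k8s/contiv-vpp > contiv-vpp.yaml
    kubectl apply -f contiv-vpp.yaml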

(2/4) Initializing Your Master
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before initializing the master, you may want to
`remove <#tearing-down-kubernetes>`__ any previously installed K8s
components. Then, proceed with master initialization as described in
the `kubeadm
manual <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#initializing-your-master>`__.
Execute the following command as root:

::

    kubeadm init --token-ttl 0 --pod-network-cidr=10.1.0.0/16

**Note:** ``kubeadm init`` will autodetect the network interface to
advertise the master on as the interface with the default gateway. If
you want to use a different interface (e.g., a custom management
network setup), specify the ``--apiserver-advertise-address=<ip-address>``
argument to ``kubeadm init``. For example:

::

    kubeadm init --token-ttl 0 --pod-network-cidr=10.1.0.0/16 --apiserver-advertise-address=192.168.56.106

**Note:** The CIDR specified with the flag ``--pod-network-cidr`` is
used by kube-proxy, and it **must include** the ``PodSubnetCIDR`` from
the ``IPAMConfig`` section in the Contiv-VPP config map in Contiv-VPP's
deployment file
`contiv-vpp.yaml <https://github.com/contiv/vpp/blob/master/k8s/contiv-vpp/values.yaml>`__.
Pods in the host network namespace are a special case; they share their
respective interfaces and IP addresses with the host. For proxying to
work properly, it is therefore required for services with backends
running on the host to also **include the node management IP** within
the ``--pod-network-cidr`` subnet. For example, with the default
``PodSubnetCIDR=10.1.0.0/16`` and ``PodIfIPCIDR=10.2.1.0/24``, the
subnet ``10.3.0.0/16`` could be allocated for the management network
and ``--pod-network-cidr`` could be defined as ``10.0.0.0/8``, so as to
include the IP addresses of all pods in all network namespaces:

::

    kubeadm init --token-ttl 0 --pod-network-cidr=10.0.0.0/8 --apiserver-advertise-address=10.3.1.1
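
For reference, the defaults quoted above live in the ``IPAMConfig``
section of the Contiv-VPP config map; an illustrative excerpt (the
exact layout may differ between releases, see the linked deployment
file):

::

    IPAMConfig:
      PodSubnetCIDR: 10.1.0.0/16
      PodIfIPCIDR: 10.2.1.0/24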

If Kubernetes was initialized successfully, it prints out this message:

::

    Your Kubernetes master has initialized successfully!

After successful initialization, don't forget to set up your ``.kube``
directory as a regular user (as instructed by ``kubeadm``):

.. code:: bash

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

(3/4) Installing the Contiv-VPP Pod Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you have already used the Contiv-VPP plugin before, you may need to
pull the most recent Docker images on each node:

::

    bash <(curl -s https://raw.githubusercontent.com/contiv/vpp/master/k8s/pull-images.sh)

Install the Contiv-VPP network for your cluster as follows:

-  If you do not use the STN feature, install Contiv-VPP as follows:

   ::

       kubectl apply -f https://raw.githubusercontent.com/contiv/vpp/master/k8s/contiv-vpp.yaml

-  If you use the STN feature, download the ``contiv-vpp.yaml`` file:

   ::

       wget https://raw.githubusercontent.com/contiv/vpp/master/k8s/contiv-vpp.yaml

   Then edit the STN configuration as described
   `here <https://github.com/contiv/vpp/tree/master/docs/SINGLE_NIC_SETUP.md#configuring-stn-in-contiv-vpp-k8s-deployment-files>`__.
   Finally, create the Contiv-VPP deployment from the edited file:

   ::

       kubectl apply -f ./contiv-vpp.yaml

Beware: contiv-etcd data is persisted in ``/var/etcd`` by default. It
has to be cleaned up manually after ``kubeadm reset``; otherwise,
outdated data will be loaded by a subsequent deployment.
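
A minimal cleanup sketch, run as root on the node that hosted
contiv-etcd (the same path is also wiped in the Troubleshooting section
below):

::

    rm -rf /var/etcd/contiv-data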

Alternatively, you can have each deployment use a random subfolder:

::

    curl --silent https://raw.githubusercontent.com/contiv/vpp/master/k8s/contiv-vpp.yaml | sed "s/\/var\/etcd\/contiv-data/\/var\/etcd\/contiv-data\/$RANDOM/g" | kubectl apply -f -

Deployment Verification
^^^^^^^^^^^^^^^^^^^^^^^

After some time, all contiv containers should enter the running state:

::

    root@cvpp:/home/jan# kubectl get pods -n kube-system -o wide | grep contiv
    NAME                     READY     STATUS    RESTARTS   AGE       IP               NODE
    ...
    contiv-etcd-gwc84        1/1       Running   0          14h       192.168.56.106   cvpp
    contiv-ksr-5c2vk         1/1       Running   2          14h       192.168.56.106   cvpp
    contiv-vswitch-l59nv     2/2       Running   0          14h       192.168.56.106   cvpp

In particular, make sure that the Contiv-VPP pod IP addresses are the
same as the IP address specified in the
``--apiserver-advertise-address=<ip-address>`` argument to
``kubeadm init``.

Verify that VPP has successfully grabbed the network interface
specified in the VPP startup config (``GigabitEthernet0/4/0`` in our
case):

::

    $ sudo vppctl
    vpp# sh inter
                  Name        Idx    State   Counter       Count
    GigabitEthernet0/4/0       1      up     rx packets       1294
                                             rx bytes       153850
                                             tx packets        512
                                             tx bytes        21896
                                             drops             962
                                             ip4              1032
    host-40df9b44c3d42f4       3      up     rx packets     126601
                                             rx bytes     44628849
                                             tx packets     132155
                                             tx bytes     27205450
                                             drops              24
                                             ip4            126585
                                             ip6                16
    host-vppv2                 2      up     rx packets     132162
                                             rx bytes     27205824
                                             tx packets     126658
                                             tx bytes     44634963
                                             drops              15
                                             ip4            132147
                                             ip6                14
    local0                     0      down

You should also see the interface to kube-dns
(``host-40df9b44c3d42f4``) and to the node's IP stack (``host-vppv2``).

Master Isolation (Optional)
^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, your cluster will not schedule pods on the master for
security reasons. If you want to be able to schedule pods on the master
(e.g., for a single-machine Kubernetes cluster for development), then
run:

::

    kubectl taint nodes --all node-role.kubernetes.io/master-

More details about installing the pod network can be found in the
`kubeadm
manual <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network>`__.

(4/4) Joining Your Nodes
~~~~~~~~~~~~~~~~~~~~~~~~

To add a new node to your cluster, run as root the command that was
output by ``kubeadm init``. For example:

::

    kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

More details can be found in the `kubeadm
manual <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#joining-your-nodes>`__.
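
If the original join command is no longer at hand, a fresh one can be
generated on the master; this prints a complete ``kubeadm join`` line
with a new token and the CA certificate hash:

::

    kubeadm token create --print-join-command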

.. _deployment-verification-1:

Deployment Verification
^^^^^^^^^^^^^^^^^^^^^^^

After some time, all contiv containers should enter the running state:

::

    root@cvpp:/home/jan# kubectl get pods -n kube-system -o wide
    NAME                           READY     STATUS    RESTARTS   AGE       IP               NODE
    contiv-etcd-gwc84              1/1       Running   0          14h       192.168.56.106   cvpp
    contiv-ksr-5c2vk               1/1       Running   2          14h       192.168.56.106   cvpp
    contiv-vswitch-h6759           2/2       Running   0          14h       192.168.56.105   cvpp-slave2
    contiv-vswitch-l59nv           2/2       Running   0          14h       192.168.56.106   cvpp
    etcd-cvpp                      1/1       Running   0          14h       192.168.56.106   cvpp
    kube-apiserver-cvpp            1/1       Running   0          14h       192.168.56.106   cvpp
    kube-controller-manager-cvpp   1/1       Running   0          14h       192.168.56.106   cvpp
    kube-dns-545bc4bfd4-fr6j9      3/3       Running   0          14h       10.1.134.2       cvpp
    kube-proxy-q8sv2               1/1       Running   0          14h       192.168.56.106   cvpp
    kube-proxy-s8kv9               1/1       Running   0          14h       192.168.56.105   cvpp-slave2
    kube-scheduler-cvpp            1/1       Running   0          14h       192.168.56.106   cvpp

In particular, verify that a vswitch pod and a kube-proxy pod are
running on each joined node, as shown above.

On each joined node, verify that VPP has successfully grabbed the
network interface specified in the VPP startup config
(``GigabitEthernet0/4/0`` in our case):

::

    $ sudo vppctl
    vpp# sh inter
                  Name        Idx    State   Counter       Count
    GigabitEthernet0/4/0       1      up
    ...

From the VPP CLI on a joined node you can also ping kube-dns to verify
node-to-node connectivity. For example:

::

    vpp# ping 10.1.134.2
    64 bytes from 10.1.134.2: icmp_seq=1 ttl=64 time=.1557 ms
    64 bytes from 10.1.134.2: icmp_seq=2 ttl=64 time=.1339 ms
    64 bytes from 10.1.134.2: icmp_seq=3 ttl=64 time=.1295 ms
    64 bytes from 10.1.134.2: icmp_seq=4 ttl=64 time=.1714 ms
    64 bytes from 10.1.134.2: icmp_seq=5 ttl=64 time=.1317 ms

    Statistics: 5 sent, 5 received, 0% packet loss

Deploying Example Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Simple Deployment
^^^^^^^^^^^^^^^^^

You can go ahead and create a simple deployment:

::

    $ kubectl run nginx --image=nginx --replicas=2

Use ``kubectl describe pod`` to get the IP address of a pod, e.g.:

::

    $ kubectl describe pod nginx | grep IP

You should see two IP addresses, for example:

::

    IP:             10.1.1.3
    IP:             10.1.1.4
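
Alternatively, ``kubectl get pods -o wide`` (used again later in this
guide) lists the pod IP addresses and the nodes they were scheduled on
in one step:

::

    $ kubectl get pods -o wide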

You can check the pods' connectivity in one of the following ways:

-  Connect to the VPP debug CLI and ping any pod:

   ::

       sudo vppctl
       vpp# ping 10.1.1.3

-  Start busybox and ping any pod:

   ::

       kubectl run busybox --rm -ti --image=busybox /bin/sh
       If you don't see a command prompt, try pressing enter.
       / #
       / # ping 10.1.1.3

-  You should be able to ping any pod from the host:

   ::

       ping 10.1.1.3

Deploying Pods on Different Nodes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To enable pod deployment on the master, untaint the master first:

::

    kubectl taint nodes --all node-role.kubernetes.io/master-

In order to verify inter-node pod connectivity, we need to tell
Kubernetes to deploy one pod on the master node and one pod on the
worker. For this, we can use node selectors.

In your deployment YAMLs, add the ``nodeSelector`` sections that refer
to preferred node hostnames, e.g.:

::

    nodeSelector:
      kubernetes.io/hostname: vm5

Example of complete pod manifests:

::

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx1
    spec:
      nodeSelector:
        kubernetes.io/hostname: vm5
      containers:
        - name: nginx
          image: nginx

::

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx2
    spec:
      nodeSelector:
        kubernetes.io/hostname: vm6
      containers:
        - name: nginx
          image: nginx

After deploying the manifests, verify that the pods were deployed on
different hosts:

::

    $ kubectl get pods -o wide
    NAME      READY     STATUS    RESTARTS   AGE       IP           NODE
    nginx1    1/1       Running   0          13m       10.1.36.2    vm5
    nginx2    1/1       Running   0          13m       10.1.219.3   vm6

Now you can verify the connectivity to both nginx pods from a busybox
pod:

::

    kubectl run busybox --rm -it --image=busybox /bin/sh

    / # wget 10.1.36.2
    Connecting to 10.1.36.2 (10.1.36.2:80)
    index.html           100% |**********************************|   612   0:00:00 ETA

    / # rm index.html

    / # wget 10.1.219.3
    Connecting to 10.1.219.3 (10.1.219.3:80)
    index.html           100% |**********************************|   612   0:00:00 ETA

Uninstalling Contiv-VPP
~~~~~~~~~~~~~~~~~~~~~~~

To uninstall the network plugin itself, use ``kubectl``:

::

    kubectl delete -f https://raw.githubusercontent.com/contiv/vpp/master/k8s/contiv-vpp.yaml

Tearing down Kubernetes
~~~~~~~~~~~~~~~~~~~~~~~

-  First, drain the node and make sure that the node is empty before
   shutting it down:

   ::

       kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
       kubectl delete node <node name>

-  Next, on the node being removed, reset all state installed by
   kubeadm:

   ::

       rm -rf $HOME/.kube
       sudo su
       kubeadm reset

-  If you added environment variable definitions into
   ``/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`` (as part of
   the `Custom Management Network
   setup <https://github.com/contiv/vpp/blob/master/docs/CUSTOM_MGMT_NETWORK.md#setting-up-a-custom-management-network-on-multi-homed-nodes>`__),
   remove those definitions now.

Troubleshooting
~~~~~~~~~~~~~~~

Some of the issues that can occur during the installation are:

-  Forgetting to create and initialize the ``.kube`` directory in your
   home directory (as instructed by ``kubeadm init --token-ttl 0``).
   This can manifest itself as the following error:

   ::

       W1017 09:25:43.403159    2233 factory_object_mapping.go:423] Failed to download OpenAPI (Get https://192.168.209.128:6443/swagger-2.0.0.pb-v1: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")), falling back to swagger
       Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

-  A previous installation lingering on the file system:
   ``kubeadm init --token-ttl 0`` fails to initialize kubelet with one
   or more of the following error messages:

   ::

       ...
       [kubelet-check] It seems like the kubelet isn't running or healthy.
       [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
       ...

If you run into any of the above issues, try to clean up and reinstall
as root:

::

    sudo su
    rm -rf $HOME/.kube
    kubeadm reset
    kubeadm init --token-ttl 0
    rm -rf /var/etcd/contiv-data
    rm -rf /var/bolt/bolt.db

Contiv-specific kubeadm installation on Aarch64
-----------------------------------------------

Supplemental instructions apply when using Contiv-VPP on Aarch64. Most
installation steps for Aarch64 are the same as those described earlier
in this chapter, so you should read the preceding sections first before
starting the installation on the Aarch64 platform.

Use the `Aarch64-specific kubeadm install
instructions <https://github.com/contiv/vpp/blob/master/docs/arm64/MANUAL_INSTALL_ARM64.md>`__
to manually install Kubernetes with Contiv-VPP networking on one or
more bare-metal Aarch64 hosts.