Contiv-VPP Vagrant Installation
===============================

Prerequisites
-------------

The following items are prerequisites before installing Vagrant:

- Vagrant 2.0.1 or later
- Hypervisors:

  - VirtualBox 5.2.8 or later
  - VMware Fusion 10.1.0 or later, or VMware Workstation 14
  - For VMware Fusion, you will need the `Vagrant VMware Fusion
    plugin <https://www.vagrantup.com/vmware/index.html>`__

- Laptop or server with at least 4 CPU cores and 16 GB of RAM
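
A quick way to confirm the Vagrant side of these prerequisites (a
sketch; the VMware plugin is only needed for VMware users, and on newer
Vagrant releases it is named ``vagrant-vmware-desktop`` rather than
``vagrant-vmware-fusion``):

::

   vagrant --version
   vagrant plugin list
   # install the VMware Fusion plugin if it is missing (it requires a license)
   vagrant plugin install vagrant-vmware-fusion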

Creating / Shutting Down / Destroying the Cluster
-------------------------------------------------

This folder contains the Vagrantfile that is used to create a single-node or
multi-node Kubernetes cluster using Contiv-VPP as a network plugin.

The folder is organized into two subfolders:

- (config) - contains the files that share cluster information, which
  are used during the provisioning stage (master IP address,
  certificates, hash keys). **CAUTION:** Editing is not recommended!
- (vagrant) - contains scripts that are used for creating, destroying,
  rebooting and shutting down the VMs that host the K8s cluster.

To create and run a K8s cluster with the *contiv-vpp CNI* plugin, run the
``vagrant-start`` script, located in the `vagrant
folder <https://github.com/contiv/vpp/tree/master/vagrant>`__. The
``vagrant-start`` script prompts the user to select the number of worker
nodes for the Kubernetes cluster. Zero (0) worker nodes means that a
single-node cluster (with one Kubernetes master node) will be deployed.

Next, the user is prompted to select either the *production environment*
or the *development environment*. Instructions on how to build the
development *contiv/vpp-vswitch* image can be found below in the
`development
environment <#building-and-deploying-the-dev-contiv-vswitch-image>`__
command section.

The last option asks the user to select either *Without StealTheNIC* or
*With StealTheNIC*. With the *With StealTheNIC* option, the plugin
"steals" interfaces owned by Linux and reuses their configuration in VPP.
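
If you deploy *With StealTheNIC*, one quick way to confirm that the
stolen NIC's configuration was taken over (a sketch; the interface name
depends on your VM) is to check the addresses on the VPP interfaces
once the cluster is up:

::

   vagrant ssh k8s-master
   sudo vppctl show interface address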

For the production environment, enter the following commands:

::

   | => ./vagrant-start
   Please provide the number of workers for the Kubernetes cluster (0-50) or enter [Q/q] to exit: 1

   Please choose Kubernetes environment:
   1) Production
   2) Development
   3) Quit
   --> 1
   You chose Production environment

   Please choose deployment scenario:
   1) Without StealTheNIC
   2) With StealTheNIC
   3) Quit
   --> 1
   You chose deployment without StealTheNIC

   Creating a production environment, without STN and 1 worker node(s)

For the development environment, enter the following commands:

::

   | => ./vagrant-start
   Please provide the number of workers for the Kubernetes cluster (0-50) or enter [Q/q] to exit: 1

   Please choose Kubernetes environment:
   1) Production
   2) Development
   3) Quit
   --> 2
   You chose Development environment

   Please choose deployment scenario:
   1) Without StealTheNIC
   2) With StealTheNIC
   3) Quit
   --> 1
   You chose deployment without StealTheNIC

   Creating a development environment, without STN and 1 worker node(s)

To destroy and clean up the cluster, run the *vagrant-cleanup* script,
located `inside the vagrant
folder <https://github.com/contiv/vpp/tree/master/vagrant>`__:

::

   cd vagrant/
   ./vagrant-cleanup

To shut down the cluster, run the *vagrant-shutdown* script, located
`inside the vagrant
folder <https://github.com/contiv/vpp/tree/master/vagrant>`__:

::

   cd vagrant/
   ./vagrant-shutdown

To reboot the cluster, run the *vagrant-reload* script, located
`inside the vagrant
folder <https://github.com/contiv/vpp/tree/master/vagrant>`__:

::

   cd vagrant/
   ./vagrant-reload

From a suspended state, or after a reboot of the host machine, the
cluster can be brought up by running the *vagrant-up* script.
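
Assuming the same usage pattern as the other helper scripts above, it
is run from the vagrant folder:

::

   cd vagrant/
   ./vagrant-up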

Building and Deploying the dev-contiv-vswitch Image
---------------------------------------------------

If you chose the development environment deployment option, follow
these instructions to build a modified *contivvpp/vswitch* image:

- Make sure changes in the code have been saved. From the k8s-master
  node, build the new *contivvpp/vswitch* image (run as sudo):

::

   vagrant ssh k8s-master
   cd /vagrant/config
   sudo ./save-dev-image

- The newly built *contivvpp/vswitch* image is now tagged as *latest*.
  Verify the build with ``sudo docker images``; the *contivvpp/vswitch*
  image should have been created a few seconds ago. The new image with
  all the changes must become available to all the nodes in the K8s
  cluster. To make the changes available to all nodes, load the Docker
  image into the running worker nodes (run as sudo):

::

   vagrant ssh k8s-worker1
   cd /vagrant/config
   sudo ./load-dev-image

- Verify with ``sudo docker images``; the old *contivvpp/vswitch* image
  should now be tagged as ``<none>`` and the latest tagged
  *contivvpp/vswitch* image should have been created a few seconds ago.
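
For example, the check on a worker node might look like this (commands
only; image IDs and timestamps will differ per build):

::

   vagrant ssh k8s-worker1
   sudo docker images | grep vswitch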

Exploring the Cluster
---------------------

Once the cluster is up, perform the following steps:

- Log into the master:

::

   cd vagrant

   vagrant ssh k8s-master

   Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-21-generic x86_64)

   * Documentation:  https://help.ubuntu.com/
   vagrant@k8s-master:~$

- Verify the Kubernetes/Contiv-VPP installation. First, verify the
  nodes in the cluster:

::

   vagrant@k8s-master:~$ kubectl get nodes -o wide

   NAME          STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
   k8s-master    Ready     master    22m       v1.9.2    <none>        Ubuntu 16.04 LTS   4.4.0-21-generic   docker://17.12.0-ce
   k8s-worker1   Ready     <none>    15m       v1.9.2    <none>        Ubuntu 16.04 LTS   4.4.0-21-generic   docker://17.12.0-ce

- Next, verify that all pods are running correctly:

::

   vagrant@k8s-master:~$ kubectl get pods -n kube-system -o wide

   NAME                                 READY     STATUS             RESTARTS   AGE       IP             NODE
   contiv-etcd-2ngdc                    1/1       Running            0          17m       192.169.1.10   k8s-master
   contiv-ksr-x7gsq                     1/1       Running            3          17m       192.169.1.10   k8s-master
   contiv-vswitch-9bql6                 2/2       Running            0          17m       192.169.1.10   k8s-master
   contiv-vswitch-hpt2x                 2/2       Running            0          10m       192.169.1.11   k8s-worker1
   etcd-k8s-master                      1/1       Running            0          16m       192.169.1.10   k8s-master
   kube-apiserver-k8s-master            1/1       Running            0          16m       192.169.1.10   k8s-master
   kube-controller-manager-k8s-master   1/1       Running            0          15m       192.169.1.10   k8s-master
   kube-dns-6f4fd4bdf-62rv4             2/3       CrashLoopBackOff   14         17m       10.1.1.2       k8s-master
   kube-proxy-bvr74                     1/1       Running            0          10m       192.169.1.11   k8s-worker1
   kube-proxy-v4fzq                     1/1       Running            0          17m       192.169.1.10   k8s-master
   kube-scheduler-k8s-master            1/1       Running            0          16m       192.169.1.10   k8s-master

- If you want your pods to be scheduled on both the master and the
  workers, you have to untaint the master node:

::

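   # Assumed command (not shown in the original doc): the standard way to
   # remove the master taint on a Kubernetes 1.9-era cluster; the taint key
   # differs on newer Kubernetes releases.
   vagrant@k8s-master:~$ kubectl taint nodes k8s-master node-role.kubernetes.io/master-
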
- Check VPP and its interfaces:

::

   vagrant@k8s-master:~$ sudo vppctl
       _______    _        _   _____  ___
    __/ __/ _ \  (_)__    | | / / _ \/ _ \
    _/ _// // / / / _ \   | |/ / ___/ ___/
    /_/ /____(_)_/\___/   |___/_/  /_/

   vpp# sh interface
                 Name               Idx       State          Counter          Count
   GigabitEthernet0/8/0              1         up       rx packets                    14
                                                        rx bytes                    3906
                                                        tx packets                    18
                                                        tx bytes                    2128
                                                        drops                          3
                                                        ip4                           13
   ...


- Make sure that ``GigabitEthernet0/8/0`` is listed and that its status
  is ``up``.

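If the interface is missing or down, more detail about the underlying
NIC can be obtained from the VPP CLI (a sketch; the interface name and
driver depend on your hypervisor):

::

   vpp# show hardware-interfaces
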
- Next, create an example deployment of nginx pods:

::

   vagrant@k8s-master:~$ kubectl run nginx --image=nginx --replicas=2
   deployment "nginx" created

- Check the status of the deployment:

::

   vagrant@k8s-master:~$ kubectl get deploy -o wide

   NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES    SELECTOR
   nginx     2         2         2            2           2h        nginx        nginx     run=nginx

- Verify that the pods in the deployment are up and running:

::

   vagrant@k8s-master:~$ kubectl get pods -o wide

   NAME                   READY     STATUS    RESTARTS   AGE       IP         NODE
   nginx-8586cf59-6kx2m   1/1       Running   1          1h        10.1.2.3   k8s-worker1
   nginx-8586cf59-j5vf9   1/1       Running   1          1h        10.1.2.2   k8s-worker1

- Issue an HTTP GET request to a pod in the deployment:

::

   vagrant@k8s-master:~$ wget 10.1.2.2

   --2018-01-19 12:34:08--  http://10.1.2.2/
   Connecting to 10.1.2.2:80... connected.
   HTTP request sent, awaiting response... 200 OK
   Length: 612 [text/html]
   Saving to: index.html.1

   index.html.1     100%[=========================================>]     612  --.-KB/s    in 0s

   2018-01-19 12:34:08 (1.78 MB/s) - index.html.1 saved [612/612]

How to SSH into k8s Worker Node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To SSH into a k8s worker node, perform the following steps:

::

   cd vagrant

   vagrant status

   vagrant ssh k8s-worker1