.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2019 Samsung Electronics Co., Ltd.

OOM ONAP Offline Installer Package Build Guide
=============================================================

This document describes the procedure for building offline installer packages. It is supposed to be run on a server with internet connectivity and will download all artifacts required for ONAP deployment based on our static lists. Preferably, a separate build server should be used for this procedure.

The procedure was fully tested on RHEL 7.6, which is the target platform; with small adaptations it should be applicable to other platforms as well.
Some discrepancies for CentOS 7.6 are described below where relevant.

Part 1. Preparations
--------------------

We assume that the procedure is executed on a RHEL 7.6 server with \~300 GB of disk space, 16 GB+ RAM and internet connectivity.

Moreover, the following software packages have to be installed:

* for the Preparation (Part 1), the Download artifacts for offline installer (Part 2) and the Application helm charts preparation and patching (Part 4)

  - git
  - wget

* for the Download artifacts for offline installer (Part 2) only

  - createrepo
  - dpkg-dev
  - python2-pip

* for the Download artifacts for offline installer (Part 2) and the Populate local nexus (Part 3)

  - nodejs
  - jq
  - docker (exact version docker-ce-17.03.2)

* for the Download artifacts for offline installer (Part 2) and for the Application helm charts preparation and patching (Part 4)

  - patch

* for the Populate local nexus (Part 3)

  - twine

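Before proceeding it is worth verifying that the required tools are actually on PATH, so a missing dependency does not surface halfway through a long download run. The sketch below is our own illustration, not part of the installer; the helper name ``check_tools`` is hypothetical:

```shell
#!/bin/sh
# Sketch: report any required tool missing from PATH.
# check_tools is a hypothetical helper, not part of the offline installer.
check_tools() {
  for t in "$@"; do
    command -v "$t" >/dev/null 2>&1 || echo "missing: $t"
  done
}

# tools named in the package lists above (pip/node stand in for python2-pip/nodejs)
check_tools git wget createrepo pip node jq docker patch twine
```

An empty output means everything is in place; each printed line names a package still to be installed.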
Configure the repositories for downloading all RPMs needed by the download/packaging tooling:

::

  ############
  # RHEL 7.6 #
  ############

  # Register server
  subscription-manager register --username <rhel licence name> --password <password> --auto-attach

  # enable epel for npm and jq
  rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

  # enable rhel-7-server-e4s-optional-rpms in /etc/yum.repos.d/redhat.repo

Alternatively:

::

  ##############
  # Centos 7.6 #
  ##############

  # enable epel repo for npm and jq
  yum install -y epel-release

Subsequent steps are the same on both platforms:

::

  # install following packages
  yum install -y expect nodejs git wget createrepo python2-pip jq patch dpkg-dev

  pip install twine

  # install docker
  curl https://releases.rancher.com/install-docker/17.03.sh | sh

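Since the exact docker version matters here (docker-ce-17.03.2 is pinned above), a quick post-install sanity check can catch a host where a different docker was already present. This is a sketch of our own; the helper ``is_docker_1703`` is hypothetical:

```shell
#!/bin/sh
# Sketch: warn when the installed docker client is not the pinned 17.03 series.
# is_docker_1703 is a hypothetical helper, not part of the installer tooling.
is_docker_1703() {
  case "$1" in
    17.03.*) return 0 ;;
    *)       return 1 ;;
  esac
}

ver=$(docker version --format '{{.Client.Version}}' 2>/dev/null || true)
is_docker_1703 "$ver" || echo "warning: docker 17.03.x expected, got: ${ver:-none}"
```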
Then it is necessary to clone all installer and build related repositories and prepare the directory structure.

::

  # prepare the onap build directory structure
  cd /tmp
  git clone https://gerrit.onap.org/r/oom/offline-installer onap-offline
  cd onap-offline

Part 2. Download artifacts for offline installer
------------------------------------------------

.. note:: Skip this step if you already have all the necessary resources and continue with Part 3. Populate local nexus.

All artifacts should be downloaded by running the download script as follows:

::

  ./build/download_offline_data_by_lists.sh <project>

For example:

::

  # onap_3.0.0 for casablanca (sign-off 30/11/2018)
  # onap_3.0.1 for casablanca maintenance release (sign-off 10/12/2018)

  $ ./build/download_offline_data_by_lists.sh onap_3.0.1

The download is only as reliable as the internet connectivity; it is highly recommended to run the script in screen and to save a log file from its execution, so you can check whether all artifacts were successfully collected. Each start and end of a script call should contain a timestamp in the console output. Downloading consists of 10 steps, which should be checked one-by-one at the end.

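The one-by-one check can be partly automated by scanning the saved log for the step markers. This is a sketch under the assumption that the script echoes headers containing "Step N/10" (as in the step names listed below); the helper ``check_steps`` is our own illustration:

```shell
#!/bin/sh
# Sketch: check a saved download log for all 10 step markers.
# Assumes headers like "[Step 3/10 Download git repos]" appear in the log.
check_steps() {
  log="$1"; rc=0
  for i in 1 2 3 4 5 6 7 8 9 10; do
    grep -q "Step $i/10" "$log" || { echo "step $i not found in $log"; rc=1; }
  done
  return $rc
}
```

E.g. ``check_steps /tmp/download.log`` after the run; any printed line points at a step that needs to be retriggered.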
**Verify:** *Please take a look at the following comments on the respective
parts of the download script.*

[Step 1/10 Download collected docker images]

=> the image download step is quite reliable and contains retry logic

E.g.

::

  == pkg #143 of 163 ==
  rancher/etc-host-updater:v0.0.3
  digest:sha256:bc156a5ae480d6d6d536aa454a9cc2a88385988617a388808b271e06dc309ce8
  Error response from daemon: Get https://registry-1.docker.io/v2/rancher/etc-host-updater/manifests/v0.0.3: Get
  https://auth.docker.io/token?scope=repository%3Arancher%2Fetc-host-updater%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
  WARNING [!]: warning Command docker -l error pull rancher/etc-host-updater:v0.0.3 failed.
  Attempt: 2/5
  INFO: info waiting 10s for another try...
  v0.0.3: Pulling from rancher/etc-host-updater
  b3e1c725a85f: Already exists
  6a710864a9fc: Already exists
  d0ac3b234321: Already exists
  87f567b5cf58: Already exists
  16914729cfd3: Already exists
  83c2da5790af: Pulling fs layer
  83c2da5790af: Verifying Checksum
  83c2da5790af: Download complete
  83c2da5790af: Pull complete

[Step 2/10 Build own nginx image]

=> there is no hardening in this step; if it fails, it needs to be
retriggered. It should end with:

::

  Successfully built <id>

[Step 3/10 Save docker images from docker cache to tarfiles]

=> quite reliable, retry logic in place

[Step 4/10 move infra related images to infra folder]

=> should be safe; the precondition is that step (3) did not fail

[Step 5/10 Download git repos]

=> potentially unsafe, no hardening in place. If it did not download all git repos, it has to be executed again. The easiest way is probably to comment out the other steps in the load script and run it again.

E.g.

::

  Cloning into bare repository
  'github.com/rancher/community-catalog.git'...
  error: RPC failed; result=28, HTTP code = 0
  fatal: The remote end hung up unexpectedly
  Cloning into bare repository 'git.rancher.io/rancher-catalog.git'...
  Cloning into bare repository
  'gerrit.onap.org/r/testsuite/properties.git'...
  Cloning into bare repository 'gerrit.onap.org/r/portal.git'...
  Cloning into bare repository 'gerrit.onap.org/r/aaf/authz.git'...
  Cloning into bare repository 'gerrit.onap.org/r/demo.git'...
  Cloning into bare repository
  'gerrit.onap.org/r/dmaap/messagerouter/messageservice.git'...
  Cloning into bare repository 'gerrit.onap.org/r/so/docker-config.git'...

[Step 6/10 Download http files]

[Step 7/10 Download npm pkgs]

[Step 8/10 Download bin tools]

=> works quite reliably. If it did not download all artifacts, the easiest way is probably to comment out the other steps in the load script and run it again.

[Step 9/10 Download rhel pkgs]

=> this step works on RHEL only; for other platforms different packages have to be downloaded.

The following is considered a successful run of this part:

::

  Available: 1:net-snmp-devel-5.7.2-32.el7.i686 (rhel-7-server-rpms)
  net-snmp-devel = 1:5.7.2-32.el7
  Available: 1:net-snmp-devel-5.7.2-33.el7_5.2.i686 (rhel-7-server-rpms)
  net-snmp-devel = 1:5.7.2-33.el7_5.2
  Dependency resolution failed, some packages will not be downloaded.
  No Presto metadata available for rhel-7-server-rpms
  https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm:
  [Errno 12] Timeout on
  https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm:
  (28, 'Operation timed out after 30001 milliseconds with 0 out of 0 bytes
  received')
  Trying other mirror.
  Spawning worker 0 with 230 pkgs
  Spawning worker 1 with 230 pkgs
  Spawning worker 2 with 230 pkgs
  Spawning worker 3 with 230 pkgs
  Spawning worker 4 with 229 pkgs
  Spawning worker 5 with 229 pkgs
  Spawning worker 6 with 229 pkgs
  Spawning worker 7 with 229 pkgs
  Workers Finished
  Saving Primary metadata
  Saving file lists metadata
  Saving other metadata
  Generating sqlite DBs
  Sqlite DBs complete

[Step 10/10 Download sdnc-ansible-server packages]

=> there is again no retry logic in this part. It collects packages for sdnc-ansible-server in exactly the same way as that container does; however, due to an upstream bug the image in place will not work with those packages, as the old ones are no longer available and the newer ones are not compatible with other components inside that image.

Part 3. Populate local nexus
----------------------------

Prerequisites:

- All data lists and resources which are pushed to the local nexus repository are available
- The following ports are not occupied by another service: 80, 8081, 8082, 10001
- There's no docker container called "nexus"

.. note:: In case you skipped Part 2 (the artifacts download), please ensure that the copy of the resources data is untarred in *./install/onap-offline/resources/*

The whole nexus blob data tarball will be created by running the script
build\_nexus\_blob.sh. It will load the listed docker images, run
Nexus and configure it as an npm and docker repository. Then it will push all
listed npm packages and docker images to the repositories. After all is
done, the repository container is stopped and a tarball is created from the
nexus-data directory.

There are mandatory parameters that need to be set in the configuration file:

+------------------------------+------------------------------------------------------------------------------------------+
| Parameter                    | Description                                                                              |
+==============================+==========================================================================================+
| NXS\_SRC\_DOCKER\_IMG\_DIR   | resource directory of docker images                                                      |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_SRC\_NPM\_DIR           | resource directory of npm packages                                                       |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_SRC\_PYPI\_DIR          | resource directory of pypi packages                                                      |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_DOCKER\_IMG\_LIST       | list of docker images to be pushed to Nexus repository                                   |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_DOCKER\_WO\_LIST        | list of docker images which use the default repository                                   |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_NPM\_LIST               | list of npm packages to be published to Nexus repository                                 |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_PYPI\_LIST              | list of pypi packages to be published to Nexus repository                                |
+------------------------------+------------------------------------------------------------------------------------------+
| NEXUS\_DATA\_TAR             | target tarball of Nexus data path/name                                                   |
+------------------------------+------------------------------------------------------------------------------------------+
| NEXUS\_DATA\_DIR             | directory used for the Nexus blob build                                                  |
+------------------------------+------------------------------------------------------------------------------------------+
| NEXUS\_IMAGE                 | Sonatype/Nexus3 docker image which will be used for data blob creation for this script   |
+------------------------------+------------------------------------------------------------------------------------------+

Some of the docker images using the default registry require special
treatment (e.g. they use different ports or an SSL connection); therefore
there is the list NXS\_DOCKER\_WO\_LIST, based on which those images are
retagged so that they can be pushed to our nexus repository.
The following steps can be used to split *docker_images.list* into files for the
NXS_DOCKER_IMG_LIST & NXS_DOCKER_WO_LIST variables.

E.g.

::

  sed -n '/\.[^/].*\//p' onap_3.0.1-docker_images.list > /tmp/onap-me-data_lists/docker_img.list
  sed -n '/\.[^/].*\//!p' onap_3.0.1-docker_images.list > /tmp/onap-me-data_lists/docker_no_registry.list

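The filter splits entries on whether the part before the first slash looks like a registry host (contains a dot). It can be sanity-checked on a couple of sample entries; the image names below are purely illustrative:

```shell
#!/bin/sh
# Demonstration of the split: images hosted on a named registry (host with a
# dot before the first slash) vs. images from the default registry.
printf '%s\n' \
  'nexus3.onap.org:10001/onap/aai/esr-gui:1.2.2' \
  'library/busybox:latest' > /tmp/sample_images.list

sed -n '/\.[^/].*\//p'  /tmp/sample_images.list   # -> nexus3.onap.org... line
sed -n '/\.[^/].*\//!p' /tmp/sample_images.list   # -> library/busybox line
```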
.. note:: It's recommended to use absolute paths in the configuration file for the current script.

Example of the configuration file:

::

  NXS_SRC_DOCKER_IMG_DIR="/tmp/onap-offline/resources/offline_data/docker_images_for_nexus"
  NXS_SRC_NPM_DIR="/tmp/onap-offline/resources/offline_data/npm_tar"
  NXS_SRC_PYPI_DIR="/tmp/onap-offline/resources/offline_data/pypi"
  NXS_DOCKER_IMG_LIST="/tmp/onap-me-data_lists/docker_img.list"
  NXS_DOCKER_WO_LIST="/tmp/onap-me-data_lists/docker_no_registry.list"
  NXS_NPM_LIST="/tmp/onap-offline/bash/tools/data_list/onap_3.0.0-npm.list"
  NEXUS_DATA_TAR="/root/nexus_data.tar"
  NEXUS_DATA_DIR="/tmp/onap-offline/resources/nexus_data"
  NEXUS_IMAGE="/tmp/onap-offline/resources/offline_data/docker_images_infra/sonatype_nexus3_latest.tar"

Once everything is ready, you can run the script as in the following example:

``$ ./install/onap-offline/build_nexus_blob.sh /root/nexus_build.conf``

Where nexus\_build.conf is the configuration file and
/root/nexus\_data.tar is the destination tarball.

.. note:: Move, link or mount the NEXUS\_DATA\_DIR to the resources directory if a different directory was specified in the configuration, or use the resulting nexus\_data.tar for movement between machines.

Once the Nexus data blob is created, the docker images and npm packages
can be deleted to reduce the package size, as they won't be needed at
installation time:

E.g.

::

  rm -f /tmp/onap-offline/resources/offline_data/docker_images_for_nexus/*
  rm -rf /tmp/onap-offline/resources/offline_data/npm_tar

Part 4. Application helm charts preparation and patching
--------------------------------------------------------

This step clones the oom repository and patches it so that it can be used
offline. Use the following command:

::

  ./build/fetch_and_patch_charts.sh <helm charts repo> <commit/tag/branch> <patchfile> <target_dir>

For example:

::

  ./build/fetch_and_patch_charts.sh https://gerrit.onap.org/r/oom 3.0.0-ONAP /tmp/offline-installer/patches/casablanca.patch /tmp/oom-clone

Part 5. Creating offline installation package
---------------------------------------------

For the packaging itself it's necessary to prepare a configuration. You can
use ./build/package.conf as a template or
modify it directly.

There are some parameters that need to be set in the configuration file.
The example values below are set up according to the steps done in this guide to package ONAP.

+---------------------------------------+------------------------------------------------------------------------------+
| Parameter                             | Description                                                                  |
+=======================================+==============================================================================+
| HELM\_CHARTS\_DIR                     | directory with Helm charts for the application                               |
|                                       |                                                                              |
|                                       | Example: /tmp/oom-clone/kubernetes                                           |
+---------------------------------------+------------------------------------------------------------------------------+
| APP\_CONFIGURATION                    | application install configuration (application_configuration.yml) for       |
|                                       | ansible installer and custom ansible role code directories if any.           |
|                                       |                                                                              |
|                                       | Example::                                                                    |
|                                       |                                                                              |
|                                       |   APP_CONFIGURATION=(                                                        |
|                                       |     /tmp/offline-installer/config/application_configuration.yml              |
|                                       |     /tmp/offline-installer/patches/onap-casablanca-patch-role                |
|                                       |   )                                                                          |
|                                       |                                                                              |
+---------------------------------------+------------------------------------------------------------------------------+
| APP\_BINARY\_RESOURCES\_DIR           | directory with all (binary) resources for offline infra and application      |
|                                       |                                                                              |
|                                       | Example: /tmp/onap-offline/resources                                         |
+---------------------------------------+------------------------------------------------------------------------------+
| APP\_AUX\_BINARIES                    | additional binaries such as docker images loaded during runtime [optional]   |
+---------------------------------------+------------------------------------------------------------------------------+

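Put together, a configuration based on the example values above might look like the sketch below. This is only an illustration assembled from this guide's paths; consult ./build/package.conf for the authoritative set of variables:

```shell
# Hypothetical package.conf assembled from the example values above
HELM_CHARTS_DIR="/tmp/oom-clone/kubernetes"

APP_CONFIGURATION=(
    /tmp/offline-installer/config/application_configuration.yml
    /tmp/offline-installer/patches/onap-casablanca-patch-role
)

APP_BINARY_RESOURCES_DIR="/tmp/onap-offline/resources"

# optional: extra binaries such as docker images loaded during runtime
#APP_AUX_BINARIES=()
```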
Offline installer packages are created with prepopulated data via the
following command run from the offline-installer directory:

::

  ./build/package.sh <project> <version> <packaging target directory>

E.g.

::

  ./build/package.sh onap 1.0.1 /tmp/package

So in the target directory you should find the following tar files:

::

  offline-<PROJECT_NAME>-<PROJECT_VERSION>-sw.tar
  offline-<PROJECT_NAME>-<PROJECT_VERSION>-resources.tar
  offline-<PROJECT_NAME>-<PROJECT_VERSION>-aux-resources.tar