.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2019 Samsung Electronics Co., Ltd.

OOM ONAP Offline Installer Package Build Guide
==============================================

This document describes the procedure for building offline installer packages. It is supposed to be run on a server with internet connectivity and will download all artifacts required for ONAP deployment based on our static lists. The server used for the procedure in this guide is preferably a separate build server.

The procedure was fully tested on RHEL 7.4, its target platform; however, with small adaptations it should also be applicable to other platforms.

Part 1. Preparations
--------------------

We assume that the procedure is executed on a RHEL 7.4 server with ~300G disk space, 16G+ RAM and internet connectivity.

Moreover, the following software packages have to be installed:

* for the Preparation (Part 1), the Download artifacts for offline installer (Part 2) and the application helm charts preparation and patching (Part 4)

  - git
  - wget

* for the Download artifacts for offline installer (Part 2) only

  - createrepo
  - dpkg-dev
  - python2-pip

* for the Download artifacts for offline installer (Part 2) and the Populate local nexus (Part 3)

  - nodejs
  - jq
  - docker (exact version docker-ce-17.03.2)

* for the Download artifacts for offline installer (Part 2) and for the Application helm charts preparation and patching (Part 4)

  - patch

* for the Populate local nexus (Part 3)

  - twine

This can be achieved by the following commands:

::

    # Register server
    subscription-manager register --username <rhel licence name> --password <password> --auto-attach

    # enable epel for npm and jq
    rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

    # enable rhel-7-server-e4s-optional-rpms in /etc/yum.repos.d/redhat.repo

    # install the following packages
    yum install -y expect nodejs git wget createrepo python2-pip jq patch

    # install twine for publishing python packages
    pip install twine

    # install docker
    curl https://releases.rancher.com/install-docker/17.03.sh | sh

Then it is necessary to clone all installer and build related repositories and prepare the directory structure.

::

    # prepare the onap build directory structure
    cd /tmp
    git clone -b casablanca https://gerrit.onap.org/r/oom/offline-installer onap-offline
    cd onap-offline

Part 2. Download artifacts for offline installer
-------------------------------------------------

**Note: Skip this step if you already have all necessary resources and continue with Part 3. Populate local nexus**

All artifacts should be downloaded by running the download script as follows:

./build/download_offline_data_by_lists.sh <project>

For example:

``$ ./build/download_offline_data_by_lists.sh onap_3.0.0``

The download is only as reliable as the network connectivity to the internet, so it is highly recommended to run it in screen and to save a log file of the script execution for checking whether all artifacts were successfully collected. Each start and end of a script call should contain a timestamp in the console output. Downloading consists of 10 steps, which should be checked at the end one by one.
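
A minimal sketch of such an invocation; the screen session name and the log path are just examples:

::

    # run the download inside a named screen session and keep a timestamped log
    screen -S onap-download
    ./build/download_offline_data_by_lists.sh onap_3.0.0 2>&1 | tee /tmp/onap_download_$(date +%Y%m%d_%H%M%S).log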

**Verify:** *Please take a look at the following comments on the respective parts of the download script*

[Step 1/10 Download collected docker images]

=> the image download step is quite reliable and contains retry logic

E.g.

::

    == pkg #143 of 163 ==
    rancher/etc-host-updater:v0.0.3
    digest:sha256:bc156a5ae480d6d6d536aa454a9cc2a88385988617a388808b271e06dc309ce8
    Error response from daemon: Get https://registry-1.docker.io/v2/rancher/etc-host-updater/manifests/v0.0.3: Get
    https://auth.docker.io/token?scope=repository%3Arancher%2Fetc-host-updater%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
    WARNING [!]: warning Command docker -l error pull rancher/etc-host-updater:v0.0.3 failed.
    Attempt: 2/5
    INFO: info waiting 10s for another try...
    v0.0.3: Pulling from rancher/etc-host-updater
    b3e1c725a85f: Already exists
    6a710864a9fc: Already exists
    d0ac3b234321: Already exists
    87f567b5cf58: Already exists
    16914729cfd3: Already exists
    83c2da5790af: Pulling fs layer
    83c2da5790af: Verifying Checksum
    83c2da5790af: Download complete
    83c2da5790af: Pull complete

[Step 2/10 Build own nginx image]

=> there is no hardening in this step; if it fails, it needs to be retriggered. It should end with **Successfully built <id>**
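
Assuming the log was saved as in the screen/tee sketch above, one quick way to confirm this step succeeded is to search for the docker success message:

::

    # the log path follows the naming used in the earlier example
    grep "Successfully built" /tmp/onap_download_*.log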

[Step 3/10 Save docker images from docker cache to tarfiles]

=> quite reliable, retry logic in place

[Step 4/10 move infra related images to infra folder]

=> should be safe, the precondition is that step (3) did not fail

[Step 5/10 Download git repos]

=> potentially unsafe, no hardening in place. If it does not download all git repos, it has to be executed again. The easiest way is probably to comment out the other steps in the download script and run it again.

E.g.

::

    Cloning into bare repository
    'github.com/rancher/community-catalog.git'...
    error: RPC failed; result=28, HTTP code = 0
    fatal: The remote end hung up unexpectedly
    Cloning into bare repository 'git.rancher.io/rancher-catalog.git'...
    Cloning into bare repository
    'gerrit.onap.org/r/testsuite/properties.git'...
    Cloning into bare repository 'gerrit.onap.org/r/portal.git'...
    Cloning into bare repository 'gerrit.onap.org/r/aaf/authz.git'...
    Cloning into bare repository 'gerrit.onap.org/r/demo.git'...
    Cloning into bare repository
    'gerrit.onap.org/r/dmaap/messagerouter/messageservice.git'...
    Cloning into bare repository 'gerrit.onap.org/r/so/docker-config.git'...

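After a re-run, a rough completeness check might look like the sketch below; the directory holding the bare clones is an assumption and has to be adjusted to the layout of your resources directory:

::

    # count the bare repositories that were actually cloned and compare with the expected number
    find ./resources/offline_data -type d -name '*.git' | wc -l
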
[Step 6/10 Download http files]

[Step 7/10 Download npm pkgs]

[Step 8/10 Download bin tools]

=> these steps work quite reliably; if not all artifacts were downloaded, the easiest way is probably to comment out the other steps in the download script and run it again.

[Step 9/10 Download rhel pkgs]

=> this is the step which will work on RHEL only; for other platforms, different packages have to be downloaded.

The following is considered a successful run of this part:

::

    Available: 1:net-snmp-devel-5.7.2-32.el7.i686 (rhel-7-server-rpms)
        net-snmp-devel = 1:5.7.2-32.el7
    Available: 1:net-snmp-devel-5.7.2-33.el7_5.2.i686 (rhel-7-server-rpms)
        net-snmp-devel = 1:5.7.2-33.el7_5.2
    Dependency resolution failed, some packages will not be downloaded.
    No Presto metadata available for rhel-7-server-rpms
    https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm:
    [Errno 12] Timeout on
    https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm:
    (28, 'Operation timed out after 30001 milliseconds with 0 out of 0 bytes received')
    Trying other mirror.
    Spawning worker 0 with 230 pkgs
    Spawning worker 1 with 230 pkgs
    Spawning worker 2 with 230 pkgs
    Spawning worker 3 with 230 pkgs
    Spawning worker 4 with 229 pkgs
    Spawning worker 5 with 229 pkgs
    Spawning worker 6 with 229 pkgs
    Spawning worker 7 with 229 pkgs
    Workers Finished
    Saving Primary metadata
    Saving file lists metadata
    Saving other metadata
    Generating sqlite DBs
    Sqlite DBs complete

[Step 10/10 Download sdnc-ansible-server packages]

=> there is again no retry logic in this part. It collects the packages for the sdnc-ansible-server in exactly the same way as the container itself does; however, there is an upstream bug: the image in place will not work with those packages, as the old ones are no longer available and the newer ones are not compatible with other software inside that image.

Part 3. Populate local nexus
----------------------------

Prerequisites:

- All data lists and resources which are pushed to the local nexus repository are available
- The following ports are not occupied by another service: 80, 8081, 8082, 10001
- There's no docker container called "nexus"
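
These prerequisites can be checked quickly, for example (a sketch using standard tools):

::

    # ports 80, 8081, 8082 and 10001 must be free
    ss -tlnp | grep -E ':(80|8081|8082|10001)\s' || echo "required ports are free"
    # no container named "nexus" may exist
    docker ps -a --format '{{.Names}}' | grep -w nexus || echo "no nexus container found"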

**Note: In case you skipped Part 2 for the artifacts download, please ensure that a copy of the resources data is untarred in ./install/onap-offline/resources/**

The whole Nexus blob data tarball will be created by running the script build\_nexus\_blob.sh. It will load the listed docker images, run Nexus and configure it as an npm and docker repository. Then it will push all listed npm packages and docker images to the repositories. After everything is done, the repository container is stopped and a tarball is created from the nexus-data directory.

There are mandatory parameters which need to be set in the configuration file:

+------------------------------+--------------------------------------------------------------------------------------------+
| Parameter                    | Description                                                                                |
+==============================+============================================================================================+
| NXS\_SRC\_DOCKER\_IMG\_DIR   | resource directory of docker images                                                        |
+------------------------------+--------------------------------------------------------------------------------------------+
| NXS\_SRC\_NPM\_DIR           | resource directory of npm packages                                                         |
+------------------------------+--------------------------------------------------------------------------------------------+
| NXS\_SRC\_PYPI\_DIR          | resource directory of pypi packages                                                        |
+------------------------------+--------------------------------------------------------------------------------------------+
| NXS\_DOCKER\_IMG\_LIST       | list of docker images to be pushed to Nexus repository                                     |
+------------------------------+--------------------------------------------------------------------------------------------+
| NXS\_DOCKER\_WO\_LIST        | list of docker images which use the default repository                                     |
+------------------------------+--------------------------------------------------------------------------------------------+
| NXS\_NPM\_LIST               | list of npm packages to be published to Nexus repository                                   |
+------------------------------+--------------------------------------------------------------------------------------------+
| NXS\_PYPI\_LIST              | list of pypi packages to be published to Nexus repository                                  |
+------------------------------+--------------------------------------------------------------------------------------------+
| NEXUS\_DATA\_TAR             | target tarball of Nexus data path/name                                                     |
+------------------------------+--------------------------------------------------------------------------------------------+
| NEXUS\_DATA\_DIR             | directory used for the Nexus blob build                                                    |
+------------------------------+--------------------------------------------------------------------------------------------+
| NEXUS\_IMAGE                 | Sonatype/Nexus3 docker image which will be used for data blob creation for this script     |
+------------------------------+--------------------------------------------------------------------------------------------+

Some of the docker images using the default registry require special treatment (e.g. they use different ports or an SSL connection); therefore there is the list NXS\_DOCKER\_WO\_LIST, based on which the images are retagged so that they can be pushed to our Nexus repository.
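
For illustration only, the retagging boils down to something like the following; the local Nexus host is a placeholder and the port is an assumption based on the prerequisites above:

::

    # retag an image from a non-default registry so it can be pushed to the local Nexus docker repository
    docker tag some.registry.example:5000/org/image:1.0 <local-nexus-host>:10001/org/image:1.0
    docker push <local-nexus-host>:10001/org/image:1.0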

**Note: It's recommended to use absolute paths in the configuration file for the current script**

Example of the configuration file:

::

    NXS_SRC_DOCKER_IMG_DIR="/tmp/onap-offline/resources/offline_data/docker_images_for_nexus"
    NXS_SRC_NPM_DIR="/tmp/onap-offline/resources/offline_data/npm_tar"
    NXS_SRC_PYPI_DIR="/tmp/onap-offline/resources/offline_data/pypi"
    NXS_DOCKER_IMG_LIST="/tmp/onap-me-data_lists/docker_img.list"
    NXS_DOCKER_WO_LIST="/tmp/onap-me-data_lists/docker_no_registry.list"
    NXS_NPM_LIST="/tmp/onap-offline/bash/tools/data_list/onap_3.0.0-npm.list"
    NEXUS_DATA_TAR="/root/nexus_data.tar"
    NEXUS_DATA_DIR="/tmp/onap-offline/resources/nexus_data"
    NEXUS_IMAGE="/tmp/onap-offline/resources/offline_data/docker_images_infra/sonatype_nexus3_latest.tar"

Once everything is ready you can run the script as in the following example:

``$ ./install/onap-offline/build_nexus_blob.sh /root/nexus_build.conf``

Where nexus\_build.conf is the configuration file and /root/nexus\_data.tar is the destination tarball.

**Note: Move, link or mount the NEXUS\_DATA\_DIR to the resources directory if a different directory was specified in the configuration, or use the resulting nexus\_data.tar for movement between machines.**
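
A minimal sketch of the two options from the note, assuming the blob was built in a custom directory /data/nexus_data:

::

    # option 1: link (or bind-mount) the custom directory into the resources directory
    ln -s /data/nexus_data /tmp/onap-offline/resources/nexus_data

    # option 2: copy the resulting tarball to another machine
    scp /root/nexus_data.tar <target-host>:/tmp/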

Once the Nexus data blob is created, the docker images and npm packages can be deleted to reduce the package size, as they won't be needed at installation time:

E.g.

::

    rm -f /tmp/onap-offline/resources/offline_data/docker_images_for_nexus/*
    rm -rf /tmp/onap-offline/resources/offline_data/npm_tar

Part 4. Application helm charts preparation and patching
---------------------------------------------------------

This step clones the oom repository and patches it so that it can be used offline. Use the following command:

./build/fetch\_and\_patch\_charts.sh <helm charts repo> <commit/tag/branch> <patchfile> <target\_dir>

For example:

``$ ./build/fetch_and_patch_charts.sh https://gerrit.onap.org/r/oom 3.0.0-ONAP /tmp/offline-installer/patches/casablanca_3.0.0.patch /tmp/oom-clone``
Tomáš Levora418db4d2019-01-30 13:17:50 +0100300
301Part 5. Creating offline installation package
302---------------------------------------------
303
304For the packagin itself it's necessary to prepare configuration. You can
Samuli Silviusf3eee9e2019-02-10 13:24:03 +0200305use ./build/package.conf as template or
Tomáš Levora418db4d2019-01-30 13:17:50 +0100306directly modify it.
307
Samuli Silvius426e6c02019-02-06 11:25:01 +0200308There are some parameters needs to be set in configuration file.
Samuli Silviusf3eee9e2019-02-10 13:24:03 +0200309Example values below are setup according to steps done in this guide to package ONAP.
Tomáš Levora418db4d2019-01-30 13:17:50 +0100310
+---------------------------------------+------------------------------------------------------------------------------+
| Parameter                             | Description                                                                  |
+=======================================+==============================================================================+
| HELM\_CHARTS\_DIR                     | directory with Helm charts for the application                              |
|                                       |                                                                              |
|                                       | Example: /tmp/oom-clone/kubernetes                                          |
+---------------------------------------+------------------------------------------------------------------------------+
| APP\_CONFIGURATION                    | application install configuration (application_configuration.yml) for       |
|                                       | ansible installer and custom ansible role code directories if any.          |
|                                       |                                                                              |
|                                       | Example::                                                                    |
|                                       |                                                                              |
|                                       |  APP_CONFIGURATION=(                                                         |
|                                       |   /tmp/offline-installer/config/application_configuration.yml               |
|                                       |   /tmp/offline-installer/patches/onap-casablanca-patch-role                 |
|                                       |  )                                                                           |
+---------------------------------------+------------------------------------------------------------------------------+
| APP\_BINARY\_RESOURCES\_DIR           | directory with all (binary) resources for offline infra and application     |
|                                       |                                                                              |
|                                       | Example: /tmp/onap-offline/resources                                        |
+---------------------------------------+------------------------------------------------------------------------------+
| APP\_AUX\_BINARIES                    | additional binaries such as docker images loaded during runtime [optional]  |
+---------------------------------------+------------------------------------------------------------------------------+

Offline installer packages are created with prepopulated data via the following command run from the offline-installer directory:

./build/package.sh <project> <version> <packaging target directory>

E.g.

``$ ./build/package.sh onap 1.0.1 /tmp/package``

In the target directory you should then find the following tar files:

offline-<PROJECT\_NAME>-<PROJECT\_VERSION>-sw.tar

offline-<PROJECT\_NAME>-<PROJECT\_VERSION>-resources.tar

offline-<PROJECT\_NAME>-<PROJECT\_VERSION>-aux-resources.tar
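
A quick sanity check of the produced artifacts for the example invocation above:

::

    $ ls -1 /tmp/package/offline-onap-1.0.1-*.tar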