.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

.. _troubleshooting:

Troubleshooting
===============

Deployment/Installation errors
------------------------------

**Missing required parameters**

::

    Unexpected error when parsing command line arguments
    usage: java org.onap.dcae.collectors.veshv.main.MainKt
    Required parameters: p, c
     -c,--config-url <arg>              URL of ves configuration on consul
     -d,--first-request-delay <arg>     Delay of first request to consul in
                                        seconds
     -H,--health-check-api-port <arg>   Health check rest api listen port
     -I,--request-interval <arg>        Interval of consul configuration
                                        requests in seconds
     -i,--idle-timeout-sec <arg>        Idle timeout for remote hosts. After
                                        given time without any data exchange the
                                        connection might be closed.
     -k,--key-store <arg>               Key store in PKCS12 format
     -kp,--key-store-password <arg>     Key store password
     -l,--ssl-disable                   Disable SSL encryption
     -m,--max-payload-size <arg>        Maximum supported payload size in bytes
     -p,--listen-port <arg>             Listen port
     -t,--trust-store <arg>             File with trusted certificate bundle in
                                        PKCS12 format
     -tp,--trust-store-password <arg>   Trust store password
     -u,--dummy                         If present will start in dummy mode
                                        (dummy external services)
    All parameters can be specified as environment variables using
    upper-snake-case full name with prefix `VESHV_`.

This log message is printed when you do not specify the required parameters (via command line or environment variables).
As described in the above log message, there are a few required parameters:
**listen port**, **config url**, and either both **key store password** and **trust store password** (when using SSL) or **ssl disable** (when not).

To get rid of this error, specify the required parameters. For example:

- Via command line:

  ::

      <hv-ves run command> --listen-port 6061 --config-url http://consul-url/key-path --key-store-password password --trust-store-password password

- By defining environment variables:

  ::

      export VESHV_LISTEN_PORT=6061
      export VESHV_CONFIG_URL=http://consul-url/key-path
      export VESHV_KEY_STORE_PASSWORD=password
      export VESHV_TRUST_STORE_PASSWORD=password
**NOTE**

Command line arguments have priority over environment variables. If you configure a parameter in both ways, **HV-VES** applies the one from the command line.

For more information about **HV-VES** configuration parameters, see :ref:`deployment`.
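
The precedence rule above can be illustrated with a minimal sketch. This is not HV-VES code; ``resolve_param`` is a hypothetical helper that mimics the documented behavior (command line wins, ``VESHV_``-prefixed environment variable is the fallback):

```python
import os

# Hypothetical helper illustrating the documented precedence: a value
# given on the command line wins over the VESHV_-prefixed environment
# variable; the environment is only a fallback.
def resolve_param(name, cli_args):
    """Resolve a parameter like 'listen-port' from CLI args or environment."""
    if name in cli_args:                        # command line has priority
        return cli_args[name]
    env_name = "VESHV_" + name.upper().replace("-", "_")
    return os.environ.get(env_name)             # fall back to environment

# Example: --listen-port on the command line overrides VESHV_LISTEN_PORT.
os.environ["VESHV_LISTEN_PORT"] = "6061"
print(resolve_param("listen-port", {"listen-port": "7000"}))  # -> 7000
print(resolve_param("listen-port", {}))                       # -> 6061
```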

Configuration errors
--------------------

**Consul service not responding**

::

    ap.dcae.collectors.veshv.impl.adapters.HttpAdapter | 2018-10-16T13:13:01.155Z | ERROR | Failed to get resource on path: http://localhost:8500/v1/hv/veshv-config (Connection refused: localhost/127.0.0.1:8500) | | reactor-http-client-epoll-8
    ap.dcae.collectors.veshv.impl.adapters.HttpAdapter | 2018-10-16T13:13:01.155Z | DEBUG | Nested exception: | java.net.ConnectException: Connection refused
            ... 10 common frames omitted
    Wrapped by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:8500
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
            at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327)
            at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
            at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:616)
            at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:563)
            at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
            at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
            at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
            at java.lang.Thread.run(Thread.java:748)
     | reactor-http-client-epoll-8
    rs.veshv.impl.adapters.ConsulConfigurationProvider | 2018-10-16T13:13:01.163Z | WARN | Could not get fresh configuration | java.net.ConnectException: Connection refused
            ... 10 common frames omitted
    Wrapped by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:8500
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
            at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327)
            at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
            at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:616)
            at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:563)
            at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
            at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
            at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
            at java.lang.Thread.run(Thread.java:748)
     | reactor-http-client-epoll-8


The above three logs indicate that **HV-VES** cannot connect to the Consul service under the URL given in the **--config-url** parameter.
Make sure Consul is up and running and that the **ip + port** combination is correct.

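To quickly confirm whether the Consul address is reachable at all before digging into the logs, a simple TCP probe can help. This is a generic diagnostic sketch (the host and port below are only examples taken from the log above), not part of HV-VES:

```python
import socket

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the Consul address configured via --config-url.
print(is_reachable("localhost", 8500))  # False if Consul is not listening
```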
====

**Missing configuration on Consul**

::

    ap.dcae.collectors.veshv.impl.adapters.HttpAdapter | 2018-10-16T13:52:20.585Z | ERROR | Failed to get resource on path: http://consul:8500/v1/kv/veshv-config (HTTP request failed with code: 404.
    Failing URI: /v1/kv/veshv-config) | | reactor-http-nio-1
    ap.dcae.collectors.veshv.impl.adapters.HttpAdapter | 2018-10-16T13:52:20.586Z | DEBUG | Nested exception: | reactor.ipc.netty.http.client.HttpClientException: HTTP request failed with code: 404.
    Failing URI: /v1/kv/veshv-config
     | reactor-http-nio-1
    rs.veshv.impl.adapters.ConsulConfigurationProvider | 2018-10-16T13:52:20.591Z | WARN | Could not get fresh configuration | reactor.ipc.netty.http.client.HttpClientException: HTTP request failed with code: 404.
    Failing URI: /v1/kv/veshv-config
     | reactor-http-nio-1

**HV-VES** logs this information when it is connected to Consul but cannot find any JSON configuration under the given key, which in this case is **veshv-config**.
For more information, see :ref:`run_time_configuration`.

====

**Invalid configuration format**

::

    rs.veshv.impl.adapters.ConsulConfigurationProvider | 2018-10-16T14:06:14.792Z | INFO | Obtained new configuration from consul:
    {
        "invalidKey": "value"
    } | | reactor-http-nio-1
    rs.veshv.impl.adapters.ConsulConfigurationProvider | 2018-10-16T14:06:14.796Z | WARN | Could not get fresh configuration | java.lang.NullPointerException: null
            at org.glassfish.json.JsonObjectBuilderImpl$JsonObjectImpl.getString(JsonObjectBuilderImpl.java:257)
            at org.onap.dcae.collectors.veshv.impl.adapters.ConsulConfigurationProvider.createCollectorConfiguration(ConsulConfigurationProvider.kt:113)
            at org.onap.dcae.collectors.veshv.impl.adapters.ConsulConfigurationProvider.access$createCollectorConfiguration(ConsulConfigurationProvider.kt:44)
            at org.onap.dcae.collectors.veshv.impl.adapters.ConsulConfigurationProvider$invoke$6.invoke(ConsulConfigurationProvider.kt:80)
            at org.onap.dcae.collectors.veshv.impl.adapters.ConsulConfigurationProvider$invoke$6.invoke(ConsulConfigurationProvider.kt:44)
            at org.onap.dcae.collectors.veshv.impl.adapters.ConsulConfigurationProvider$sam$java_util_function_Function$0.apply(ConsulConfigurationProvider.kt)
            at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:100)
            at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108)
            at reactor.core.publisher.FluxFlatMap$FlatMapMain.tryEmitScalar(FluxFlatMap.java:432)
            at reactor.core.publisher.FluxFlatMap$FlatMapMain.onNext(FluxFlatMap.java:366)
            at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108)
            at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108)
            at reactor.core.publisher.FluxFlatMap$FlatMapMain.tryEmit(FluxFlatMap.java:484)
            at reactor.core.publisher.FluxFlatMap$FlatMapInner.onNext(FluxFlatMap.java:916)
            at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:115)
            at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1083)
            at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:241)
            at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:76)
            at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108)
            at reactor.core.publisher.FluxFilter$FilterSubscriber.onNext(FluxFilter.java:97)
            at reactor.ipc.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:213)
            at reactor.ipc.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:329)
            at reactor.ipc.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:311)
            at reactor.ipc.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:605)
            at reactor.ipc.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:138)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
            at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
            at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
            at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
            at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
            at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
            at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
            at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
            at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
            at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
            at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:628)
            at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:563)
            at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
            at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
            at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
            at java.lang.Thread.run(Thread.java:748)
     | reactor-http-nio-1

This log is printed when you upload a configuration in an invalid format (for example, with missing fields). In the first log you can see that the configuration stored in Consul is:

.. code-block:: json

    {
        "invalidKey": "value"
    }

The above is not a valid **HV-VES** configuration, therefore **HV-VES** does not apply it and becomes **unhealthy**.
For more information on **Consul configuration**, see :ref:`run_time_configuration`.

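A configuration can be sanity-checked before uploading it to Consul. The sketch below only verifies that the top-level keys used in the sample configurations in this guide are present; treat the exact required-key set as an assumption, not an authoritative schema:

```python
import json

# Required top-level keys, taken from the sample configuration shown in
# this guide (an assumption, not an official schema).
REQUIRED_KEYS = {"dmaap.kafkaBootstrapServers", "collector.routing"}

def validate_config(raw_json):
    """Return a sorted list of missing required keys (empty means OK)."""
    config = json.loads(raw_json)
    return sorted(REQUIRED_KEYS - config.keys())

print(validate_config('{"invalidKey": "value"}'))
# -> ['collector.routing', 'dmaap.kafkaBootstrapServers']
```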

Message handling errors
-----------------------

**Handling messages when an invalid Kafka URL is specified**

::

     | reactor-tcp-server-epoll-6
    org.apache.kafka.clients.ClientUtils | 2018-10-19T08:29:36.917Z | WARN | Removing server invalid-kafka:9093 from bootstrap.servers as DNS resolution failed for invalid-kafka | | reactor-tcp-server-epoll-6
    org.apache.kafka.clients.producer.KafkaProducer | 2018-10-19T08:29:36.918Z | INFO | [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms. | | reactor-tcp-server-epoll-6
    org.onap.dcae.collectors.veshv.impl.VesHvCollector | 2018-10-19T08:29:36.962Z | WARN | Error while handling message stream: org.apache.kafka.common.KafkaException (Failed to construct kafka producer) | | reactor-tcp-server-epoll-6
    org.onap.dcae.collectors.veshv.impl.VesHvCollector | 2018-10-19T08:29:36.966Z | DEBUG | Detailed stack trace | org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
            at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:64)
            at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:396)
            ... 24 common frames omitted
    Wrapped by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
            at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:441)
            at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:285)
            at reactor.kafka.sender.internals.ProducerFactory.createProducer(ProducerFactory.java:33)
            at reactor.kafka.sender.internals.DefaultKafkaSender.lambda$new$0(DefaultKafkaSender.java:90)
            at reactor.core.publisher.MonoCallable.subscribe(MonoCallable.java:57)
            at reactor.core.publisher.MonoPeekFuseable.subscribe(MonoPeekFuseable.java:74)
            at reactor.core.publisher.Mono.subscribe(Mono.java:3088)
            at reactor.core.publisher.MonoProcessor.add(MonoProcessor.java:531)
            at reactor.core.publisher.MonoProcessor.subscribe(MonoProcessor.java:444)
            at reactor.core.publisher.MonoFlatMapMany.subscribe(MonoFlatMapMany.java:49)
            at reactor.core.publisher.FluxPeek.subscribe(FluxPeek.java:80)
            at reactor.core.publisher.FluxFilter.subscribe(FluxFilter.java:52)
            at reactor.core.publisher.FluxMap.subscribe(FluxMap.java:62)
            at reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:55)
            at reactor.core.publisher.FluxPeek.subscribe(FluxPeek.java:83)
            at reactor.core.publisher.FluxDoFinally.subscribe(FluxDoFinally.java:73)
            at reactor.core.publisher.FluxOnErrorResume.subscribe(FluxOnErrorResume.java:47)
            at reactor.core.publisher.MonoIgnoreElements.subscribe(MonoIgnoreElements.java:37)
            at reactor.ipc.netty.channel.ChannelOperations.applyHandler(ChannelOperations.java:381)
            at reactor.ipc.netty.channel.ChannelOperations.onHandlerStart(ChannelOperations.java:296)
            at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
            at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
            at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:315)
            at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
            at java.lang.Thread.run(Thread.java:748)
     | reactor-tcp-server-epoll-6
    p.dcae.collectors.veshv.impl.socket.NettyTcpServer | 2018-10-19T08:29:36.971Z | INFO | Connection from /172.26.0.6:55574 has been closed | | reactor-tcp-server-epoll-6

**HV-VES** responds with the above when it handles a message and the currently applied configuration defines an invalid Kafka bootstrap server.
In this case, the configuration read from Consul is:

.. code-block:: json

    {
        "dmaap.kafkaBootstrapServers": "invalid-kafka:9093",
        "collector.routing": [
            {
                "fromDomain": "perf3gpp",
                "toTopic": "HV_VES_PERF3GPP"
            },
            {
                "fromDomain": "heartbeat",
                "toTopic": "HV_VES_HEARTBEAT"
            }
        ]
    }

where **invalid-kafka:9093** is not an existing **ip + port** combination.

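The "DNS resolution failed for invalid-kafka" warning in the log above can be reproduced with a plain hostname lookup. This diagnostic sketch is independent of HV-VES; the hostnames are examples:

```python
import socket

def resolves(hostname):
    """Return True if the hostname can be resolved via DNS or the hosts file."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# "localhost" should always resolve; a made-up broker name like the
# "invalid-kafka" host from the configuration above typically will not.
print(resolves("localhost"))
```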
====

**First creation of topics on Kafka**


On the first attempt to create and publish to a given Kafka topic, **HV-VES** logs the following warnings but creates the requested topics anyway.

::

    org.apache.kafka.clients.NetworkClient | 2018-10-22T10:11:53.396Z | WARN | [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {HV_VES_PERF3GPP=LEADER_NOT_AVAILABLE} | | kafka-producer-network-thread | producer-1
    org.apache.kafka.clients.NetworkClient | 2018-10-22T10:11:53.524Z | WARN | [Producer clientId=producer-1] Error while fetching metadata with correlation id 3 : {HV_VES_PERF3GPP=LEADER_NOT_AVAILABLE} | | kafka-producer-network-thread | producer-1
    org.apache.kafka.clients.NetworkClient | 2018-10-22T10:11:53.636Z | WARN | [Producer clientId=producer-1] Error while fetching metadata with correlation id 4 : {HV_VES_PERF3GPP=LEADER_NOT_AVAILABLE} | | kafka-producer-network-thread | producer-1
    org.apache.kafka.clients.NetworkClient | 2018-10-22T10:11:53.749Z | WARN | [Producer clientId=producer-1] Error while fetching metadata with correlation id 5 : {HV_VES_PERF3GPP=LEADER_NOT_AVAILABLE} | | kafka-producer-network-thread | producer-1

====

**Kafka service became unavailable after the producer for a given topic was successfully created**


After receiving a **VES Common Event**, **HV-VES** creates a producer for the given topic and keeps it for the whole lifetime of the application.
If the Kafka service becomes unreachable after the producer creation, you will see the following logs when trying to establish another connection with the Kafka server.

::

    org.apache.kafka.clients.NetworkClient | 2018-10-22T10:04:08.604Z | WARN | [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available. | | kafka-producer-network-thread | producer-1
    org.apache.kafka.clients.NetworkClient | 2018-10-22T10:04:11.896Z | WARN | [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available. | | kafka-producer-network-thread | producer-1
    org.apache.kafka.clients.NetworkClient | 2018-10-22T10:04:14.968Z | WARN | [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available. | | kafka-producer-network-thread | producer-1
    org.apache.kafka.clients.NetworkClient | 2018-10-22T10:04:18.040Z | WARN | [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available. | | kafka-producer-network-thread | producer-1
    org.apache.kafka.clients.NetworkClient | 2018-10-22T10:04:21.111Z | WARN | [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available. | | kafka-producer-network-thread | producer-1
    reactor.kafka.sender.internals.DefaultKafkaSender | 2018-10-22T10:04:23.519Z | ERROR | error {} | org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for HV_VES_PERF3GPP-0: 30050 ms has passed since batch creation plus linger time
     | kafka-producer-network-thread | producer-1
    cae.collectors.veshv.impl.adapters.kafka.KafkaSink | 2018-10-22T10:04:23.522Z | WARN | Failed to send message to Kafka | org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for HV_VES_PERF3GPP-0: 30050 ms has passed since batch creation plus linger time
     | single-1
    org.onap.dcae.collectors.veshv.impl.VesHvCollector | 2018-10-22T10:04:23.528Z | WARN | Error while handling message stream: org.apache.kafka.common.errors.TimeoutException (Expiring 1 record(s) for HV_VES_PERF3GPP-0: 30050 ms has passed since batch creation plus linger time) | | single-1

To resolve this issue, a restart of **HV-VES** is required.

====

**Message with too large a payload**

::

    g.onap.dcae.collectors.veshv.impl.VesHvCollector | 2018-10-19T08:53:18.349Z | WARN | Error while handling message stream: org.onap.dcae.collectors.veshv.impl.wire.WireFrameException (PayloadSizeExceeded: payload size exceeds the limit (1048576 bytes)) | | single-1
    org.onap.dcae.collectors.veshv.impl.VesHvCollector | 2018-10-19T08:53:18.349Z | DEBUG | Detailed stack trace | org.onap.dcae.collectors.veshv.impl.wire.WireFrameException: PayloadSizeExceeded: payload size exceeds the limit (1048576 bytes)
            at org.onap.dcae.collectors.veshv.impl.wire.WireChunkDecoder$onError$1$1.invoke(WireChunkDecoder.kt:67)
            at org.onap.dcae.collectors.veshv.impl.wire.WireChunkDecoder$onError$1$1.invoke(WireChunkDecoder.kt:38)
            at arrow.effects.IO$Companion$invoke$1.invoke(IO.kt:28)
            at arrow.effects.IO$Companion$invoke$1.invoke(IO.kt:22)
            at arrow.effects.IORunLoop.step(IORunLoop.kt:50)
            at arrow.effects.IO.unsafeRunTimed(IO.kt:109)
            at arrow.effects.IO.unsafeRunSync(IO.kt:106)
            at org.onap.dcae.collectors.veshv.impl.wire.WireChunkDecoder$generateFrames$1.accept(WireChunkDecoder.kt:61)
            at org.onap.dcae.collectors.veshv.impl.wire.WireChunkDecoder$generateFrames$1.accept(WireChunkDecoder.kt:38)
            at reactor.core.publisher.FluxGenerate.lambda$new$1(FluxGenerate.java:56)
            at reactor.core.publisher.FluxGenerate$GenerateSubscription.slowPath(FluxGenerate.java:262)
            at reactor.core.publisher.FluxGenerate$GenerateSubscription.request(FluxGenerate.java:204)
            at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.request(FluxPeekFuseable.java:138)
            at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:1454)
            at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:1328)
            at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onSubscribe(FluxPeekFuseable.java:172)
            at reactor.core.publisher.FluxGenerate.subscribe(FluxGenerate.java:83)
            at reactor.core.publisher.FluxPeekFuseable.subscribe(FluxPeekFuseable.java:86)
            at reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:55)
            at reactor.core.publisher.Flux.subscribe(Flux.java:6877)
            at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.drain(FluxConcatMap.java:418)
            at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onNext(FluxConcatMap.java:241)
            at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:185)
            at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:185)
            at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108)
            at reactor.ipc.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:213)
            at reactor.ipc.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:329)
            at reactor.ipc.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:311)
            at reactor.ipc.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:138)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
            at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
            at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
            at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
            at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
            at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
            at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808)
            at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410)
            at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310)
            at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
            at java.lang.Thread.run(Thread.java:748)
     | single-1
    p.dcae.collectors.veshv.impl.socket.NettyTcpServer | 2018-10-19T08:53:18.351Z | INFO | Connection from /172.26.0.6:56924 has been closed | | single-1


The above log is printed when the message payload size is too large. **HV-VES** does not handle messages that exceed the specified payload size. The default limit is **1048576 bytes (1 MiB)**, but it can be configured via the command line or environment variables.

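A client can avoid triggering this error by checking the serialized payload size before sending. The sketch below uses the documented 1 MiB default; in practice the effective limit is whatever **--max-payload-size** was set to on the collector side:

```python
# Client-side sketch: check an event payload against the collector's
# maximum payload size before sending, so the connection is not dropped.
DEFAULT_MAX_PAYLOAD_SIZE = 1048576  # bytes, 1 MiB (documented default)

def payload_fits(payload, limit=DEFAULT_MAX_PAYLOAD_SIZE):
    """Return True if the serialized payload is within the size limit."""
    return len(payload) <= limit

print(payload_fits(b"x" * 1024))            # small payload -> True
print(payload_fits(b"x" * (1048576 + 1)))   # one byte over the limit -> False
```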
====

**Other invalid messages**


Messages with an **invalid wire frame** or **invalid GPB** data are omitted and **HV-VES** logs only connection-related information, as follows:

::

    p.dcae.collectors.veshv.impl.socket.NettyTcpServer | 2018-10-19T09:03:03.345Z | INFO | Handling connection from /172.26.0.6:57432 | | reactor-tcp-server-epoll-5
    p.dcae.collectors.veshv.impl.socket.NettyTcpServer | 2018-10-19T09:04:03.351Z | INFO | Idle timeout of 60 s reached. Closing connection from /172.26.0.6:57432... | | reactor-tcp-server-epoll-5
    p.dcae.collectors.veshv.impl.socket.NettyTcpServer | 2018-10-19T09:04:03.353Z | INFO | Connection from /172.26.0.6:57432 has been closed | | reactor-tcp-server-epoll-5
    p.dcae.collectors.veshv.impl.socket.NettyTcpServer | 2018-10-19T09:04:03.354Z | DEBUG | Channel (/172.26.0.6:57432) closed successfully. | | reactor-tcp-server-epoll-5


For more information, see the :ref:`hv_ves_behaviors` section.

Authorization related errors
----------------------------

**WARNING: SSL/TLS authorization is a part of an experimental feature for the ONAP Casablanca release and should be treated as unstable and subject to change in future releases.**
**For more information, see** :ref:`authorization`.

**Key or trust store missing**

::

    org.onap.dcae.collectors.veshv.main | 2018-10-22T06:51:54.191Z | ERROR | Failed to start a server | java.io.FileNotFoundException: /etc/ves-hv/server.p12 (No such file or directory)
            at java.io.FileInputStream.open0(Native Method)
            at java.io.FileInputStream.open(FileInputStream.java:195)
            at java.io.FileInputStream.<init>(FileInputStream.java:138)
            at org.onap.dcae.collectors.veshv.ssl.boundary.UtilsKt$streamFromFile$1.invoke(utils.kt:79)
            at org.onap.dcae.collectors.veshv.ssl.boundary.UtilsKt$streamFromFile$1.invoke(utils.kt)
            at org.onap.dcae.collectors.veshv.ssl.impl.SslFactories.loadKeyStoreFromFile(SslFactories.kt:50)
            at org.onap.dcae.collectors.veshv.ssl.impl.SslFactories.keyManagerFactory(SslFactories.kt:43)
            at org.onap.dcae.collectors.veshv.ssl.boundary.ServerSslContextFactory.jdkContext(ServerSslContextFactory.kt:42)
            at org.onap.dcae.collectors.veshv.ssl.boundary.SslContextFactory.createSslContextWithConfiguredCerts(SslContextFactory.kt:49)
            at org.onap.dcae.collectors.veshv.ssl.boundary.SslContextFactory.createSslContext(SslContextFactory.kt:39)
            at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer.configureServer(NettyTcpServer.kt:61)
            at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer.access$configureServer(NettyTcpServer.kt:46)
            at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1$ctx$1.invoke(NettyTcpServer.kt:52)
            at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1$ctx$1.invoke(NettyTcpServer.kt:46)
            at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$sam$java_util_function_Consumer$0.accept(NettyTcpServer.kt)
            at reactor.ipc.netty.tcp.TcpServer.<init>(TcpServer.java:149)
            at reactor.ipc.netty.tcp.TcpServer$Builder.build(TcpServer.java:278)
            at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1.invoke(NettyTcpServer.kt:53)
            at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1.invoke(NettyTcpServer.kt:46)
            at arrow.effects.IO$Companion$invoke$1.invoke(IO.kt:28)
            at arrow.effects.IO$Companion$invoke$1.invoke(IO.kt:22)
            at arrow.effects.IORunLoop.step(IORunLoop.kt:50)
            at arrow.effects.IO.unsafeRunTimed(IO.kt:109)
            at arrow.effects.IO.unsafeRunSync(IO.kt:106)
            at org.onap.dcae.collectors.veshv.utils.arrow.EffectsKt.unsafeRunEitherSync(effects.kt:50)
            at org.onap.dcae.collectors.veshv.main.MainKt.main(main.kt:41)
     | main

The above error is logged when the key store is not provided. Similarly, when the trust store is missing, a **FileNotFoundException** for **/etc/ves-hv/trust.p12** is logged.
**server.p12** and **trust.p12** are the default names of the key and trust stores. They can be changed by specifying the **--key-store** or **--trust-store** command line arguments on deployment.

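A pre-flight check for the store files can fail fast with a clear message instead of the stack trace above. This is a deployment-side sketch, not HV-VES code; the default paths are the ones mentioned in this guide:

```python
import os

# Pre-flight sketch: verify the PKCS12 stores exist before starting the
# collector. Default paths are the ones mentioned in this guide.
def missing_stores(key_store="/etc/ves-hv/server.p12",
                   trust_store="/etc/ves-hv/trust.p12"):
    """Return the list of store paths that do not exist on disk."""
    return [path for path in (key_store, trust_store)
            if not os.path.isfile(path)]

print(missing_stores())  # on a host without the stores, prints both paths
```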
====

**Invalid credentials**

::

| 436 | org.onap.dcae.collectors.veshv.main | 2018-10-22T06:59:24.039Z | ERROR | Failed to start a server | java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption. |
| 437 | ... 23 common frames omitted |
| 438 | Wrapped by: java.io.IOException: keystore password was incorrect |
| 439 | at sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2059) |
| 440 | at java.security.KeyStore.load(KeyStore.java:1445) |
| 441 | at org.onap.dcae.collectors.veshv.ssl.impl.SslFactories.loadKeyStoreFromFile(SslFactories.kt:51) |
| 442 | at org.onap.dcae.collectors.veshv.ssl.impl.SslFactories.keyManagerFactory(SslFactories.kt:43) |
| 443 | at org.onap.dcae.collectors.veshv.ssl.boundary.ServerSslContextFactory.jdkContext(ServerSslContextFactory.kt:42) |
| 444 | at org.onap.dcae.collectors.veshv.ssl.boundary.SslContextFactory.createSslContextWithConfiguredCerts(SslContextFactory.kt:49) |
| 445 | at org.onap.dcae.collectors.veshv.ssl.boundary.SslContextFactory.createSslContext(SslContextFactory.kt:39) |
| 446 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer.configureServer(NettyTcpServer.kt:61) |
| 447 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer.access$configureServer(NettyTcpServer.kt:46) |
| 448 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1$ctx$1.invoke(NettyTcpServer.kt:52) |
| 449 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1$ctx$1.invoke(NettyTcpServer.kt:46) |
| 450 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$sam$java_util_function_Consumer$0.accept(NettyTcpServer.kt) |
| 451 | at reactor.ipc.netty.tcp.TcpServer.<init>(TcpServer.java:149) |
| 452 | at reactor.ipc.netty.tcp.TcpServer$Builder.build(TcpServer.java:278) |
| 453 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1.invoke(NettyTcpServer.kt:53) |
| 454 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1.invoke(NettyTcpServer.kt:46) |
| 455 | at arrow.effects.IO$Companion$invoke$1.invoke(IO.kt:28) |
| 456 | at arrow.effects.IO$Companion$invoke$1.invoke(IO.kt:22) |
| 457 | at arrow.effects.IORunLoop.step(IORunLoop.kt:50) |
| 458 | at arrow.effects.IO.unsafeRunTimed(IO.kt:109) |
| 459 | at arrow.effects.IO.unsafeRunSync(IO.kt:106) |
| 460 | at org.onap.dcae.collectors.veshv.utils.arrow.EffectsKt.unsafeRunEitherSync(effects.kt:50) |
| 461 | at org.onap.dcae.collectors.veshv.main.MainKt.main(main.kt:41) |
| 462 | | main |
| 463 | |
| 464 | |
| 465 | The key store or trust store password provided in the configuration is invalid.
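
This failure can be reproduced in isolation with plain JDK APIs, which is also a convenient way to verify a password against a **PKCS12** file before deploying it. The sketch below builds an empty in-memory store as a stand-in for **server.p12**; opening it with the wrong password fails in the same way as the HV-VES startup above.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.security.KeyStore;

public class KeyStorePasswordCheck {

    /** Creates an empty in-memory PKCS12 store protected by the given password. */
    static byte[] buildStore(char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, null);              // initialise an empty store
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, password);          // integrity MAC is derived from the password
        return out.toByteArray();
    }

    /** Returns true when the password opens the PKCS12 store, false otherwise. */
    static boolean passwordMatches(byte[] p12Bytes, char[] password) {
        try {
            KeyStore.getInstance("PKCS12")
                    .load(new ByteArrayInputStream(p12Bytes), password);
            return true;
        } catch (IOException | GeneralSecurityException e) {
            // A wrong password surfaces as IOException("keystore password was incorrect")
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] store = buildStore("secret".toCharArray());
        System.out.println("correct password accepted: "
                + passwordMatches(store, "secret".toCharArray()));
        System.out.println("wrong password rejected: "
                + !passwordMatches(store, "wrong".toCharArray()));
    }
}
```

To check a real deployment artifact, read the **.p12** file's bytes and pass them to `passwordMatches` with the password you intend to configure.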
| 466 | |
| 467 | ==== |
| 468 | |
| 469 | **Invalid key store file** |
| 470 | |
| 471 | :: |
| 472 | |
| 473 | org.onap.dcae.collectors.veshv.main | 2018-10-22T09:11:38.200Z | ERROR | Failed to start a server | java.io.IOException: DerInputStream.getLength(): lengthTag=111, too big. |
| 474 | at sun.security.util.DerInputStream.getLength(DerInputStream.java:599) |
| 475 | at sun.security.util.DerValue.init(DerValue.java:391) |
| 476 | at sun.security.util.DerValue.<init>(DerValue.java:332) |
| 477 | at sun.security.util.DerValue.<init>(DerValue.java:345) |
| 478 | at sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:1938) |
| 479 | at java.security.KeyStore.load(KeyStore.java:1445) |
| 480 | at org.onap.dcae.collectors.veshv.ssl.impl.SslFactories.loadKeyStoreFromFile(SslFactories.kt:51) |
| 481 | at org.onap.dcae.collectors.veshv.ssl.impl.SslFactories.keyManagerFactory(SslFactories.kt:43) |
| 482 | at org.onap.dcae.collectors.veshv.ssl.boundary.ServerSslContextFactory.jdkContext(ServerSslContextFactory.kt:42) |
| 483 | at org.onap.dcae.collectors.veshv.ssl.boundary.SslContextFactory.createSslContextWithConfiguredCerts(SslContextFactory.kt:49) |
| 484 | at org.onap.dcae.collectors.veshv.ssl.boundary.SslContextFactory.createSslContext(SslContextFactory.kt:39) |
| 485 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer.configureServer(NettyTcpServer.kt:61) |
| 486 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer.access$configureServer(NettyTcpServer.kt:46) |
| 487 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1$ctx$1.invoke(NettyTcpServer.kt:52) |
| 488 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1$ctx$1.invoke(NettyTcpServer.kt:46) |
| 489 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$sam$java_util_function_Consumer$0.accept(NettyTcpServer.kt) |
| 490 | at reactor.ipc.netty.tcp.TcpServer.<init>(TcpServer.java:149) |
| 491 | at reactor.ipc.netty.tcp.TcpServer$Builder.build(TcpServer.java:278) |
| 492 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1.invoke(NettyTcpServer.kt:53) |
| 493 | at org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer$start$1.invoke(NettyTcpServer.kt:46) |
| 494 | at arrow.effects.IO$Companion$invoke$1.invoke(IO.kt:28) |
| 495 | at arrow.effects.IO$Companion$invoke$1.invoke(IO.kt:22) |
| 496 | at arrow.effects.IORunLoop.step(IORunLoop.kt:50) |
| 497 | at arrow.effects.IO.unsafeRunTimed(IO.kt:109) |
| 498 | at arrow.effects.IO.unsafeRunSync(IO.kt:106) |
| 499 | at org.onap.dcae.collectors.veshv.utils.arrow.EffectsKt.unsafeRunEitherSync(effects.kt:50) |
| 500 | at org.onap.dcae.collectors.veshv.main.MainKt.main(main.kt:41) |
| 501 | | main |
| 502 | |
| 503 | The above error is logged when the provided key store has invalid or corrupted content.
| 504 | It also appears when the key store or trust store uses an archive format other than **PKCS12**, the only store type supported by **HV-VES**.
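
The same JDK call can be used to check up front that a file really is a PKCS12 archive. In the sketch below, a PEM header stands in for a certificate or JKS store mistakenly supplied as the key store; the PKCS12 loader rejects it with an `IOException` from DER parsing, matching the failure class in the stack trace above.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.KeyStore;

public class KeyStoreFormatCheck {

    /** Returns true when the bytes parse as a PKCS12 store with the given password. */
    static boolean isPkcs12(byte[] bytes, char[] password) {
        try {
            KeyStore.getInstance("PKCS12")
                    .load(new ByteArrayInputStream(bytes), password);
            return true;
        } catch (IOException | GeneralSecurityException e) {
            // Non-PKCS12 content fails DER parsing with errors like
            // "DerInputStream.getLength(): lengthTag=111, too big."
            return false;
        }
    }

    public static void main(String[] args) {
        // A PEM header: plausible content of a file copied in by mistake.
        byte[] pemBytes = "-----BEGIN CERTIFICATE-----"
                .getBytes(StandardCharsets.US_ASCII);
        System.out.println("PEM file accepted as PKCS12: "
                + isPkcs12(pemBytes, "password".toCharArray()));
    }
}
```

If an existing store is in another format (for example JKS), it can typically be converted to PKCS12 with standard JDK tooling before deployment.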
| 505 | |
| 506 | |