Failed Migration from 3.0.x to 3.5. Cassandra Configuration?

Hi guys, I'm trying to upgrade to the latest version of Upsource (3.5).

I took a backup of the current version and followed the standard upgrade procedure (I've done that many times before).


Partway through re-indexing, the frontend process stops working properly. The logs suggest that it's not able to talk to Cassandra, but I'm not sure what triggers the problem in the first place. This appears to be new to 3.5; I've never seen it before.

Here are some extracts from the relevant frontend logs:


[2016-10-20 11:11:36,484] ERROR lCluster executor #1 impl.DatabaseFlushExecutorImpl - Error writing to db 2-sonar-trunk
java.lang.RuntimeException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10040 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.jetbrains.upsource.database.datastax.WriteExecutor.execute(WriteExecutor.java:48)
at com.jetbrains.upsource.database.datastax.ThroughDbMediator.submitSync(ThroughDbMediator.java:58)
at com.jetbrains.upsource.database.datastax.buffering.CqlWriteBuffer.flush(CqlWriteBuffer.java:194)
at com.jetbrains.upsource.database.datastax.buffering.CqlBufferedQueue$StorageChunk.flush(CqlBufferedQueue.java:78)
at com.jetbrains.upsource.db.impl.AbstractBufferingWriter.flushChunk(AbstractBufferingWriter.java:143)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.a(DatabaseFlushExecutorImpl.java:253)
at __.db_2-sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.d(DatabaseFlushExecutorImpl.java:252)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.run(DatabaseFlushExecutorImpl.java:235)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10040 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.datastax.driver.core.WriteSessionWrapper.execute(WriteSessionWrapper.java:50)
at com.jetbrains.upsource.database.datastax.buffering.CqlWriteBuffer.lambda$null$19(CqlWriteBuffer.java:226)
at com.jetbrains.upsource.database.datastax.WriteExecutor.execute(WriteExecutor.java:31)
... 10 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10040 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:208)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:274)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.run(RequestHandler.java:429)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
[2016-10-20 11:11:36,492] ERROR CHANGELIST_CLUSTER-1 sonar-trunk eImpl$DriverWrapperWithLogging - Error while analyzing 'infer-changes' DEP-{RevisionId{sonar-trunk, branches/9240_1-75228} @2016-Oct-18 11:00:53}-cluster=VCS_CHANGELIST_CLUSTER direction OLD. java.lang.RuntimeException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10040 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
java.util.concurrent.CompletionException: java.lang.RuntimeException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10040 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
at com.jetbrains.upsource.database.datastax.CqlDatabase$1.flush(CqlDatabase.java:71)
at com.jetbrains.upsource.db.impl.StaticBufferingTable.a(StaticBufferingTable.java:67)
at com.jetbrains.upsource.stats.StatsAccumulator.a(StatsAccumulator.java:41)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:49)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:40)
at com.jetbrains.upsource.db.impl.StaticBufferingTable.putRow(StaticBufferingTable.java:66)
at com.jetbrains.upsource.backend.server.core.db.ProjectFilesTable.addValues(ProjectFilesTable.java:39)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.analyze(ImportProjectInferChanges.java:199)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.analyze(ImportProjectInferChanges.java:108)
at com.jetbrains.upsource.backend.cli.stages.driver.DriverWrapper.analyze(DriverWrapper.java:78)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:79)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:51)
at com.jetbrains.upsource.lifetimes.LifetimeImpl.runSync(LifetimeImpl.java:76)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.indexTasks(IndexerPipelineRunner.java:49)
at com.jetbrains.upsource.backend.cli.multi.revisions.DriverClusterPipelineImpl.a(DriverClusterPipelineImpl.java:112)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$2.run(MultiProjectTaskFactory.java:176)
at __.project_sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor.executeTask(MultiProjectTaskFactory.java:173)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$1.run(MultiProjectTaskFactory.java:154)
at com.jetbrains.upsource.backend.cli.multi.executor.ProjectSyncExecutor$ProjectTasks$1$1.run(ProjectSyncExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.jetbrains.upsource.util.NamedDaemonThreadFactory.a(NamedDaemonThreadFactory.java:34)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10040 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.jetbrains.upsource.database.datastax.WriteExecutor.execute(WriteExecutor.java:48)
at com.jetbrains.upsource.database.datastax.ThroughDbMediator.submitSync(ThroughDbMediator.java:58)
at com.jetbrains.upsource.database.datastax.buffering.CqlWriteBuffer.flush(CqlWriteBuffer.java:194)
at com.jetbrains.upsource.database.datastax.buffering.CqlBufferedQueue$StorageChunk.flush(CqlBufferedQueue.java:78)
at com.jetbrains.upsource.db.impl.AbstractBufferingWriter.flushChunk(AbstractBufferingWriter.java:143)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.a(DatabaseFlushExecutorImpl.java:253)
at __.db_2-sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.d(DatabaseFlushExecutorImpl.java:252)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.run(DatabaseFlushExecutorImpl.java:235)
... 1 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10040 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.datastax.driver.core.WriteSessionWrapper.execute(WriteSessionWrapper.java:50)
at com.jetbrains.upsource.database.datastax.buffering.CqlWriteBuffer.lambda$null$19(CqlWriteBuffer.java:226)
at com.jetbrains.upsource.database.datastax.WriteExecutor.execute(WriteExecutor.java:31)
... 10 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10040 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:208)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:274)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.run(RequestHandler.java:429)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more

And these errors just continue on.
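
(For reference, the "per-host connections" the driver message refers to are the DataStax Java driver's pooling options. Upsource builds the Cluster internally, so this is only a minimal sketch of the knob in question, assuming a driver 3.x API and using the contact point and port from the log above; as far as I know it isn't exposed as an Upsource setting.)

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;

public class DriverPoolSketch {
    public static Cluster build() {
        // Allow more connections per host and more in-flight requests per connection,
        // so buffered writes don't queue up waiting for a free connection.
        PoolingOptions pooling = new PoolingOptions()
                .setMaxConnectionsPerHost(HostDistance.LOCAL, 4)
                .setMaxRequestsPerConnection(HostDistance.LOCAL, 1024)
                .setPoolTimeoutMillis(10000); // how long to wait for a free connection before the timeout above fires

        return Cluster.builder()
                .addContactPoint("127.0.0.1") // contact point and port taken from the frontend log above
                .withPort(10040)
                .withPoolingOptions(pooling)
                .build();
    }
}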


Hi Vadim,

What does the Cassandra log say for the same period of time?


Nothing useful that I can see in the error logs:
[2016-10-20 11:01:05,720] ================================================================ (start)
[2016-10-20 11:01:06,557] [Apache Cassandra Error] 2016-10-20T11:01:06,554 [[APP-WRAPPER] Proxy 1] WARN o.a.cassandra.service.StartupChecks - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
[2016-10-20 11:01:06,678] [Apache Cassandra Error] 2016-10-20T11:01:06,673 [[APP-WRAPPER] Proxy 1] WARN o.a.c.config.DatabaseDescriptor - Only 59861 MB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
[2016-10-20 11:01:06,729] [Apache Cassandra Error] 2016-10-20T11:01:06,720 [[APP-WRAPPER] Proxy 1] WARN o.a.cassandra.service.StartupChecks - Directory C:\Upsource\data\cassandra\data\hints doesn't exist
[2016-10-20 11:01:08,397] [Apache Cassandra Error] 2016-10-20T11:01:08,394 [[APP-WRAPPER] Proxy 1] WARN o.apache.cassandra.db.SystemKeyspace - No host ID found, created a7a94cb8-428b-483d-821b-aa3f335b8756 (Note: This should happen exactly once per node).
[2016-10-20 11:33:30,750] ================================================================ (finish)
[2016-10-20 11:33:30,756] ================================================================ (start)
[2016-10-20 11:33:31,580] [Apache Cassandra Error] 2016-10-20T11:33:31,570 [[APP-WRAPPER] Proxy 1] WARN o.a.cassandra.service.StartupChecks - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
[2016-10-20 11:33:31,690] [Apache Cassandra Error] 2016-10-20T11:33:31,685 [[APP-WRAPPER] Proxy 1] WARN o.a.c.config.DatabaseDescriptor - Only 55147 MB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
[2016-10-20 11:33:40,498] ================================================================ (finish)

The following is in the warning.log:

2016-10-20T11:01:06,554 [[APP-WRAPPER] Proxy 1] WARN o.a.cassandra.service.StartupChecks - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
2016-10-20T11:01:06,673 [[APP-WRAPPER] Proxy 1] WARN o.a.c.config.DatabaseDescriptor - Only 59861 MB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
2016-10-20T11:01:06,720 [[APP-WRAPPER] Proxy 1] WARN o.a.cassandra.service.StartupChecks - Directory C:\Upsource\data\cassandra\data\hints doesn't exist
2016-10-20T11:01:08,394 [[APP-WRAPPER] Proxy 1] WARN o.apache.cassandra.db.SystemKeyspace - No host ID found, created a7a94cb8-428b-483d-821b-aa3f335b8756 (Note: This should happen exactly once per node).
2016-10-20T11:01:16,231 [SharedPool-Worker-2] WARN o.apache.cassandra.utils.FBUtilities - Trigger directory doesn't exist, please create it and try again.
2016-10-20T11:02:47,984 [PerDiskMemtableFlushWriter_0:2] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:1 (138763609 bytes)
2016-10-20T11:02:48,473 [PerDiskMemtableFlushWriter_0:2] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (147586340 bytes)
2016-10-20T11:03:11,940 [PerDiskMemtableFlushWriter_0:2] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:1 (111698907 bytes)
2016-10-20T11:03:13,724 [PerDiskMemtableFlushWriter_0:2] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (475621590 bytes)
2016-10-20T11:03:21,580 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:1 (143875008 bytes)
2016-10-20T11:03:32,339 [PerDiskMemtableFlushWriter_0:2] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:1 (138772487 bytes)
2016-10-20T11:03:34,153 [PerDiskMemtableFlushWriter_0:2] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (414716277 bytes)
2016-10-20T11:03:40,152 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (623208042 bytes)
2016-10-20T11:03:48,574 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:1 (144331093 bytes)
2016-10-20T11:04:06,039 [PerDiskMemtableFlushWriter_0:2] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:1 (386323353 bytes)
2016-10-20T11:04:07,169 [PerDiskMemtableFlushWriter_0:2] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (229651069 bytes)
2016-10-20T11:04:19,506 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (1037184176 bytes)
2016-10-20T11:04:35,312 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:1 (506316313 bytes)
2016-10-20T11:05:13,060 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (1266657042 bytes)
2016-10-20T11:05:18,090 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:1 (281042006 bytes)
2016-10-20T11:05:19,115 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (303561507 bytes)
2016-10-20T11:05:42,719 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:1 (683842071 bytes)
2016-10-20T11:05:47,201 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (550004141 bytes)
2016-10-20T11:06:21,451 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (540916659 bytes)
2016-10-20T11:06:29,508 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (1330435362 bytes)
2016-10-20T11:06:50,007 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:1 (687898549 bytes)
2016-10-20T11:06:50,344 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (523320873 bytes)
2016-10-20T11:07:12,187 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (551189599 bytes)
2016-10-20T11:07:49,706 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (514357140 bytes)
2016-10-20T11:08:02,154 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (1336721874 bytes)
2016-10-20T11:08:17,640 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (553004396 bytes)
2016-10-20T11:08:53,766 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (527415570 bytes)
2016-10-20T11:09:19,942 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (489292937 bytes)
2016-10-20T11:09:29,329 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (1339888069 bytes)
2016-10-20T11:09:37,657 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (514090430 bytes)
2016-10-20T11:10:19,727 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (472108744 bytes)
2016-10-20T11:10:47,214 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (514002651 bytes)
2016-10-20T11:10:56,027 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (1340361592 bytes)
2016-10-20T11:11:05,787 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (509006207 bytes)
2016-10-20T11:12:20,667 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (1344898212 bytes)
2016-10-20T11:33:31,570 [[APP-WRAPPER] Proxy 1] WARN o.a.cassandra.service.StartupChecks - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
2016-10-20T11:33:31,685 [[APP-WRAPPER] Proxy 1] WARN o.a.c.config.DatabaseDescriptor - Only 55147 MB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots

Overall I don't see anything in the Cassandra logs to indicate there's a problem.

I don't think there's any sensitive info in them. If it helps I can add the rest of them, but that probably won't help much?


Two things that might help us understand the nature of the issue:

1. Were there any Cassandra restarts during this period of time? (The launcher.log file should help here.)

2. What does info.log indicate around 2016-10-20 11:11?


Hi Artem,

1)

There definitely weren't any manual restarts, and it doesn't seem like there were any "automatic" ones either. In fact, I'm quite sure the Cassandra Java process keeps on running.

Relevant entries from launcher.log:

[2016-10-20 11:00:39,735] TRACE - root - [configure] ================================================================ (finish)
[2016-10-20 11:00:40,526] INFO - ains.launcher.run.AgentProcess - Upsource process finished
[2016-10-20 11:00:40,535] DEBUG - er.run.UpToDateLauncherContext - Upsource config files were changed, reinitializing...
[2016-10-20 11:00:42,578] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.pid
[2016-10-20 11:00:42,591] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.port
[2016-10-20 11:00:42,591] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.exit.flag
[2016-10-20 11:00:42,591] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.start.flag
[2016-10-20 11:00:42,592] DEBUG - rains.launcher.run.AgentRunner - Upsource has exited with code: 23 (RESTART)
[2016-10-20 11:00:42,593] INFO - rains.launcher.run.AgentRunner - Launcher is restarting Upsource process
[2016-10-20 11:00:42,594] DEBUG - ains.launcher.run.AgentProcess - Thu Oct 20 11:00:42 AEDT 2016 ==> Start launch
[2016-10-20 11:00:42,594] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.pid
[2016-10-20 11:00:42,594] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.port
[2016-10-20 11:00:42,594] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.exit.flag
[2016-10-20 11:00:42,596] DEBUG - tbrains.launcher.run.JavaAgent - Launching Upsource process with command: [C:\Upsource\internal\java\windows-amd64\jre\bin\java.exe, -Djl.service=Upsource, -Djl.home=C:\Upsource, -ea, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=logs, -XX:ErrorFile=logs\hs_err_pid%p.log, -Dfile.encoding=UTF-8, -XX:MaxMetaspaceSize=150m, -Xmx684m, -Djetbrains.ring.jul_to_slf4j.log.level=INFO, -jar, launcher\lib\app-wrapper\upsource-wrapper.jar, AppStarter, com.jetbrains.bundle.bootstrap.Bundle] (at path: "C:\Upsource", system properties: {launcher.app.home=C:\Upsource, launcher.app.conf.dir=C:\Upsource\conf, launcher.start.kind=1, java.awt.headless=true, launcher.app.name=Upsource})
[2016-10-20 11:33:22,287] TRACE - root - [stop] ================================================================ (start)
[2016-10-20 11:33:22,304] DEBUG - ains.launcher.ep.commands.Stop - [stop] JetLauncher process ID: 3628
[2016-10-20 11:33:22,305] DEBUG - ains.launcher.ep.commands.Stop - [stop] Using Java: <Upsource Home>\internal\java\windows-amd64\jre (version "1.8.0_101")
[2016-10-20 11:33:22,305] DEBUG - ains.launcher.ep.commands.Stop - [stop] Upsource home directory: C:\Upsource
[2016-10-20 11:33:22,305] DEBUG - ains.launcher.ep.commands.Stop - [stop] Command line: [stop]
[2016-10-20 11:33:22,305] DEBUG - ains.launcher.ep.commands.Stop - [stop] JetLauncher logs directory: <Upsource Home>\logs
[2016-10-20 11:33:22,305] DEBUG - ains.launcher.ep.commands.Stop - [stop] JetLauncher version: 1.0.24
[2016-10-20 11:33:22,306] DEBUG - ains.launcher.ep.commands.Stop - [stop] Loaded launcher plugins: [ConfPathProvider, mac-daemon-commands, win-service-commands]
[2016-10-20 11:33:22,306] DEBUG - ains.launcher.ep.commands.Stop - [stop] Upsource config folder: <Upsource Home>\conf
[2016-10-20 11:33:22,306] DEBUG - .BaseLauncherConfig$BaseParser - [stop] Using launcher configuration file: jar:file:/<Upsource Home>/launcher/lib/upsource-launcher.jar!/launcher.java.config
[2016-10-20 11:33:41,635] INFO - ains.launcher.run.AgentProcess - Upsource process finished
[2016-10-20 11:33:41,646] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.pid
[2016-10-20 11:33:41,646] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.port
[2016-10-20 11:33:41,647] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.exit.flag
[2016-10-20 11:33:41,647] DEBUG - ins.launcher.util.LauncherUtil - Deleting <Upsource Home>\logs\upsource.start.flag
[2016-10-20 11:33:41,647] DEBUG - rains.launcher.run.AgentRunner - Upsource has exited with code: 29 (EXIT)
[2016-10-20 11:33:41,647] DEBUG - rains.launcher.run.AgentRunner - Exit flag is set, launcher is exiting
[2016-10-20 11:33:41,647] INFO - rains.launcher.run.AgentRunner - Launcher is exiting
[2016-10-20 11:33:41,649] TRACE - root - ================================================================ (finish)

2)

2016-10-20T11:10:19,727 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (472108744 bytes)
2016-10-20T11:10:47,214 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (514002651 bytes)
2016-10-20T11:10:56,027 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (1340361592 bytes)
2016-10-20T11:11:05,787 [PerDiskMemtableFlushWriter_0:3] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (509006207 bytes)
2016-10-20T11:12:20,667 [CompactionExecutor:4] WARN o.a.c.i.s.format.big.BigTableWriter - Writing large partition projects/content:2 (1344898212 bytes)
2016-10-20T11:33:31,395 [[APP-WRAPPER] Proxy 1] INFO c.j.c.service.CassandraServiceMain - =================================

So far my take on the problem is that either something gets corrupted or something exceeds some maximum bound (of whatever sort), because a restart does not fix the problem either. The GUI (frontend) just keeps giving errors everywhere, and the frontend logs indicate the same problem: "NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10040".

But do let me know what else I can do to help troubleshoot the issue. If it matters, some of our projects are very "heavy": we have Java files that are thousands of lines long (6K etc.), and the analysis (PSI) service has always been a challenge.

The project also contains some large text files (schemas and the like).

I had to make a few custom changes for 3.0.x, like upping the size limit on PSI analysis (not sure what that setting is called) and greatly increasing memory allocation (around 8 GB for the PSI service, 8 GB for Cassandra, etc.).

3.5 seems to be much easier on resources for whatever reason. And I don't see an external PSI service anymore (rolled into the frontend?). Anyway, those are just my outside observations.

I just checked the logs of our current Upsource installation (build 3.0.4421), and there's definitely some data corruption going on, although so far we haven't observed any visible effects of it. Everything seems to work:

2016-10-17T11:04:17,708 [CompactionExecutor:9] ERROR o.a.c.service.CassandraDaemon - Exception in thread Thread[CompactionExecutor:9,1,main]
org.apache.cassandra.io.FSReadError: org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: C:\Upsource\data\cassandra\data\2xsonarxtrunk\psicache-b07eae40838511e6b6279d38074007b5\la-326-big-Data.db
at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:358) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:359) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:322) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:132) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:86) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) ~[apache-cassandra-2.2.4.jar:2.2.4]
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
at org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:169) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202) ~[apache-cassandra-2.2.4.jar:2.2.4]
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) ~[guava-16.0.jar:na]
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
at org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:166) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:125) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:136) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.append(MaxSSTableSizeWriter.java:67) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:247) ~[apache-cassandra-2.2.4.jar:2.2.4]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_91]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: C:\Upsource\data\cassandra\data\2xsonarxtrunk\psicache-b07eae40838511e6b6279d38074007b5\la-326-big-Data.db
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferStandard(CompressedRandomAccessReader.java:164) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:241) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.io.compress.CompressedThrottledReader.reBuffer(CompressedThrottledReader.java:44) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:346) ~[apache-cassandra-2.2.4.jar:2.2.4]
... 30 common frames omitted
Caused by: org.apache.cassandra.io.compress.CorruptBlockException: (C:\Upsource\data\cassandra\data\2xsonarxtrunk\psicache-b07eae40838511e6b6279d38074007b5\la-326-big-Data.db): corruption detected, chunk at 37863209 of length 23939.
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferStandard(CompressedRandomAccessReader.java:148) ~[apache-cassandra-2.2.4.jar:2.2.4]
... 33 common frames omitted
2016-10-17T11:04:17,708 [CompactionExecutor:9] ERROR o.a.cassandra.service.StorageService - Stopping gossiper
2016-10-17T11:04:19,728 [CompactionExecutor:9] ERROR o.a.cassandra.service.StorageService - Stopping RPC server
2016-10-17T11:04:19,728 [CompactionExecutor:9] ERROR o.a.cassandra.service.StorageService - Stopping native transport
2016-10-17T11:04:19,774 [CompactionExecutor:9] ERROR o.a.c.service.CassandraDaemon - Exception in thread Thread[CompactionExecutor:9,1,main]
org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: C:\Upsource\data\cassandra\data\2xsonarxtrunk\psicache-b07eae40838511e6b6279d38074007b5\la-326-big-Data.db
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferStandard(CompressedRandomAccessReader.java:164) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:241) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.io.compress.CompressedThrottledReader.reBuffer(CompressedThrottledReader.java:44) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:346) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:359) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:322) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:132) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:86) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) ~[apache-cassandra-2.2.4.jar:2.2.4]
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
at org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:169) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202) ~[apache-cassandra-2.2.4.jar:2.2.4]
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) ~[guava-16.0.jar:na]
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
at org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:166) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:125) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:136) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.append(MaxSSTableSizeWriter.java:67) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:247) ~[apache-cassandra-2.2.4.jar:2.2.4]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_91]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: org.apache.cassandra.io.compress.CorruptBlockException: (C:\Upsource\data\cassandra\data\2xsonarxtrunk\psicache-b07eae40838511e6b6279d38074007b5\la-326-big-Data.db): corruption detected, chunk at 37863209 of length 23939.
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferStandard(CompressedRandomAccessReader.java:148) ~[apache-cassandra-2.2.4.jar:2.2.4]
... 33 common frames omitted
2016-10-17T11:04:21,899 [MonitorCassandraServerStop] ERROR c.j.c.service.CassandraServiceMain - Exiting cassandra since listening server has been stopped

Could it be that the same problem occurs in 3.5, but it's not able to recover from it?

Please let me know if I can provide anything else to help troubleshoot the problem, or if there are settings tweaks I could try.

Cheers,

Vadim

Hi Vadim,

Sorry for the delay.

These are two different issues, one in 3.0 and one in 3.5.

First, about 3.5:

It looks like you have a really huge repository and Upsource (Cassandra) has issues dealing with that. Is it the only big project you have? If not, we can try to decrease the number of threads used for initial indexing.

Also, Cassandra becomes rather resource-consuming under heavy load and has rather specific behaviour regarding memory usage. Do not set the Cassandra heap size to more than 8 GB, and make sure there is 2-3X that memory available on the server (where X is the Cassandra Xmx), because Cassandra actively uses off-heap memory. For example, with an 8 GB heap you'd want roughly 16-24 GB of RAM available for Cassandra alone.

About 3.0:

Correct, the DB got corrupted, and it seems that only a restore would help here.

As for your question regarding the missing services, you are right: all of them were combined into the frontend. So in case of issues with code intelligence, the frontend service is the right one to give more memory.

Hi Artem,

Cheers for your answers. 

Our repo is indeed large, relatively speaking, and it's just that one project which is big. The others are much smaller.

Are there any tweaks I can make to Cassandra? Do you know what is actually going on? From the logs I don't see the root cause of the problems. I think I see the same problem with an increased Xmx or without it. Our server has 32 GB of RAM and most of it is used by Upsource.

Just in case it's of interest: 32 GB is not quite enough sometimes. Upsource (3.0.x) happily uses all of it, particularly when analysing those large Java files or SQL schema files we have in our projects.

I just tried a few more things, and the 3.5 upgrade still fails. The server has plenty of memory left when the errors begin, so I'm not sure it's related to memory usage.

This was the first error in frontend.log, after which it never recovers:

[2016-10-25 17:34:05,588] INFO S_CONTENTS_CLUSTER-1 sonar-trunk jetbrains.buildServer.VCS - Processed patch data: 283.78 Mb for VcsRoot: "sonar-trunk" {internal id=1} [null..branches/9240_1|75244]
[2016-10-25 17:34:08,336] INFO PsiIndexerQueue #1 sonar-trunk ndexing.ProjectRevisionIndexer - Indexing revision RevisionId{sonar-trunk, trunk-75307}
[2016-10-25 17:34:18,159] WARN luster1-nio-worker-2 tastaxCqlCluster$MyRetryPolicy - onRequestError. Bound statement: SELECT * FROM content WHERE projectId=? AND rowid IN ?;. Is idempotent. keyspace projects cl QUORUM nbRetry 0
com.datastax.driver.core.exceptions.TransportException: [/127.0.0.1:10040] Connection has been closed
at com.datastax.driver.core.Connection$ConnectionCloseFuture.force(Connection.java:1131)
at com.datastax.driver.core.Connection$ConnectionCloseFuture.force(Connection.java:1116)

Before the errors start I see a few entries like these. The patch data size is not exactly small (again, not sure if that matters):

[2016-10-25 17:29:04,497] INFO S_CONTENTS_CLUSTER-1 sonar-trunk jetbrains.buildServer.VCS - Processed patch data: 1008.63 Mb for VcsRoot: "sonar-trunk" {internal id=1} [null..branches/9218|75150]
[2016-10-25 17:29:19,713] INFO S_CONTENTS_CLUSTER-1 sonar-trunk jetbrains.buildServer.VCS - Processed patch data: 1201.79 Mb for VcsRoot: "sonar-trunk" {internal id=1} [null..branches/9218|75150]
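
(Side note: one quick way to tell whether the bundled Cassandra is still accepting CQL connections at the moment these errors appear, independently of Upsource, is a tiny standalone probe with the same DataStax driver. This is only a minimal sketch, assuming Cassandra's native transport is listening on 127.0.0.1:10040 as in the logs above; the later fresh install uses 10030, so adjust the port to whatever your installation actually uses.)

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class CassandraProbe {
    public static void main(String[] args) {
        // Contact point and port taken from the frontend logs above (10040 here, 10030 in the later test).
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(10040)
                .build();
             Session session = cluster.connect()) {
            // A trivial read against the system keyspace: succeeds only if the node accepts CQL connections.
            Row row = session.execute("SELECT release_version FROM system.local").one();
            System.out.println("Cassandra reachable, release " + row.getString("release_version"));
        }
    }
}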

G'day Artem,

I did one more "clean" test where I installed a fresh copy of Upsource 3.5, upped the frontend Xmx to ~15 GB and left everything else the same.

I then created a new project and left Upsource to index it. This time it progressed a bit further (I think).

So in essence that was a clean Upsource install pointed at our repo.

Whilst it was indexing I was clicking around the GUI a bit. Again, not sure if that's a problem or just incomplete indexing, but neither code navigation nor analysis was available for the indexed revisions. The square at the top right said there was no content root for this project.

Anyway, eventually, as I was going through the GUI, the same error occurred again.

As before, there's nothing in the Cassandra logs that indicates a problem.

The frontend's stderr.log has the following:

[2016-10-26 12:27:27,119] [Upsource Frontend Error] [2016-10-26 12:27:27,119] WARN Netty worker group-9 .channel.ChannelOutboundBuffer - Failed to mark a promise as success because it has succeeded already: DefaultChannelPromise@b5e3252(success)
[2016-10-26 12:30:07,077] [Upsource Frontend Error] [2016-10-26 12:30:07,076] WARN luster1-nio-worker-2 tastaxCqlCluster$MyRetryPolicy - onRequestError. Bound statement: SELECT * FROM content WHERE projectId=? AND rowid IN ?;. Is idempotent. keyspace projects cl QUORUM nbRetry 0
[2016-10-26 12:30:07,077] [Upsource Frontend Error] com.datastax.driver.core.exceptions.TransportException: [/127.0.0.1:10030] Connection has been closed
[2016-10-26 12:30:07,077] [Upsource Frontend Error] at com.datastax.driver.core.Connection$ConnectionCloseFuture.force(Connection.java:1131)
[2016-10-26 12:30:07,077] [Upsource Frontend Error] at com.datastax.driver.core.Connection$ConnectionCloseFuture.force(Connection.java:1116)
[2016-10-26 12:30:07,077] [Upsource Frontend Error] at com.datastax.driver.core.Connection.defunct(Connection.java:424)
[2016-10-26 12:30:07,077] [Upsource Frontend Error] at com.datastax.driver.core.Connection$Dispatcher.exceptionCaught(Connection.java:1049)
[2016-10-26 12:30:07,077] [Upsource Frontend Error] at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:273)

The frontend's error.log has the following:

[2016-10-26 12:30:07,094] ERROR lCluster executor #4 impl.DatabaseFlushExecutorImpl - Error writing to db 1-sonar-trunk
java.lang.RuntimeException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.jetbrains.upsource.database.datastax.WriteExecutor.execute(WriteExecutor.java:48)
at com.jetbrains.upsource.database.datastax.ThroughDbMediator.submitSync(ThroughDbMediator.java:58)
at com.jetbrains.upsource.database.datastax.buffering.CqlWriteBuffer.flush(CqlWriteBuffer.java:194)
at com.jetbrains.upsource.database.datastax.buffering.CqlBufferedQueue$StorageChunk.flush(CqlBufferedQueue.java:78)
at com.jetbrains.upsource.db.impl.AbstractBufferingWriter.flushChunk(AbstractBufferingWriter.java:143)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.a(DatabaseFlushExecutorImpl.java:253)
at __.db_1-sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.d(DatabaseFlushExecutorImpl.java:252)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.run(DatabaseFlushExecutorImpl.java:235)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.datastax.driver.core.WriteSessionWrapper.execute(WriteSessionWrapper.java:50)
at com.jetbrains.upsource.database.datastax.buffering.CqlWriteBuffer.lambda$null$19(CqlWriteBuffer.java:226)
at com.jetbrains.upsource.database.datastax.WriteExecutor.execute(WriteExecutor.java:31)
... 10 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:208)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:274)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.run(RequestHandler.java:429)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
[2016-10-26 12:30:07,094] ERROR CHANGELIST_CLUSTER-2 sonar-trunk eImpl$DriverWrapperWithLogging - Error while analyzing 'infer-changes' MAIN-{RevisionId{sonar-trunk, branches/8729-69405} @2016-Feb-12 16:08:39}-cluster=VCS_CHANGELIST_CLUSTER direction OLD. null
java.util.concurrent.CancellationException
at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2263)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.a(DatabaseFlushExecutorImpl.java:310)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.b(DatabaseFlushExecutorImpl.java:284)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.d(DatabaseFlushExecutorImpl.java:242)
at com.jetbrains.upsource.db.impl.DatabaseFlushExecutorImpl$Executor.run(DatabaseFlushExecutorImpl.java:235)
at java.lang.Thread.run(Thread.java:745)
[2016-10-26 12:30:07,096] ERROR CHANGELIST_CLUSTER-2 sonar-trunk eImpl$DriverWrapperWithLogging - Error while analyzing 'infer-changes' MAIN-{RevisionId{sonar-trunk, branches/8723-69334} @2016-Feb-10 11:35:19}-cluster=VCS_CHANGELIST_CLUSTER direction OLD. All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.jetbrains.upsource.database.datastax.ReadExecutor.runReadQuery(ReadExecutor.java:34)
at com.jetbrains.upsource.database.datastax.ThroughDbMediator.runReadQuery(ThroughDbMediator.java:75)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.getValue(CqlDynamicTable.java:61)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.a(DynamicBufferingTable.java:72)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:49)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.getValue(DynamicBufferingTable.java:63)
at com.jetbrains.upsource.messaging.cache.DynamicDistributedCacheTable.getValue(DynamicDistributedCacheTable.java:248)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeLoader.getReferenceRowId(DbFileTreeLoader.java:289)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeLoader.loadFileTree(DbFileTreeLoader.java:94)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeLoader.a(DbFileTreeLoader.java:160)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeLoader.loadFileTree(DbFileTreeLoader.java:96)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.getTree(ImportProjectInferChanges.java:95)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.analyze(ImportProjectInferChanges.java:128)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.analyze(ImportProjectInferChanges.java:108)
at com.jetbrains.upsource.backend.cli.stages.driver.DriverWrapper.analyze(DriverWrapper.java:78)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:79)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:51)
at com.jetbrains.upsource.lifetimes.LifetimeImpl.runSync(LifetimeImpl.java:76)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.indexTasks(IndexerPipelineRunner.java:49)
at com.jetbrains.upsource.backend.cli.multi.revisions.DriverClusterPipelineImpl.a(DriverClusterPipelineImpl.java:124)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$2.run(MultiProjectTaskFactory.java:176)
at __.project_sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor.executeTask(MultiProjectTaskFactory.java:173)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$1.run(MultiProjectTaskFactory.java:154)
at com.jetbrains.upsource.backend.cli.multi.executor.ProjectSyncExecutor$ProjectTasks$1$1.run(ProjectSyncExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.jetbrains.upsource.util.NamedDaemonThreadFactory.a(NamedDaemonThreadFactory.java:34)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:208)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:274)
at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:112)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:92)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:136)
... 31 more
[2016-10-26 12:30:07,140] ERROR CHANGELIST_CLUSTER-2 sonar-trunk .tasks.MultiProjectTaskFactory - Failed to execute: VCS_CHANGELIST_CLUSTER@3145[[3146]]. All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.jetbrains.upsource.database.datastax.ReadExecutor.runReadQuery(ReadExecutor.java:34)
at com.jetbrains.upsource.database.datastax.ThroughDbMediator.runReadQuery(ThroughDbMediator.java:75)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.doIterateRows(CqlDynamicTable.java:162)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.iterateRows(CqlDynamicTable.java:119)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.iterateRow(CqlDynamicTable.java:114)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.a(DynamicBufferingTable.java:114)
at com.jetbrains.upsource.stats.StatsAccumulator.a(StatsAccumulator.java:41)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:49)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:40)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.iterateRow(DynamicBufferingTable.java:110)
at com.jetbrains.upsource.backend.server.core.db.ProjectHeadsTable.getHeadsMap(ProjectHeadsTable.kt:27)
at com.jetbrains.upsource.backend.server.core.db.ProjectHeadsTable.getInitialHeadsMap(ProjectHeadsTable.kt:19)
at com.jetbrains.upsource.backend.cli.multi.revisions.RevisionsGraphForAnalyzer.getRevisionType(RevisionsGraphForAnalyzer.java:103)
at com.jetbrains.upsource.backend.cli.multi.revisions.RevisionTaskImpl.getRevisionType(RevisionTaskImpl.java:44)
at com.jetbrains.upsource.backend.cli.stages.driver.DriverWrapper.analyze(DriverWrapper.java:99)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:79)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:51)
at com.jetbrains.upsource.lifetimes.LifetimeImpl.runSync(LifetimeImpl.java:76)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.indexTasks(IndexerPipelineRunner.java:49)
at com.jetbrains.upsource.backend.cli.multi.revisions.DriverClusterPipelineImpl.a(DriverClusterPipelineImpl.java:112)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$2.run(MultiProjectTaskFactory.java:176)
at __.project_sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor.executeTask(MultiProjectTaskFactory.java:173)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$1.run(MultiProjectTaskFactory.java:154)
at com.jetbrains.upsource.backend.cli.multi.executor.ProjectSyncExecutor$ProjectTasks$1$1.run(ProjectSyncExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.jetbrains.upsource.util.NamedDaemonThreadFactory.a(NamedDaemonThreadFactory.java:34)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:208)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:274)
at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:112)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:92)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:136)
... 31 more
[2016-10-26 12:30:07,140] ERROR CHANGELIST_CLUSTER-2 sonar-trunk eImpl$DriverWrapperWithLogging - Error while analyzing 'infer-changes' MAIN-{RevisionId{sonar-trunk, branches/8723-69331} @2016-Feb-10 11:14:33}-cluster=VCS_CHANGELIST_CLUSTER direction OLD. All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.jetbrains.upsource.database.datastax.ReadExecutor.runReadQuery(ReadExecutor.java:34)
at com.jetbrains.upsource.database.datastax.ThroughDbMediator.runReadQuery(ThroughDbMediator.java:75)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.getValue(CqlDynamicTable.java:61)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.a(DynamicBufferingTable.java:72)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:49)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.getValue(DynamicBufferingTable.java:63)
at com.jetbrains.upsource.messaging.cache.DynamicDistributedCacheTable.getValue(DynamicDistributedCacheTable.java:248)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeLoader.getReferenceRowId(DbFileTreeLoader.java:289)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeLoader.loadFileTree(DbFileTreeLoader.java:94)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.getTree(ImportProjectInferChanges.java:95)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.analyze(ImportProjectInferChanges.java:128)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.analyze(ImportProjectInferChanges.java:108)
at com.jetbrains.upsource.backend.cli.stages.driver.DriverWrapper.analyze(DriverWrapper.java:78)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:79)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:51)
at com.jetbrains.upsource.lifetimes.LifetimeImpl.runSync(LifetimeImpl.java:76)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.indexTasks(IndexerPipelineRunner.java:49)
at com.jetbrains.upsource.backend.cli.multi.revisions.DriverClusterPipelineImpl.a(DriverClusterPipelineImpl.java:124)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$2.run(MultiProjectTaskFactory.java:176)
at __.project_sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor.executeTask(MultiProjectTaskFactory.java:173)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$1.run(MultiProjectTaskFactory.java:154)
at com.jetbrains.upsource.backend.cli.multi.executor.ProjectSyncExecutor$ProjectTasks$1$1.run(ProjectSyncExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.jetbrains.upsource.util.NamedDaemonThreadFactory.a(NamedDaemonThreadFactory.java:34)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:208)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:274)
at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:112)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:92)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:136)
... 29 more
[2016-10-26 12:30:07,140] ERROR CHANGELIST_CLUSTER-2 sonar-trunk eImpl$DriverWrapperWithLogging - Error while analyzing 'infer-changes' MAIN-{RevisionId{sonar-trunk, branches/8683_1-69313} @2016-Feb-09 15:27:26}-cluster=VCS_CHANGELIST_CLUSTER direction OLD. All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.jetbrains.upsource.database.datastax.ReadExecutor.runReadQuery(ReadExecutor.java:34)
at com.jetbrains.upsource.database.datastax.ThroughDbMediator.runReadQuery(ThroughDbMediator.java:75)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.doIterateRows(CqlDynamicTable.java:162)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.iterateRows(CqlDynamicTable.java:119)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.iterateRow(CqlDynamicTable.java:114)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.a(DynamicBufferingTable.java:114)
at com.jetbrains.upsource.stats.StatsAccumulator.a(StatsAccumulator.java:41)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:49)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:40)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.iterateRow(DynamicBufferingTable.java:110)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeSerializationUtil.a(DbFileTreeSerializationUtil.java:84)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:49)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeSerializationUtil.loadFileTree(DbFileTreeSerializationUtil.java:82)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeLoader.loadFileTree(DbFileTreeLoader.java:105)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeLoader.a(DbFileTreeLoader.java:160)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeLoader.loadFileTree(DbFileTreeLoader.java:96)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.getTree(ImportProjectInferChanges.java:95)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.analyze(ImportProjectInferChanges.java:128)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.analyze(ImportProjectInferChanges.java:108)
at com.jetbrains.upsource.backend.cli.stages.driver.DriverWrapper.analyze(DriverWrapper.java:78)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:79)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:51)
at com.jetbrains.upsource.lifetimes.LifetimeImpl.runSync(LifetimeImpl.java:76)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.indexTasks(IndexerPipelineRunner.java:49)
at com.jetbrains.upsource.backend.cli.multi.revisions.DriverClusterPipelineImpl.a(DriverClusterPipelineImpl.java:124)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$2.run(MultiProjectTaskFactory.java:176)
at __.project_sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor.executeTask(MultiProjectTaskFactory.java:173)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$1.run(MultiProjectTaskFactory.java:154)
at com.jetbrains.upsource.backend.cli.multi.executor.ProjectSyncExecutor$ProjectTasks$1$1.run(ProjectSyncExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.jetbrains.upsource.util.NamedDaemonThreadFactory.a(NamedDaemonThreadFactory.java:34)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:208)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:274)
at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:112)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:92)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:136)
... 36 more
[2016-10-26 12:30:07,141] ERROR CHANGELIST_CLUSTER-2 sonar-trunk .tasks.MultiProjectTaskFactory - Failed to execute: VCS_CHANGELIST_CLUSTER@37[[36]]. All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.jetbrains.upsource.database.datastax.ReadExecutor.runReadQuery(ReadExecutor.java:34)
at com.jetbrains.upsource.database.datastax.ThroughDbMediator.runReadQuery(ThroughDbMediator.java:75)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.doIterateRows(CqlDynamicTable.java:162)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.iterateRows(CqlDynamicTable.java:119)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.iterateRow(CqlDynamicTable.java:114)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.a(DynamicBufferingTable.java:114)
at com.jetbrains.upsource.stats.StatsAccumulator.a(StatsAccumulator.java:41)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:49)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:40)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.iterateRow(DynamicBufferingTable.java:110)
at com.jetbrains.upsource.backend.server.core.db.ProjectHeadsTable.getHeadsMap(ProjectHeadsTable.kt:27)
at com.jetbrains.upsource.backend.server.core.db.ProjectHeadsTable.getInitialHeadsMap(ProjectHeadsTable.kt:19)
at com.jetbrains.upsource.backend.cli.multi.revisions.RevisionsGraphForAnalyzer.getRevisionType(RevisionsGraphForAnalyzer.java:103)
at com.jetbrains.upsource.backend.cli.multi.revisions.RevisionTaskImpl.getRevisionType(RevisionTaskImpl.java:44)
at com.jetbrains.upsource.backend.cli.stages.driver.DriverWrapper.analyze(DriverWrapper.java:99)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:79)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:51)
at com.jetbrains.upsource.lifetimes.LifetimeImpl.runSync(LifetimeImpl.java:76)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.indexTasks(IndexerPipelineRunner.java:49)
at com.jetbrains.upsource.backend.cli.multi.revisions.DriverClusterPipelineImpl.a(DriverClusterPipelineImpl.java:112)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$2.run(MultiProjectTaskFactory.java:176)
at __.project_sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor.executeTask(MultiProjectTaskFactory.java:173)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$1.run(MultiProjectTaskFactory.java:154)
at com.jetbrains.upsource.backend.cli.multi.executor.ProjectSyncExecutor$ProjectTasks$1$1.run(ProjectSyncExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.jetbrains.upsource.util.NamedDaemonThreadFactory.a(NamedDaemonThreadFactory.java:34)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:208)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:274)
at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:112)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:92)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:136)
... 31 more
[2016-10-26 12:30:07,141] ERROR CHANGELIST_CLUSTER-2 sonar-trunk eImpl$DriverWrapperWithLogging - Error while analyzing 'infer-changes' MAIN-{RevisionId{sonar-trunk, branches/8683_1-69295} @2016-Feb-08 14:29:03}-cluster=VCS_CHANGELIST_CLUSTER direction OLD. All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.jetbrains.upsource.database.datastax.ReadExecutor.runReadQuery(ReadExecutor.java:34)
at com.jetbrains.upsource.database.datastax.ThroughDbMediator.runReadQuery(ThroughDbMediator.java:75)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.doIterateRows(CqlDynamicTable.java:162)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.iterateRows(CqlDynamicTable.java:119)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.iterateRow(CqlDynamicTable.java:114)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.a(DynamicBufferingTable.java:114)
at com.jetbrains.upsource.stats.StatsAccumulator.a(StatsAccumulator.java:41)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:49)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:40)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.iterateRow(DynamicBufferingTable.java:110)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeSerializationUtil.a(DbFileTreeSerializationUtil.java:84)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:49)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeSerializationUtil.loadFileTree(DbFileTreeSerializationUtil.java:82)
at com.jetbrains.upsource.backend.server.core.tree.DbFileTreeLoader.loadFileTree(DbFileTreeLoader.java:105)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.getTree(ImportProjectInferChanges.java:95)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.analyze(ImportProjectInferChanges.java:128)
at com.jetbrains.upsource.backend.cli.stages.driver.impl.ImportProjectInferChanges.analyze(ImportProjectInferChanges.java:108)
at com.jetbrains.upsource.backend.cli.stages.driver.DriverWrapper.analyze(DriverWrapper.java:78)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:79)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:51)
at com.jetbrains.upsource.lifetimes.LifetimeImpl.runSync(LifetimeImpl.java:76)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.indexTasks(IndexerPipelineRunner.java:49)
at com.jetbrains.upsource.backend.cli.multi.revisions.DriverClusterPipelineImpl.a(DriverClusterPipelineImpl.java:124)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$2.run(MultiProjectTaskFactory.java:176)
at __.project_sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor.executeTask(MultiProjectTaskFactory.java:173)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$1.run(MultiProjectTaskFactory.java:154)
at com.jetbrains.upsource.backend.cli.multi.executor.ProjectSyncExecutor$ProjectTasks$1$1.run(ProjectSyncExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.jetbrains.upsource.util.NamedDaemonThreadFactory.a(NamedDaemonThreadFactory.java:34)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:208)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:274)
at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:112)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:92)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:136)
... 34 more
[2016-10-26 12:30:07,141] ERROR CHANGELIST_CLUSTER-2 sonar-trunk .tasks.MultiProjectTaskFactory - Failed to execute: VCS_CHANGELIST_CLUSTER@36[[35]]. All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.jetbrains.upsource.database.datastax.ReadExecutor.runReadQuery(ReadExecutor.java:34)
at com.jetbrains.upsource.database.datastax.ThroughDbMediator.runReadQuery(ThroughDbMediator.java:75)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.doIterateRows(CqlDynamicTable.java:162)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.iterateRows(CqlDynamicTable.java:119)
at com.jetbrains.upsource.database.datastax.CqlDynamicTable.iterateRow(CqlDynamicTable.java:114)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.a(DynamicBufferingTable.java:114)
at com.jetbrains.upsource.stats.StatsAccumulator.a(StatsAccumulator.java:41)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:49)
at com.jetbrains.upsource.stats.StatsAccumulator.decorate(StatsAccumulator.java:40)
at com.jetbrains.upsource.db.impl.DynamicBufferingTable.iterateRow(DynamicBufferingTable.java:110)
at com.jetbrains.upsource.backend.server.core.db.ProjectHeadsTable.getHeadsMap(ProjectHeadsTable.kt:27)
at com.jetbrains.upsource.backend.server.core.db.ProjectHeadsTable.getInitialHeadsMap(ProjectHeadsTable.kt:19)
at com.jetbrains.upsource.backend.cli.multi.revisions.RevisionsGraphForAnalyzer.getRevisionType(RevisionsGraphForAnalyzer.java:103)
at com.jetbrains.upsource.backend.cli.multi.revisions.RevisionTaskImpl.getRevisionType(RevisionTaskImpl.java:44)
at com.jetbrains.upsource.backend.cli.stages.driver.DriverWrapper.analyze(DriverWrapper.java:99)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:79)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.a(IndexerPipelineRunner.java:51)
at com.jetbrains.upsource.lifetimes.LifetimeImpl.runSync(LifetimeImpl.java:76)
at com.jetbrains.upsource.backend.cli.stages.driver.IndexerPipelineRunner.indexTasks(IndexerPipelineRunner.java:49)
at com.jetbrains.upsource.backend.cli.multi.revisions.DriverClusterPipelineImpl.a(DriverClusterPipelineImpl.java:112)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$2.run(MultiProjectTaskFactory.java:176)
at __.project_sonar-trunk.__(JavaGeneratorTemplate.java:44)
at org.jonnyzzz.stack.NamedStackFrame.frame(NamedStackFrame.java:48)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor.executeTask(MultiProjectTaskFactory.java:173)
at com.jetbrains.upsource.backend.cli.multi.tasks.MultiProjectTaskFactory$TaskExecutor$1.run(MultiProjectTaskFactory.java:154)
at com.jetbrains.upsource.backend.cli.multi.executor.ProjectSyncExecutor$ProjectTasks$1$1.run(ProjectSyncExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.jetbrains.upsource.util.NamedDaemonThreadFactory.a(NamedDaemonThreadFactory.java:34)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:208)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:274)
at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:112)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:92)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:136)
... 31 more
[2016-10-26 12:30:07,141] ERROR CHANGELIST_CLUSTER-2 sonar-trunk eImpl$DriverWrapperWithLogging - Error while analyzing 'infer-changes' MAIN-{RevisionId{sonar-trunk, branches/8683_1-69290} @2016-Feb-08 10:38:42}-cluster=VCS_CHANGELIST_CLUSTER direction OLD. All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:10030 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)

AND MUCH MORE AFTER THAT :)

At this stage I don't see what else I can try.
Looking forward to your suggestions.
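
(For what it's worth, the hint in the driver message points at the DataStax Java driver's connection pool settings. I haven't found anything in Upsource that exposes those pooling options - it builds the Cassandra Cluster internally - so the snippet below is only a rough sketch of what raising the per-host limits looks like against the plain driver 3.x API, not something I've actually been able to apply here. The contact point and port are the ones from the errors above; the class name and numbers are made up for illustration.)

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class PoolingSketch {
    public static void main(String[] args) {
        // Bigger pool so queries are less likely to time out waiting for a free connection.
        PoolingOptions pooling = new PoolingOptions()
                .setCoreConnectionsPerHost(HostDistance.LOCAL, 2)
                .setMaxConnectionsPerHost(HostDistance.LOCAL, 8)
                .setMaxRequestsPerConnection(HostDistance.LOCAL, 1024)
                .setPoolTimeoutMillis(10000); // give up faster instead of queueing forever

        // 127.0.0.1:10030 is the address reported in the stack traces above.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(10030)
                .withPoolingOptions(pooling)
                .build();

        try (Session session = cluster.connect()) {
            Row row = session.execute("SELECT release_version FROM system.local").one();
            System.out.println("Connected to Cassandra " + row.getString("release_version"));
        } finally {
            cluster.close();
        }
    }
}

If there is a supported way to pass this kind of pooling configuration (or equivalent timeouts) to the Cassandra instance bundled with Upsource, that would be a useful thing to try.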

 


Hi Vadim,

To be honest, we're out of ideas for now. Please send us the full Upsource logs from your latest installation to upsource-support at jetbrains.com.

It's really hard to read them in a forum comment ;)

Thank you in advance.


Hello, 

I found your discussion here and would like to join in. I'm facing the same issue:

Internal error: All host(s) tried for query failed (tried: /127.0.0.1:10040 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))

 

The logs can be downloaded here - https://mega.nz/#!ZEA2nTRK!CeG5cr7k6syBR_jFKuThO5OvqFKPFFPiMQKo2m-neaA (I could not upload them here - I kept getting an error saying my file is an image and larger than 2 MB).

I'm looking forward to your answer. Because of this issue, we cannot update the Upsource server :(

 

Best regards, Sergii Sydorenko
