upsource:2018.1.357 upgrade to upsource:2018.1.584 failing

I get my upgrade URL and am able to start the upgrade successfully; however, after that Upsource enters a failure loop where it can't start.

[Upsource Error] [2018-08-08 16:44:14,909] ERROR - bundle.startup - Error while starting JetBrains Upsource 2018.1: Service hub was started successfully, but shut down process has been initiated


Unfortunately, that's the only error I see in the log, apart from this one that appears first:

[Upsource Error] ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.
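The StatusLogger line is Log4j2's own hint and is usually harmless noise. If the extra initialization logging is wanted, the system property it names can be passed to the JVM. A minimal sketch, assuming JVM flags reach Upsource through an environment variable such as JAVA_OPTS (how your installation passes JVM options may differ):

```shell
# Append the Log4j2 status-logger property to the JVM options.
# JAVA_OPTS is an assumption here; use whatever mechanism your
# Upsource installation provides for extra JVM system properties.
JAVA_OPTS="$JAVA_OPTS -Dorg.apache.logging.log4j.simplelog.StatusLogger.level=TRACE"
echo "$JAVA_OPTS"
```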

1 comment

We can scratch this one. It was a change I introduced around liveness checks in Kubernetes: since Upsource does not return a 200 during upgrades, the probe fails and Kubernetes kills the container.

I had to change the initial delay:

        readinessProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 10
          initialDelaySeconds: 15
          timeoutSeconds: 5
          failureThreshold: 5
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 10
          initialDelaySeconds: 600
          timeoutSeconds: 5
          failureThreshold: 5
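To see why raising initialDelaySeconds breaks the loop, here is a toy simulation of the probe timeline (numbers are illustrative and the kubelet model is simplified; real probes also involve successThreshold and restart backoff):

```python
def probe_kills_container(upgrade_duration, initial_delay, period, failure_threshold):
    """Simulate a liveness probe against a service that returns non-200
    until the upgrade finishes. Returns True if the kubelet would hit
    failure_threshold consecutive failures and restart the container."""
    failures = 0
    t = initial_delay  # first probe fires after the initial delay
    while t < upgrade_duration:  # every probe during the upgrade fails
        failures += 1
        if failures >= failure_threshold:
            return True  # container gets killed mid-upgrade
        t += period
    return False  # upgrade finished before enough probes failed

# With the old 15s delay, a multi-minute upgrade is killed after
# 5 failed probes (~55s in); with a 600s delay the probes only
# start after the upgrade has finished.
print(probe_kills_container(300, 15, 10, 5))   # True
print(probe_kills_container(300, 600, 10, 5))  # False
```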

This isn't ideal; I do wish a 200 were returned during upgrades.