From the frontline: Upgrading the Octopus Deploy Server

Note: I’ve had this post ready for months but completely forgot about it. It’s about four months out of date, but still interesting, so I’ve decided to post it anyway.
We’ve been planning on upgrading for months (since late 2014), but with a jam-packed schedule it was difficult to fit in. On top of that, we weren’t sure which version to jump to: 2.5.12 or 2.6. We’ve been running Octopus Deploy 2.5.4 since late July 2014, and while it’s been “working”, we were long overdue for an update. We should’ve upgraded in October or November, but we just had too much going on. With performance updates and other patches that we’ve needed for months, we finally pushed ahead.
The upgrade process within Octopus is relatively straightforward; however, there are several factors to take into account. Some of these factors we were aware of, but others we weren’t.

  • Shutting down – before our upgrade, shutting down Octopus Deploy took approximately 5-8 minutes. That’s understandable given all the communication and resources that Octopus Deploy manages – but this time we were waiting over 40 minutes, and mind you, this was BEFORE we performed the upgrade. We ended up deciding to “end task” the Octopus Server service, which isn’t something I want to keep doing. This was totally unexpected – we never had an issue with it before. (There’s a rough sketch of scripting the stop-and-wait, with a kill as a last resort, after this list.)
  • RavenDB size – this matters in terms of the time required to re-index RavenDB. When you upgrade Octopus Deploy, RavenDB starts re-indexing immediately after the upgrade finishes. Given that our production database is over 800 MB (which probably isn’t that large), re-indexing can take a while: our tests showed over 2 hours (on a 2-core/4 GB RAM server). My understanding is that the RavenDB embedded license (under ISV, select “embedded”) allows a maximum of 3 CPU cores and 6 GB of RAM. From a processing perspective, this sucks for us because we have 10 cores available. That said, RavenDB is fairly intelligent when it comes to re-indexing. Even in our tests on smaller servers, you can see some of the RavenDB settings/statuses while the re-indexing is happening (a small sketch for polling these from the command line follows this list).
    [Image upgrade-ravendb-indexes: RavenDB status panel during re-indexing]

    Statistically, there are a lot of interesting things going on in RavenDB, but the most interesting was “CurrentNumberOfItemsToIndexInSingleBatch”. This number actually fluctuated during the re-indexing, which is something I didn’t expect. Our initial upgrade tests were done on a less powerful machine (2 cores, 4 GB RAM), and while its re-indexing was happening, the “items to index in a single batch” value sat at 1024 and the “items to reduce in a single batch” value at 512. Understandably, that machine had fewer resources to work with, but as re-indexing fired up on production, we could see the memory growing substantially as the process got underway. At its peak, RavenDB was batching 65,536 items.
    [Image upgrade-ravendb: Increase in Index in Single Batch parameter]
  • At nearly 11 GB of memory, RavenDB was utilizing what it could for re-indexing – which is great. I’d much rather it use the resources available than stick to hard-coded default values. Our 3 cores were chugging along, and the RAM usage seemed higher than the 6 GB limit. The total indexing time was around 2 hours, so some time was definitely shaved off by RavenDB. I expected this; I just wasn’t sure how much faster the re-indexing would be.
    [Image upgrade-resources: Memory during RavenDB re-indexing]
  • Upgrading tentacles – with 700+ tentacles, updating all of them, even in batches, is bound to have some hiccups. Hitting the “Upgrade all tentacles” button no longer works completely by magic (and rightfully so). Our upgrades from 2.0 all the way up to 2.5.4 worked very well, but today the number of tentacles we have easily dwarfs the number we were upgrading back in July. Looking at the task’s logs, the maximum number of tentacles that can be upgraded in a single batch is 10. Well, 700 / 10 = 70 batches, and 70 × (time to install and restart the service) = total time. Seems easy enough (a quick back-of-the-envelope estimate follows this list).
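
As an aside on the shutdown problem: rather than staring at the Services console, the stop-and-wait (and the “end task” fallback we resorted to) can be scripted. This is only a rough sketch, not anything official from Octopus; the service name, process name, and timeout below are assumptions you’d want to verify against your own install.

    # Hypothetical helper: ask the Octopus Deploy Windows service to stop, wait for
    # a clean shutdown, and only "end task" the process as a last resort.
    # SERVICE_NAME, PROCESS_NAME and the timeout are assumptions - verify them first.
    import subprocess
    import time

    SERVICE_NAME = "OctopusDeploy"       # assumed service name (check `sc query`)
    PROCESS_NAME = "Octopus.Server.exe"  # assumed process name (check Task Manager)
    TIMEOUT_SECONDS = 15 * 60            # how long we are willing to wait for a clean stop
    POLL_SECONDS = 30

    def service_state(name: str) -> str:
        """Return the STATE line reported by `sc query`, e.g. '4  RUNNING'."""
        out = subprocess.run(["sc", "query", name], capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "STATE" in line:
                return line.strip()
        return "UNKNOWN"

    if __name__ == "__main__":
        # Request a stop, then poll until the service reports STOPPED or we give up.
        subprocess.run(["sc", "stop", SERVICE_NAME], capture_output=True, text=True)
        deadline = time.time() + TIMEOUT_SECONDS
        while time.time() < deadline:
            state = service_state(SERVICE_NAME)
            print(time.strftime("%H:%M:%S"), state)
            if "STOPPED" in state:
                print("Service stopped cleanly.")
                break
            time.sleep(POLL_SECONDS)
        else:
            # The scripted equivalent of the manual "end task" we had to do.
            print("Timed out waiting for a clean stop; killing the process.")
            subprocess.run(["taskkill", "/F", "/IM", PROCESS_NAME])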
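
On the RavenDB side, the statuses we watched in the panel can also be polled over HTTP while the re-indexing runs. A minimal sketch, assuming the embedded RavenDB endpoint is reachable (the URL below is a placeholder; the host/port depends on how your Octopus server is configured) and that it serves the standard RavenDB /stats JSON document; the field names here are simply the ones the status panel showed us, so check them against your own stats output.

    # Rough sketch: poll the embedded RavenDB stats document while re-indexing runs.
    # The URL is a placeholder - the host/port depends on your Octopus configuration.
    import json
    import time
    import urllib.request

    RAVEN_STATS_URL = "http://localhost:8080/stats"  # placeholder, adjust for your install
    POLL_SECONDS = 60

    while True:
        with urllib.request.urlopen(RAVEN_STATS_URL) as resp:
            stats = json.load(resp)
        # Field names as they appeared in the RavenDB status panel during re-indexing.
        stale = stats.get("StaleIndexes") or []
        print(
            time.strftime("%H:%M:%S"),
            "stale indexes:", len(stale),
            "| items to index per batch:", stats.get("CurrentNumberOfItemsToIndexInSingleBatch"),
            "| items to reduce per batch:", stats.get("CurrentNumberOfItemsToReduceInSingleBatch"),
        )
        if not stale:
            print("No stale indexes reported - re-indexing looks finished.")
            break
        time.sleep(POLL_SECONDS)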
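
And for the tentacle upgrade, the batch math from the task log turns into a quick estimate. The minutes-per-batch figure here is an assumed number; measure it from your own task logs.

    # Back-of-the-envelope estimate for the "Upgrade all tentacles" task, using the
    # 10-per-batch limit from the task log. Minutes per batch is an assumed figure.
    import math

    tentacle_count = 700
    batch_size = 10          # max tentacles upgraded per batch (from the task log)
    minutes_per_batch = 3    # assumed install + service-restart time; measure your own

    batches = math.ceil(tentacle_count / batch_size)
    print(f"{batches} batches x {minutes_per_batch} min/batch = {batches * minutes_per_batch} minutes total")
    # -> 70 batches x 3 min/batch = 210 minutes total (with the assumed per-batch time)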

While our upgrade to 2.5.12 was successful, there’s more to this tale; I’ll leave that for another post. From an upgrading perspective, I figured it’s interesting to note what things actually look like while the upgrade is happening. The Octopus documentation covers the steps for upgrading within Octopus 2.0, but there’s not a lot of detail about what happens along the way.