Last year MongoDB released v3.2 of their database, shipping with WiredTiger, the same storage engine first introduced in v3.0. WiredTiger performs document-level locking during writes, touted as a huge improvement over the database- and collection-level locks of the previous 2.6 version.
We did extensive research on storage engine selection, especially comparing Percona Server for MongoDB 3.2, which supports multiple storage engines (InMemory, RocksDB, PerconaFT), against the official MongoDB 3.2, which supports InMemory and WiredTiger.
Here is a sample result from our YCSB (Yahoo! Cloud Serving Benchmark) trials for insert/read operations.
As seen in the two benchmarks above, the best choice was InMemory, with WiredTiger in second place. We nevertheless decided to use WiredTiger from the official MongoDB build, since we could not afford the amount of RAM required to hold the entire database with the InMemory storage engine.
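For context, a YCSB run against such a deployment looks roughly like the following sketch; the workload file, database name, and connection URL here are placeholders, not our exact benchmark parameters.

```shell
# Hypothetical YCSB invocation (workload, database name, and URL are
# placeholders, not our exact benchmark settings).
# "load" populates the collection; "run" then executes the workload's
# insert/read mix and reports throughput and latency.
./bin/ycsb load mongodb -s -P workloads/workloada \
    -p mongodb.url="mongodb://localhost:27017/ycsb"
./bin/ycsb run mongodb -s -P workloads/workloada \
    -p mongodb.url="mongodb://localhost:27017/ycsb"
```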
How did we move our 2.6 cluster to a 3.2 replica set with minimum downtime?
Our old cluster looked like this complicated diagram:
But luckily, we only had one shard at the time: 3 mongoS instances, 3 mongoC (config server) instances, and 3 mongoD instances.
Our first migration attempt was to add a new 3.2 secondary to the cluster, but all the mongoS instances kept restarting erratically without any clear log messages, which made the database unavailable to the application, so we abandoned this approach immediately.
The second attempt was to downgrade the 2.6 cluster to a plain replica set, then upgrade that to a 3.2 replica set. But here we faced two main challenges:
- The application was connecting to the mongoS port, not mongoD.
- MongoD was configured with a security key (keyFile) that is only provided to mongoS; the clients (the application) do not have it, since they authenticate against mongoS with a username/password.
If you remove the key from mongoD and restart it, mongoS will no longer be able to connect to it, and neither will the clients.
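To illustrate the dependency, here is a sketch of the relevant security settings, using throwaway files in place of the real config files (all paths are placeholders): mongod and mongoS must share the same keyFile for internal authentication, while the application only holds a username/password for mongoS.

```shell
# Sketch only: throwaway files stand in for /etc/mongod.conf and the mongos
# config; the key path is a placeholder.
mongod_conf=$(mktemp)
mongos_conf=$(mktemp)

cat > "$mongod_conf" <<'EOF'
security:
  keyFile: /etc/mongodb-keyfile   # internal auth between cluster members
  authorization: enabled
EOF

cat > "$mongos_conf" <<'EOF'
security:
  keyFile: /etc/mongodb-keyfile   # must be identical to mongod's key
EOF
```

Removing the keyFile line from the mongod config alone breaks this shared secret, which is why mongoS, and therefore the clients, lose access.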
The only solution was:
- Stopping the application for about 30 seconds.
- Changing the connection string in the application to point to mongoD.
- Removing the mongoD key configuration and restarting it.
- Starting the application again.
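The steps above can be sketched as follows. The service names, file paths, and ports are placeholders, and the commands that touch live services are shown as comments since they depend on your deployment; a temp file stands in for the real mongod config.

```shell
# Switch-over sketch: a temp file stands in for /etc/mongod.conf;
# service names and ports below are placeholders.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net:
  port: 27018
security:
  keyFile: /etc/mongodb-keyfile
EOF

# 1. Stop the application:
#      systemctl stop myapp
# 2. Point the app's connection string at mongoD instead of mongoS
#    (e.g. mongos-host:27017 -> mongod-host:27018 in the app config).
# 3. Drop the keyFile option from the mongod config and restart it:
sed -i '/^[[:space:]]*keyFile:/d' "$conf"
#      systemctl restart mongod
# 4. Start the application again:
#      systemctl start myapp
```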
And it worked 🙂
Moving from the 2.6 replica set to a 3.2 replica set was the easy part. We brought up 3 new mongo 3.2 instances and added them to the old replica set.
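Adding the new members is done from the mongo shell on the current primary, roughly like this (hostnames and ports are placeholders):

```shell
# Run against the current primary; hostnames/ports are placeholders.
mongo old-primary.example.com:27017 --eval 'rs.add("new-mongo-1.example.com:27017")'
mongo old-primary.example.com:27017 --eval 'rs.add("new-mongo-2.example.com:27017")'
mongo old-primary.example.com:27017 --eval 'rs.add("new-mongo-3.example.com:27017")'

# Watch initial sync: wait until every member reports SECONDARY (or PRIMARY).
mongo old-primary.example.com:27017 --eval '
  rs.status().members.forEach(function (m) {
    print(m.name + " -> " + m.stateStr);
  })'
```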
Once all of them had synced the data:
- We added them to the database connection string in the application and reloaded it.
- We froze the two old secondaries and stepped down the old primary, so that a new primary was elected from the new instances.
- We removed the old instances from the connection string, then reloaded the application.
- We removed the old instances from the replica set using rs.remove().
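The failover itself maps onto three mongo shell helpers, rs.freeze(), rs.stepDown(), and rs.remove(), roughly as sketched below; the hostnames and ports are placeholders.

```shell
# Hostnames/ports are placeholders; run each command against the host named
# in it. rs.freeze(seconds) stops a member from seeking election for that long.
mongo old-secondary-1.example.com:27017 --eval 'rs.freeze(300)'
mongo old-secondary-2.example.com:27017 --eval 'rs.freeze(300)'

# Step the old primary down; with the old secondaries frozen, one of the
# new 3.2 members wins the election. (The shell connection drops here.)
mongo old-primary.example.com:27017 --eval 'rs.stepDown()'

# After repointing and reloading the application, remove the old members
# from the new primary.
mongo new-primary.example.com:27017 --eval '
  ["old-primary.example.com:27017",
   "old-secondary-1.example.com:27017",
   "old-secondary-2.example.com:27017"].forEach(function (h) { rs.remove(h); })'
```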
And all we have now are mongo 3.2 instances.