MongoDB : Sharding

Sharding

MongoDB sharding is based on a shard key. The range of shard-key values is partitioned across the shards, e.g.

k1 -> k2 on shard1
k2 -> k3 on shard2, and so on.

Each shard is then replicated for higher availability, disaster recovery, etc. Sharding is therefore range based, and it is done on a per-collection basis. Range-based sharding makes range queries efficient.

The documents in a contiguous range of shard-key values on a particular shard are known as a chunk (~64 MB by default).

There are two operations that happen in the background during sharding, and both are done automatically for us.

  1.  Split – splits a range when it is producing chunks that are too big; this is fairly inexpensive, since it is only a metadata change.
  2. Migrate – moves chunks somewhere else in the cluster; this is somewhat expensive.
    • Between a pair of shards there will be at most one migration in flight.
    • We can still read and write the data while it is migrating. So, it is live.
    • The “Balancer” decides when to do the balancing. Today it balances on the number of chunks.

Both of these are done to keep the shards balanced w.r.t. the documents.
The metadata about these shards and our system is stored in config servers. These are lightweight.
Conceptually these shards are processes, not necessarily separate physical or virtual machines, although they can be and in production most likely will be.

Mongos gives the client the big picture of the whole setup. The client is therefore insulated from the underlying architecture used to implement sharding, replication, etc.


End client applications should go through mongos.

To create a shard, connect to mongos, then:

sh.addShard("hostname:port")
sh.enableSharding("dbname")
db.collection.ensureIndex({ shardKey: 1 })               // key pattern for your shard key
sh.shardCollection("dbname.collection", { shardKey: 1 })
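
For example, a minimal sketch with hypothetical names: an orders collection in mydb, sharded on customerId.

sh.addShard("shard0.example.com:27017")
sh.enableSharding("mydb")
db.orders.ensureIndex({ customerId: 1 })
sh.shardCollection("mydb.orders", { customerId: 1 })
sh.status()   // verify the shards and chunk ranges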

Choosing a shard key

– Field should be involved in most of the queries
– Good cardinality/granularity
– Shard key should not increase monotonically
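
For instance, a monotonically increasing key (like the default _id) funnels all inserts to the shard holding the highest range. A hashed shard key is one way around that, assuming a MongoDB version that supports it (2.4+):

sh.shardCollection("mydb.events", { _id: "hashed" })   // spreads inserts across shards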


MongoDB – Replication

Replication

Replication helps us achieve availability and fault tolerance. A replica set is a set of mongod nodes that replicate data amongst each other asynchronously. One node in the replica set is the primary while the rest of them are secondaries.
Writes only happen on the primary. If the primary goes down then an election happens and a new primary comes up.
The minimum number of nodes is 3, since an election requires a majority of the original set.
If there were only 2 nodes then the one remaining after a failure is not a majority and you would not be able to write.

Replica Set Elections

Regular Node : It has the data and can become primary or secondary.
Arbiter : It is just there for voting purposes. We need it if we want an even number of data-bearing nodes.
It has no data on it.
Delayed : A regular node that applies operations a set amount of time (e.g. a few hours) behind the others. It cannot become primary. Its priority is set to 0.
Hidden Node : It cannot become a primary node. Its priority is set to 0.
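
These member types are just settings in the replica set configuration. A minimal sketch, with hypothetical host names (slaveDelay is in seconds):

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "db0.example.com:27017" },
    { _id: 1, host: "db1.example.com:27017" },
    { _id: 2, host: "db2.example.com:27017", priority: 0, hidden: true },    // hidden node
    { _id: 3, host: "db3.example.com:27017", priority: 0, slaveDelay: 3600 } // delayed node
  ]
})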

By default the reads and writes go to the primary. You can go to a secondary for reading, which means that you might read stale data. The lag between nodes is not guaranteed, since the replication process is asynchronous. If you read from a secondary then what you have is “eventual consistency”.

rs.slaveOk() -- ok to read from the secondary
rs.isMaster() -- checks whether the node is master
rs.status() -- gives the current status of the replica set.

In the local database, the collection oplog.rs has all the operation logs. Replication happens when secondary nodes query the primary for the changes since a given timestamp. The oplog is a statement-based replication log.
Replication happens in a statement-driven manner:
e.g. if a statement deletes 100 documents on the primary then 100 individual operations are sent to the secondaries to execute. There is no binary replication. This allows us to run different versions of MongoDB on different machines.
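
You can peek at the oplog yourself from the shell, e.g. to see the most recent operation:

use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(1)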

  • Try to keep the OpLog small on 64-bit machines, since it defaults to a large value on 64-bit systems.
  • For replica sets don’t use localhost or the ip address of the machine.
  • Use a logical name, that is the best practice.
  • Use DNS.
  • Pick an appropriate DNS TTL as well.

MongoDB – Understanding your queries through Explain Plan

Understanding the queries we write is very critical, and MongoDB does a good job here. Developers will find it easy to understand what their queries are doing and where to look for bottlenecks. Well-defined and well-documented parameters make life a lot easier.
The details have been taken from the mongodb website and presented here for continuity of series.

Explain Output Fields

explain.cursor
cursor is a string that reports the type of cursor used by the query operation:

BasicCursor indicates a full collection scan.
BtreeCursor indicates that the query used an index. The cursor includes the name of the index. When a query uses an index, the output of explain() includes indexBounds details.
GeoSearchCursor indicates that the query used a geospatial index.
explain.isMultiKey
isMultiKey is a boolean. When true, the query uses a multikey index, where one of the fields in the index holds an array.

explain.n
n is a number that reflects the number of documents that match the query selection criteria.

explain.nscannedObjects
Specifies the total number of documents scanned during the query. The nscannedObjects may be lower than nscanned, such as if the index covers a query. See indexOnly. Additionally, the nscannedObjects may be lower than nscanned in the case of multikey index on an array field with duplicate documents.

explain.nscanned
Specifies the total number of documents or index entries scanned during the database operation. You want n and nscanned to be as close in value as possible. The nscanned value may be higher than the nscannedObjects value, such as if the index covers a query. See indexOnly.

explain.nscannedObjectsAllPlans
nscannedObjectsAllPlans is a number that reflects the total number of documents scanned for all query plans during the database operation.

explain.nscannedAllPlans
nscannedAllPlans is a number that reflects the total number of documents or index entries scanned for all query plans during the database operation.

explain.scanAndOrder
scanAndOrder is a boolean that is true when the query cannot use the index for returning sorted results.

When true, MongoDB must sort the documents after it retrieves them from either an index cursor or a cursor that scans the entire collection.

explain.indexOnly
indexOnly is a boolean value that returns true when the query is covered by the index indicated in the cursor field. When an index covers a query, MongoDB can both match the query conditions and return the results using only the index because:

all the fields in the query are part of that index, and
all the fields returned in the results set are in the same index.
explain.nYields
nYields is a number that reflects the number of times this query yielded the read lock to allow waiting writes to execute.

explain.nChunkSkips
nChunkSkips is a number that reflects the number of documents skipped because of active chunk migrations in a sharded system. Typically this will be zero. A number greater than zero is ok, but indicates a little bit of inefficiency.

explain.millis
millis is a number that reflects the time in milliseconds to complete the query.

explain.indexBounds
indexBounds is a document that contains the lower and upper index key bounds. This field resembles one of the following:

"indexBounds" : {
  "start" : { <index key1> : <value>, ... },
  "end" : { <index key1> : <value>, ... }
},

"indexBounds" : { "<field>" : [ [ <lower bound>, <upper bound> ] ],
                  ...
}
explain.allPlans
allPlans is an array that holds the list of plans the query optimizer runs in order to select the index for the query. Displays only when the parameter to explain() is true or 1.

explain.oldPlan
oldPlan is a document value that contains the previous plan selected by the query optimizer for the query. Displays only when the parameter to explain() is true or 1.

explain.server
server is a string that reports the MongoDB server.
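
Putting the core fields together, a (hypothetical) explain() of an indexed query might look like this; the collection and numbers are made up:

> db.students.find({ class: "math" }).explain()
{
  "cursor" : "BtreeCursor class_1",
  "isMultiKey" : false,
  "n" : 42,
  "nscannedObjects" : 42,
  "nscanned" : 42,
  "scanAndOrder" : false,
  "indexOnly" : false,
  "nYields" : 0,
  "nChunkSkips" : 0,
  "millis" : 2,
  "indexBounds" : { "class" : [ [ "math", "math" ] ] },
  "server" : "localhost:27017"
}

Here n and nscanned are equal, which is exactly what you want to see.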

$or Query Output Fields
explain.clauses
clauses is an array that holds the Core Explain Output Fields information for each clause of the $or expression. clauses is only included when the clauses in the $or expression use indexes.

Sharded Collections Output Fields
explain.clusteredType
clusteredType is a string that reports the access pattern for shards. The value is:

ParallelSort, if the mongos queries shards in parallel.
SerialServer, if the mongos queries shards sequentially.
explain.shards
shards contains fields for each shard in the cluster accessed during the query. Each field holds the Core Explain Output Fields for that shard.

explain.millisShardTotal
millisShardTotal is a number that reports the total time in milliseconds for the query to run on the shards.

explain.millisShardAvg
millisShardAvg is a number that reports the average time in milliseconds for the query to run on each shard.

explain.numQueries
numQueries is a number that reports the total number of queries executed.

explain.numShards
numShards is a number that reports the total number of shards queried.

Lastly, some more profiling tips that can be pretty useful.

Logging Slow Queries

  • There are 3 levels for the profiler: 0 – off, 1 – slow queries only, 2 – all queries.
  • Enable it at startup using mongod --profile 1 --slowms 2
  • db.getProfilingLevel() gets the current profiling level.
  • To get all queries that took longer than 3 seconds, ordered by timestamp descending:
  • db.system.profile.find({millis:{$gt:3000}}).sort({ts:-1})
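
Profiling can also be switched on at runtime from the shell; the 100 ms threshold below is an arbitrary choice:

db.setProfilingLevel(1, 100)   // profile anything slower than 100 ms
db.getProfilingStatus()        // { "was" : 1, "slowms" : 100 }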

Mongotop can be run at a specific time interval to determine where the majority of the time is being spent. This utility tracks the time that mongod spends on reads and writes. The information is on a per-collection basis.
I prefer to increase the reporting interval from the default 1 sec to somewhere close to 5 sec or more (this is just a number that I feel comfortable with; otherwise the data is too verbose to make any sense of).
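
The interval is just a command-line argument; for example, to report every 5 seconds:

mongotop 5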

This should be a good start. We will go into some more concepts about monitoring mongodb before going into sharding etc..

MongoDB : Indexing

Indexing is one of the most important concepts for any database. Without indexes the mongod process would scan the entire collection and all the documents it contains to obtain the result of a query. Indexes are defined per collection, and properties as well as sub-fields are supported. Briefly, MongoDB supports the following types of indexes:

  • Single Field Indexes : Think about having an index on a column in RDBMS.
  • Compound Indexes: Think about having an index on multiple columns in RDBMS.
  • Multikey Indexes: This is unique to MongoDB; it references an array and succeeds if there is a match for any value in the array.
  • Geospatial Indexes and Queries: Allow you to index geo data. I really don’t know enough about this to comment; the MongoDB website is the best source.
  • Text Indexes: For full-text search inside a document. Should we use Lucene instead? Not sure.
  • Hashed Index: Index on hashed contents of the fields.

Indexes have properties associated with them :

  • TTL: This is a rather surprising feature. I never expected such a feature to be available on indexes, but after giving it some thought, it makes sense: the index is used to automatically expire documents after a given time.
  • Unique: Only documents with unique values on the field are permitted.
  • Sparse: Really useful if you are going to have sparse fields. It leaves out the documents that do not have the field.

Some more tid-bits
– The order of the keys in an index matters.
e.g. if an index is created on (a,b,c) then the index will be used only if the query is on
– a, or (a,b), or (a,b,c).
– The index will not be used if we query on (b,c) or on c alone.
– The query needs to involve a left subset (prefix) of the index.
– The command is below.
– db.collection.ensureIndex({'property': 1});
– e.g. db.students.ensureIndex({'class': 1, 'student_name': 1})
– Note: 1 or -1 is for ascending vs. descending order, which becomes useful when we have a lot of sort queries.
– By default all the indexes are built in the foreground i.e. all the writers will be blocked while the index is being created. It is fast but the database is blocked.
– Background builds are slower, but fit for production use.
– Creating the database index in the background also blocks the current shell while it is being created.
– All indexes are Btree indexes.
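
A quick shell sketch of the left-prefix rule, using a hypothetical collection foo:

db.foo.ensureIndex({ a: 1, b: 1, c: 1 })
db.foo.find({ a: 5 }).explain().cursor         // "BtreeCursor a_1_b_1_c_1"
db.foo.find({ a: 5, b: 2 }).explain().cursor   // index used again
db.foo.find({ b: 2 }).explain().cursor         // "BasicCursor", i.e. a full scan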

Index Creation option , Unique , Removing Duplicates

– Unique indexes ensure that the key is unique in the collection.
– e.g. db.students.ensureIndex({'student_id': 1, 'class_id': 1}, {'unique': true})
– To remove the duplicates while creating the index we can do the following :
– Provide dropDups: true along with the unique attribute. This is dangerous, so handle with care. There is no way to control which documents it will remove: it keeps a single document, and we can’t predict or configure which one it will be.
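
In shell form (again, handle with care):

db.students.ensureIndex({ student_id: 1, class_id: 1 }, { unique: true, dropDups: true })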

MultiKey Index
– If the indexed key holds an array, an index entry is created for each item in the array.
– You can't have two array-valued keys in a single index; that would cause a Cartesian explosion of index entries.
– An index only becomes multikey when we first insert a document with an array value for the key.
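
A sketch with a hypothetical collection:

db.students.ensureIndex({ tags: 1 })
db.students.insert({ name: 'Ada', tags: ['math', 'cs'] })
// the index is now multikey: one entry per element of tags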

Index Efficiency
– $gt and $lt will use the index, but the efficiency may not be there since the selectivity could be very low.
The same goes for $ne, etc.

Index Size
– Indexes should be kept in memory.
– db.collection.stats() to get the stats on the collection.
– db.collection.totalIndexSize() to get the total size of the collection's indexes.

Final thoughts
– Create any and all indexes that are required for your queries.
– Ensure that the indexes fit in memory; reading from the disk is bad.
– Sorting should also use Indexes.
– High selectivity should be the prime consideration when deciding about indexes.

MongoDB…Ops Stuff

In the previous two posts we have seen some basic querying and how to leverage the querying mechanism to get up and running. Now we are off in the wild world and we also need to do some more complicated stuff.

Creating Indexes
The API surface is really smooth here, allowing us to specify the sort order of the indexes and whether they are built in the foreground or the background.

public void CreateIndex()
{
    var QuestionConnectionHandler = new MongoConnectionHandler<Question>("MongoDBDemo");
    QuestionConnectionHandler.MongoCollection.EnsureIndex(
        IndexKeys.Ascending("Difficulty"), IndexOptions.SetBackground(true));
}

Dropping indexes is also easy with

QuestionConnectionHandler.MongoCollection.DropAllIndexes();

What happens when you want to see what is going on under the hood? You let the database Explain its Plan.

QuestionConnectionHandler.MongoCollection.AsQueryable()
                  .Where(q => q.Difficulty >= 3).Explain();
//or if you went the other way 
var query = Query<Question>.GTE(q => q.Difficulty, 3);
var explainPlan = QuestionConnectionHandler.MongoCollection
                          .FindAs<Question>(query).Explain();

Now, if only we could have some stats about our database and the indexes. All wrapped in a nice syntax.

    var stats = QuestionConnectionHandler.MongoCollection.GetStats();
    Console.WriteLine("Namespace : {0}", stats.Namespace);
    Console.WriteLine("DataSize : {0}", stats.DataSize);
    Console.WriteLine("Index Count : {0}", stats.IndexCount);
    stats.IndexSizes.Keys.ToList().ForEach(Console.WriteLine);
    var size = QuestionConnectionHandler.MongoCollection.GetTotalDataSize();
    Console.WriteLine("The total datasize for this collection is {0}", size);

A routine task is to get all the collections in a database and all the databases on the server itself. Easy peasy!!

    var collections = QuestionConnectionHandler.MongoCollection.Database.GetCollectionNames();
    Console.WriteLine("\nThe following collections are present in the database");
    collections.ToList().ForEach(Console.WriteLine);
    var client = new MongoClient(@"mongodb://localhost");
    var server = client.GetServer();
    var databases = server.GetDatabaseNames().ToList();
    Console.WriteLine("\nAll the databases in the server");
    databases.ForEach(Console.WriteLine);

MongoDB C# Driver Part 3

In the previous post we saw some simple queries. It is time we move on to something more concrete and realistic. There are basically two ways of querying MongoDB with the driver. The first, as I showed last time, is using LINQ. To use LINQ we need to first move into the Queryable world and then proceed with the actual querying.
Be careful about pulling all the documents locally and then performing operations on them. What we really want to do is offload all our querying to MongoDB and then only use the results. The driver implements the IQueryable interface and hence we should use it.

	var result = UserConnectionHandler.MongoCollection.AsQueryable()
                                      .Where(u => u.Reputation > reputation);

The alternative to this form of querying is using a BsonDocument and a MongoQuery. The way to build up such a query is below. Note that the first parameter (a lambda) selects the property and the second parameter is the value used for the filtering. The query builder is in the MongoDB.Driver.Builders namespace.

var query = Query<User>.GT(u => u.Reputation, reputation);
var result = UserConnectionHandler.MongoCollection.FindAs<User>(query);

It is a bit more work to specify queries by hand so I would prefer LINQ, but both options are available.
Another interesting querying mechanism is regex. It was kind of hard to locate in the API (or maybe I just didn’t know where to look). It is present in the MongoDB.Bson namespace.

public void UserNameStartsWith(string searchKey)
{
    var query = Query.Matches("Name", new BsonRegularExpression(string.Format("^{0}", searchKey)));
    var result = UserConnectionHandler.MongoCollection.Find(query);
    Console.WriteLine("We found {0} Users whose name starts with {1}", result.Count(), searchKey);
}

Select does not result in fewer fields being returned from the server: the entire document is pulled back and passed to the native Select method, so the projection is performed client side. We should use the IQueryable implementation from the MongoDB.Driver.Linq namespace. Alternatively, there is SetFields(), which is available to selectively bring fields back from the database.

var query = Query.Matches("Name", new BsonRegularExpression(string.Format("^{0}", searchKey)));
var result = UserConnectionHandler.MongoCollection.Find(query)
     			.SetFields(Fields<User>.Include(u => u.Name, u => u.Reputation));

MongoDB C# Driver Part 2

Having done the initial work for talking to MongoDB, we can now create some POCO classes and then do some querying on top of them. As usual, my model is Questions and Users.

public class Question : MongoEntity
    {
        public string Text { get; set; }
        public string Answer { get; set; }
        public DateTime CreatedOn { get; set; }
        public int Difficulty { get; set; }
    }
public class User : MongoEntity
    {
        public string Name { get; set; }
        public int Reputation { get; set; }
    }

Finally, we get down to some real stuff.

public class SimpleQueries
    {
        protected readonly MongoConnectionHandler<User> UserConnectionHandler;
        protected readonly MongoConnectionHandler<Question> QuestionConnectionHandler;

        public SimpleQueries()
        {
            UserConnectionHandler = new MongoConnectionHandler<User>("MongoDBDemo");
            QuestionConnectionHandler = new MongoConnectionHandler<Question>("MongoDBDemo");
        }

        public void CreateQuestion(Question question)
        {
            //// Save the entity with safe mode (WriteConcern.Acknowledged)
            var result = QuestionConnectionHandler.MongoCollection.Save<Question>(question, 
                                 new MongoInsertOptions { WriteConcern = WriteConcern.Acknowledged});

            if (!result.Ok)
            {
                Console.WriteLine(result.LastErrorMessage);
            }
            else
            {
                Console.WriteLine("Insertion was successful");
            }
        }

        public void CreateUser(User user)
        {
            //// Save the entity with safe mode (WriteConcern.Acknowledged)
            var result = UserConnectionHandler.MongoCollection.Save<User>(user, 
                              new MongoInsertOptions { WriteConcern = WriteConcern.Acknowledged });

            if (!result.Ok)
            {
                Console.WriteLine(result.LastErrorMessage);
            }
            else
            {
                Console.WriteLine("Insertion was successful");
            }
        }

        public void GetAllQuestions()
        {
            var cursor = QuestionConnectionHandler.MongoCollection.AsQueryable();
            var resultSet = cursor.ToList();

            Console.WriteLine("Writing out all the questions");
            foreach (var result in resultSet)
            {
                Console.WriteLine("Text : {0},  Answer : {1}", result.Text, result.Answer);
            }
        }

        public ObjectId GetOneQuestion()
        {
            var cursor = QuestionConnectionHandler.MongoCollection.AsQueryable().FirstOrDefault();

            Console.WriteLine(cursor.Id);
            return cursor.Id;
        }

        public void DeleteQuestion(ObjectId id)
        {
            var result = QuestionConnectionHandler.MongoCollection.Remove(
                Query<Question>.EQ(e => e.Id, id), RemoveFlags.None, WriteConcern.Acknowledged);

            if (!result.Ok)
            {
                Console.WriteLine(result.ErrorMessage);
            }
            else
            {
                Console.WriteLine("Delete Operation OK : {0}", result.Ok);
            }
        }
    }

Now, that we have some capabilities in our application, we can query away.

//Seed Data
var question = new Question { Text = "Who are you ?", Answer = "I am MongoDB.",
                              CreatedOn = DateTime.Now, Difficulty = 3 };
var user = new User {Name = "Ashutosh", Reputation = 100};
var queries = new SimpleQueries();
queries.CreateQuestion(question);
queries.CreateUser(user);
queries.GetAllQuestions();
var id = queries.GetOneQuestion();
queries.DeleteQuestion(id);

If all is well then you will see some output and the world will be a better place.