Review : C# In Depth

I have read the second edition as well. The third edition carries on from there and provides a deeper look at C# 5 and its key feature, async/await. If you want to understand what goes on behind the scenes, this one is for you. Jon Skeet (yes, the famous Jon Skeet) has managed to come out with a great book once again.

Jon starts easy on this one, but doesn’t hold back. The prose is lucid yet well paced. It is one of the few books that make the effort to take the reader on a journey: it begins with C# 1 and continues through to C# 5. Hardly any chapter drags on any more than it should. LINQ is covered in great depth and is nicely written, with enough diagrams to visualize what is happening under the hood. Generics is one of my personal favourites. The text is nice, and so are the code samples. I had never really understood generics completely, but the book does make things a lot clearer. The treatment of dynamic is really nice; it goes into the heart of the DLR and shows everything you need to know to really understand dynamic.

The book really shines when you move to async/await. Jon makes a rather tough concept easier. I had to read it several times, but each time the concept became clearer. I only wish the book moved away from the download-a-web-page example for async. The book does spend time on the compiler transformations that are behind async/await. Be patient when you read it; it will take time to sink in.

The only part that has been left out of the third edition is the chapter on Code Contracts. Jon clearly mentions that the topic hasn’t gained as much traction as he hoped. It may come back in a future edition though. As many have mentioned before, this is not a book for beginners. Use it to become a better C# programmer after you have written C# for some time. It is a must-have in your collection, especially if you work with C# day in and day out. It will give you a greater understanding of how the language designers wanted you to think, and of what makes C# a real joy.
A 4.5/5 for this one, keeping the 0.5 for the next edition :).

P.S. Even the appendix is handy.

Disclosure : I got a free copy of the book to review. The review is my own opinion and not influenced by anyone else.

MongoDB - Understanding your queries through Explain Plan

Understanding the queries we write is critical, and MongoDB does a good job here. Developers will find it easy to understand what their queries are doing and where to look for bottlenecks. Well-defined and well-documented parameters make life a lot easier.
The details below have been taken from the MongoDB website and are presented here for continuity of the series.
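
Before going through the fields, here is a rough sketch of how to get hold of the raw explain document from the C# driver; the database, collection and field names below are just placeholders, and the same information is what db.collection.find(...).explain() prints in the shell.

// Sketch: fetch the explain plan for a query with the legacy C# driver.
// Requires the MongoDB.Driver, MongoDB.Driver.Builders and MongoDB.Bson namespaces.
var client = new MongoClient(@"mongodb://localhost");
var database = client.GetServer().GetDatabase("MongoDBDemo");
var questions = database.GetCollection("questions");

var query = Query.GTE("Difficulty", 3);
BsonDocument plan = questions.Find(query).Explain();

// A few of the fields described below.
Console.WriteLine("cursor   : {0}", plan["cursor"]);
Console.WriteLine("n        : {0}", plan["n"]);
Console.WriteLine("nscanned : {0}", plan["nscanned"]);
Console.WriteLine("millis   : {0}", plan["millis"]);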

Explain Output Fields

explain.cursor
cursor is a string that reports the type of cursor used by the query operation:

BasicCursor indicates a full collection scan.
BtreeCursor indicates that the query used an index. The cursor includes the name of the index. When a query uses an index, the output of explain() includes indexBounds details.
GeoSearchCursor indicates that the query used a geospatial index.
explain.isMultiKey
isMultiKey is a boolean. When true, the query uses a multikey index, where one of the fields in the index holds an array.

explain.n
n is a number that reflects the number of documents that match the query selection criteria.

explain.nscannedObjects
Specifies the total number of documents scanned during the query. The nscannedObjects may be lower than nscanned, such as if the index covers a query. See indexOnly. Additionally, nscannedObjects may be lower than nscanned in the case of a multikey index on an array field with duplicate documents.

explain.nscanned
Specifies the total number of documents or index entries scanned during the database operation. You want n and nscanned to be as close in value as possible. The nscanned value may be higher than the nscannedObjects value, such as if the index covers a query. See indexOnly.

explain.nscannedObjectsAllPlans
nscannedObjectsAllPlans is a number that reflects the total number of documents scanned for all query plans during the database operation.

explain.nscannedAllPlans
nscannedAllPlans is a number that reflects the total number of documents or index entries scanned for all query plans during the database operation.

explain.scanAndOrder
scanAndOrder is a boolean that is true when the query cannot use the index for returning sorted results.

When true, MongoDB must sort the documents after it retrieves them from either an index cursor or a cursor that scans the entire collection.

explain.indexOnly
indexOnly is a boolean value that returns true when the query is covered by the index indicated in the cursor field. When an index covers a query, MongoDB can both match the query conditions and return the results using only the index because:

all the fields in the query are part of that index, and
all the fields returned in the results set are in the same index.
explain.nYields
nYields is a number that reflects the number of times this query yielded the read lock to allow waiting writes to execute.

explain.nChunkSkips
nChunkSkips is a number that reflects the number of documents skipped because of active chunk migrations in a sharded system. Typically this will be zero. A number greater than zero is ok, but indicates a little bit of inefficiency.

explain.millis
millis is a number that reflects the time in milliseconds to complete the query.

explain.indexBounds
indexBounds is a document that contains the lower and upper index key bounds. This field resembles one of the following:

"indexBounds" : {
    "start" : { <index key1> : <value>, ... },
    "end"   : { <index key1> : <value>, ... }
},

"indexBounds" : { "<field>" : [ [ <lower bound>, <upper bound> ] ],
                  ... }
explain.allPlans
allPlans is an array that holds the list of plans the query optimizer runs in order to select the index for the query. Displays only when the parameter to explain() is true or 1.

explain.oldPlan
oldPlan is a document value that contains the previous plan selected by the query optimizer for the query. Displays only when the parameter to explain() is true or 1.

explain.server
server is a string that reports the MongoDB server.

$or Query Output Fields
explain.clauses
clauses is an array that holds the Core Explain Output Fields information for each clause of the $or expression. clauses is only included when the clauses in the $or expression use indexes.

Sharded Collections Output Fields
explain.clusteredType
clusteredType is a string that reports the access pattern for shards. The value is:

ParallelSort, if the mongos queries shards in parallel.
SerialServer, if the mongos queries shards sequentially.
explain.shards
shards contains fields for each shard in the cluster accessed during the query. Each field holds the Core Explain Output Fields for that shard.

explain.millisShardTotal
millisShardTotal is a number that reports the total time in milliseconds for the query to run on the shards.

explain.millisShardAvg
millisShardAvg is a number that reports the average time in milliseconds for the query to run on each shard.

explain.numQueries
numQueries is a number that reports the total number of queries executed.

explain.numShards
numShards is a number that reports the total number of shards queried.

Lastly, some more profiling tips that can be pretty useful.

Logging Slow Queries

  • The profiler has 3 levels: 0 (off), 1 (slow operations only), 2 (all operations).
  • Enable it at startup with mongod --profile 1 --slowms 2
  • db.getProfilingLevel() returns the current profiling level.
  • To get all queries that took longer than 3 seconds, ordered by timestamp descending (a C# driver equivalent is sketched after this list):
  • db.system.profile.find({millis:{$gt:3000}}).sort({ts:-1})
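
For completeness, a sketch of the same slow-query lookup through the C# driver; the system.profile collection and the millis and ts fields come straight from the profiler, while the database name is a placeholder.

// Sketch: read the profiler output (system.profile) for slow operations.
// Requires MongoDB.Driver and MongoDB.Driver.Builders.
var client = new MongoClient(@"mongodb://localhost");
var database = client.GetServer().GetDatabase("MongoDBDemo");

// system.profile is an ordinary (capped) collection, so it can be queried like any other.
var profile = database.GetCollection("system.profile");
var slowOps = profile.Find(Query.GT("millis", 3000))
                     .SetSortOrder(SortBy.Descending("ts"));

foreach (var op in slowOps)
{
    Console.WriteLine(op);
}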

Mongotop can be set to run at a specific time interval to determine where the majority of the time is spent. This utility tracks the time that mongod spends on reads and writes, reported on a per-collection basis.
I prefer to increase the reporting interval from the default 1 second to somewhere closer to 5 seconds or more (this is just a number that I feel comfortable with; otherwise the data is too verbose to make any sense of).

This should be a good start. We will go into some more concepts about monitoring MongoDB before getting into sharding.

MongoDB : Indexing

Indexing is one of the most important concepts for any database. Without indexes, the mongod process would scan the entire collection and all the documents it contains to produce the result of a query. Indexes are defined on collections, and both fields and sub-fields are supported. Briefly, MongoDB supports the following types of indexes:

  • Single Field Indexes: think of an index on a single column in an RDBMS.
  • Compound Indexes: think of an index on multiple columns in an RDBMS.
  • Multikey Indexes: unique to MongoDB; it indexes the contents of an array field and matches if any value in the array matches.
  • Geospatial Indexes and Queries: allow you to index geo data. I really don’t know enough about this to comment; the MongoDB website is the best source.
  • Text Indexes: for full text search inside a document. Should we use Lucene instead? Not sure.
  • Hashed Indexes: index on the hashed contents of a field.

Indexes have properties associated with them (a quick C# driver sketch follows this list):

  • TTL: this is a rather surprising feature. I never expected such a feature to be available on indexes, but after giving it some thought, it makes sense to let an index expire documents after a period of time.
  • Unique: only documents with unique values for the indexed field are permitted.
  • Sparse: really useful if you are going to have sparse fields. It leaves out the documents that do not have the indexed field.
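
Here is a rough C# driver sketch of these three properties via the index options builder; the events collection, its fields and the TTL span are made up for illustration.

// Sketch: TTL, unique and sparse index options via IndexOptionsBuilder.
// Requires MongoDB.Driver and MongoDB.Driver.Builders; database obtained as usual.
var database = new MongoClient(@"mongodb://localhost").GetServer().GetDatabase("MongoDBDemo");
var events = database.GetCollection("events");

// TTL: documents expire roughly 30 minutes after the date stored in CreatedOn.
events.EnsureIndex(IndexKeys.Ascending("CreatedOn"),
                   IndexOptions.SetTimeToLive(TimeSpan.FromMinutes(30)));

// Unique + sparse: documents without a Nickname are left out of the index,
// and duplicate Nickname values are rejected.
events.EnsureIndex(IndexKeys.Ascending("Nickname"),
                   IndexOptions.SetUnique(true).SetSparse(true));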

Some more tidbits (a concrete sketch follows this list):
- The order of the fields in an index matters.
- e.g. if an index is created on (a,b,c), the index will be used only if the query is on a, or on a,b, or on a,b,c.
- The index will not be used if we query on b,c or on c alone.
- The query needs to hit a left prefix of the index.
- The command is db.collection.ensureIndex({'property': 1});
- e.g. db.students.ensureIndex({'class': 1, 'student_name': 1})
- Note: 1 or -1 is for ascending v/s descending, which becomes useful when we have a lot of sort queries.
- By default all indexes are built in the foreground, i.e. all writers are blocked while the index is being created. It is fast, but the database is blocked.
- Background builds are slower but fit for production use.
- Creating an index in the background still blocks the shell session that issued the command until the build finishes.
- All indexes are B-tree indexes.
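
To make the left-prefix rule concrete, a small sketch using the same class/student_name index as above; the values are made up, and the cursor names assume the explain output described earlier in this series.

// Sketch: compound index and the left-prefix rule, verified via explain().
// database obtained as in the earlier sketches.
var students = database.GetCollection("students");
students.EnsureIndex(IndexKeys.Ascending("class", "student_name"));

// "class" is a left prefix of (class, student_name), so the index is used.
var prefixPlan = students.Find(Query.EQ("class", "5B")).Explain();
Console.WriteLine(prefixPlan["cursor"]);     // expect a BtreeCursor

// "student_name" alone is not a left prefix, so this falls back to a collection scan.
var noPrefixPlan = students.Find(Query.EQ("student_name", "Ashutosh")).Explain();
Console.WriteLine(noPrefixPlan["cursor"]);   // expect a BasicCursor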

Index Creation Options: Unique, Removing Duplicates

- Unique indexes ensure that the key is unique in the collection.
- e.g. db.students.ensureIndex({'student_id': 1, 'class_id': 1}, {'unique': true})
- To remove duplicates while creating the index, we can do the following:
- Provide dropDups: true along with the unique attribute. This is dangerous, so handle with care: there is no way to control which documents it will remove. It lets a single document per key live, and we can’t predict or configure which one that will be. A C# driver version is sketched below.
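
A C# driver version of the same thing, to be handled with the same care; collection and field names are as in the shell example above.

// Sketch: unique index that drops duplicate documents while it is being built.
// dropDups is destructive - we cannot choose which duplicate survives.
var students = database.GetCollection("students");  // database obtained as before
students.EnsureIndex(IndexKeys.Ascending("student_id", "class_id"),
                     IndexOptions.SetUnique(true).SetDropDups(true));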

MultiKey Index
- If the indexed key holds an array, an index entry is created for every item in the array.
- You can’t have two array-valued (multikey) fields in a single compound index; that would cause a combinatorial explosion of index entries.
- An index only becomes multikey once we first insert a document with an array value for that field. A short sketch follows.
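
A quick sketch of how an index turns multikey; the posts collection and its Tags field are hypothetical.

// Sketch: an index on an array-valued field becomes a multikey index.
// Requires MongoDB.Bson for BsonDocument/BsonArray; database obtained as before.
var posts = database.GetCollection("posts");
posts.EnsureIndex(IndexKeys.Ascending("Tags"));

// Only once a document with an array value for Tags is inserted
// does the index actually become multikey (isMultiKey in explain()).
posts.Insert(new BsonDocument
{
    { "Title", "Indexing in MongoDB" },
    { "Tags", new BsonArray { "mongodb", "indexes", "multikey" } }
});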

Index Efficiency
- $gt and $lt will use the index, but the efficiency may not be there since the selectivity could be very low.
- Similar for $ne etc.

Index Size
- Indexes should be kept in memory.
- db.collection.stats() to get the stats on the collection.
- db.collection.totalIndexSize() to get the total size of the indexes on the collection.

Final thoughts
- Create any and all indexes that are required by your queries.
- Ensure that the indexes fit in memory; reading from disk is bad.
- Sorting should also use indexes.
- High selectivity should be the prime consideration when deciding on indexes.

MongoDB…Ops Stuff

In the previous two posts we have seen some basic querying and how to leverage the querying mechanism to get up and running. Now we are off in the wild world and need to do some more complicated stuff.

Creating Indexes
The API surface is really smooth here, allowing us to specify the sort order of the index keys and the manner of building them (foreground or background).

public void CreateIndex()
{
    var QuestionConnectionHandler = new MongoConnectionHandler<Question>("MongoDBDemo");
    QuestionConnectionHandler.MongoCollection.EnsureIndex(
        IndexKeys.Ascending("Difficulty"), IndexOptions.SetBackground(true));
}

Dropping indexes is also easy with

QuestionConnectionHandler.MongoCollection.DropAllIndexes();

What happens when you want to see what is going on under the hood? You let the database explain its plan.

QuestionConnectionHandler.MongoCollection.AsQueryable()
                  .Where(q => q.Difficulty >= 3).Explain();
//or if you went the other way 
var query = Query<Question>.GTE(q => q.Difficulty, 3);
var explainPlan = QuestionConnectionHandler.MongoCollection
                          .FindAs<Question>(query).Explain();

Now, if only we could get some stats about our collection and its indexes, all wrapped in a nice syntax.

    var stats = QuestionConnectionHandler.MongoCollection.GetStats();
    Console.WriteLine("Namespace : {0}", stats.Namespace);
    Console.WriteLine("DataSize : {0}", stats.DataSize);
    Console.WriteLine("Index Count : {0}", stats.IndexCount);
    stats.IndexSizes.Keys.ToList().ForEach(Console.WriteLine);
    var size = QuestionConnectionHandler.MongoCollection.GetTotalDataSize();
    Console.WriteLine("The total datasize for this collection is {0}", size);

A routine task is to get all the collections in a database and all the databases on the server itself. Easy peasy!!

    var collections = QuestionConnectionHandler.MongoCollection.Database.GetCollectionNames().ToList();
    Console.WriteLine("\nThe following collections are present in the database");
    collections.ForEach(Console.WriteLine);
    var client = new MongoClient(@"mongodb://localhost");
    var server = client.GetServer();
    var databases = server.GetDatabaseNames().ToList();
    Console.WriteLine("\nAll the databases in the server");
    databases.ForEach(Console.WriteLine);

MongoDB C# Driver Part 3

In the previous post we saw some simple queries. It is time to move on to something more concrete and realistic. There are basically two ways of querying MongoDB with the driver. The first, as I showed last time, is LINQ. To use LINQ we need to first move into the queryable world and then proceed with the actual querying.
Be careful about pulling all the documents locally and then performing operations on them. What we really want is to offload the querying to MongoDB and only work with the results. The driver implements the IQueryable interface, and hence we should use it.

	var result = UserConnectionHandler.MongoCollection.AsQueryable()
                                      .Where(u => u.Reputation > reputation);

The alternative to this form of querying is building up an IMongoQuery with the query builder, as shown below. Note that the first argument is a lambda selecting the property and the second is the value to filter against. The query builder lives in the MongoDB.Driver.Builders namespace.

var query = Query<User>.GT(u => u.Reputation, reputation);
var result = UserConnectionHandler.MongoCollection.FindAs<User>(query);

It is a bit more work to specify queries by hand, so I prefer LINQ, but both options are available.
Another interesting querying mechanism is regex. It was kind of hard to locate in the API (or maybe I just didn’t know where to look); it lives in the MongoDB.Bson namespace as BsonRegularExpression.

public void UserNameStartsWith(string searchKey)
{
    var query = Query.Matches("Name", new BsonRegularExpression(string.Format("^{0}", searchKey)));
    var result = UserConnectionHandler.MongoCollection.Find(query);
    Console.WriteLine("We found {0} Users whose name starts with {1}", result.Count(), searchKey);
}

Select does not result in fewer fields being returned from the server; the entire document is pulled back and passed to the native Select method, so the projection is performed client side. Make sure you are on the IQueryable implementation from the MongoDB.Driver.Linq namespace. Alternatively, SetFields() is available to selectively bring fields back from the database.

var query = Query.Matches("Name", new BsonRegularExpression(string.Format("^{0}", searchKey)));
var result = UserConnectionHandler.MongoCollection.Find(query)
                 .SetFields(Fields<User>.Include(u => u.Name, u => u.Reputation));

MongoDB C# Driver Part 2

Having done the initial work for talking to MongoDB, we can now create some POCO classes and do some querying on top of them. As usual, my model is Questions and Users.

public class Question : MongoEntity
{
    public string Text { get; set; }
    public string Answer { get; set; }
    public DateTime CreatedOn { get; set; }
    public int Difficulty { get; set; }
}

public class User : MongoEntity
{
    public string Name { get; set; }
    public int Reputation { get; set; }
}

Finally, we get down to some real stuff.

public class SimpleQueries
    {
        protected readonly MongoConnectionHandler<User> UserConnectionHandler;
        protected readonly MongoConnectionHandler<Question> QuestionConnectionHandler;

        public SimpleQueries()
        {
            UserConnectionHandler = new MongoConnectionHandler<User>("MongoDBDemo");
            QuestionConnectionHandler = new MongoConnectionHandler<Question>("MongoDBDemo");
        }

        public void CreateQuestion(Question question)
        {
            //// Save the entity with safe mode (WriteConcern.Acknowledged)
            var result = QuestionConnectionHandler.MongoCollection.Save<Question>(question, 
                                 new MongoInsertOptions { WriteConcern = WriteConcern.Acknowledged});

            if (!result.Ok)
            {
                Console.WriteLine(result.LastErrorMessage);
            }
            else
            {
                Console.WriteLine("Insertion was successful");
            }
        }

        public void CreateUser(User user)
        {
            //// Save the entity with safe mode (WriteConcern.Acknowledged)
            var result = UserConnectionHandler.MongoCollection.Save<User>(user, 
                              new MongoInsertOptions { WriteConcern = WriteConcern.Acknowledged });

            if (!result.Ok)
            {
                Console.WriteLine(result.LastErrorMessage);
            }
            else
            {
                Console.WriteLine("Insertion was successful");
            }
        }

        public void GetAllQuestions()
        {
            var cursor = QuestionConnectionHandler.MongoCollection.AsQueryable();
            var resultSet = cursor.ToList();

            Console.WriteLine("Writing out all the questions");
            foreach (var result in resultSet)
            {
                Console.WriteLine("Text : {0},  Answer : {1}", result.Text, result.Answer);
            }
        }

        public ObjectId GetOneQuestion()
        {
            var question = QuestionConnectionHandler.MongoCollection.AsQueryable().FirstOrDefault();

            Console.WriteLine(question.Id);
            return question.Id;
        }

        public void DeleteQuestion(ObjectId id)
        {
            var result = QuestionConnectionHandler.MongoCollection.Remove(
                Query<Question>.EQ(e => e.Id, id), RemoveFlags.None, WriteConcern.Acknowledged);

            if (!result.Ok)
            {
                Console.WriteLine(result.ErrorMessage);
            }
            else
            {
                Console.WriteLine("Delete Operation OK : {0}", result.Ok);
            }
        }
    }

Now that we have some capabilities in our application, we can query away.

//Seed Data
var question = new Question { Text = "Who are you ?", Answer = "I am MongoDB.",
                              CreatedOn = DateTime.Now, Difficulty = 3 };
var user = new User {Name = "Ashutosh", Reputation = 100};
var queries = new SimpleQueries();
queries.CreateQuestion(question);
queries.CreateUser(user);
queries.GetAllQuestions();
var id = queries.GetOneQuestion();
queries.DeleteQuestion(id);

If all is well then you will see some output and the world will be a better place.

MongoDB..C# Driver Part 1

There are several drivers available for C#. I do not plan to go through all of them here. Since the official driver now has LINQ support (although not complete yet), we will go with it.
Basic setup: get the package off NuGet. It should drop two DLLs in there:
1. MongoDB.Bson
2. MongoDB.Driver

We will get to what does what later. For now, assume that we only want to get some data in and out of MongoDB.
Let’s connect to MongoDB now (the code below is just quick and dirty; we will see a better version later).

//MongoDB should be running by now, and assuming you have inserted some documents in there
var client = new MongoClient(@"mongodb://localhost");
var server = client.GetServer();
var database = server.GetDatabase("YourDataBaseName");
var mongoCollection = database.GetCollection("SomeCollectionName");
//Getting all the documents
var documents = mongoCollection.AsQueryable().ToList();
documents.ForEach(Console.WriteLine);

It is time for some explanation.

What is MongoClient?
MongoClient is the standard way of accessing the driver. I have dabbled with the Python driver (pymongo) and it is the same there; I believe it was changed to keep the drivers for different languages in sync.
Reading up on this told me that the SafeMode settings were dropped in favour of WriteConcern, and that ReadPreference should be used instead of SlaveOK. These settings previously lived on MongoServerSettings; the new ones are on MongoClientSettings. The IPv6 setting is also on MongoClientSettings.
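
As a rough sketch, this is what those settings look like on MongoClientSettings; the values are placeholders, not recommendations.

// Sketch: WriteConcern, ReadPreference and IPv6 are configured on MongoClientSettings.
var settings = new MongoClientSettings
{
    Server = new MongoServerAddress("localhost"),
    WriteConcern = WriteConcern.Acknowledged,            // replaces the old SafeMode
    ReadPreference = ReadPreference.SecondaryPreferred,  // replaces SlaveOK
    IPv6 = false
};
var client = new MongoClient(settings);
var server = client.GetServer();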

What is MongoServer?
The server manages the life cycle of server proxies, gives access to databases and handles some of the connection management. More needs to be said about the server, but I will stop short for now.

Notice that I did not use the generic GetCollection here; the generic method is also available, and we will put it to use soon.

All documents in MongoDB have an Id of type ObjectId (ObjectId resides in MongoDB.Bson).
So we can have an interface that takes care of this, and subsequently all our types can implement it.

public interface IMongoEntity
{
    ObjectId Id { get; set; }
}

public class MongoEntity : IMongoEntity
{
    public ObjectId Id { get; set; }
}

Refinement: obtaining a better MongoConnectionHandler (you can do much better than what I will show you here, but that depends to a large extent on your taste).

public class MongoConnectionHandler<T> where T : IMongoEntity
{
    public MongoCollection<T> MongoCollection { get; private set; }
    private const string ConnectionString = @"mongodb://localhost";

    public MongoConnectionHandler(string databaseName)
    {
        var client = new MongoClient(ConnectionString);
        var server = client.GetServer();
        var database = server.GetDatabase(databaseName);
        MongoCollection = database.GetCollection<T>(typeof(T).Name.ToLower() + "s");
    }
}

Having the *databaseName* as a parameter is very subjective. If you only have a single database, you might want to just stick it directly in the method call; otherwise it increases the burden on the caller and spreads the database name all over the code base. Another option that comes to mind straightaway is an enum of database names, or keeping it in some sort of configuration file. The same goes for the ConnectionString: put it somewhere configurable. The collection names are plural, so you just need to stick an extra “s” on the lower-cased type name (this took me quite a while to figure out).
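
For instance, a minimal sketch of keeping the connection string in configuration rather than in the code; the "MongoDB" entry name is just an example, and this needs a reference to System.Configuration.

// app.config / web.config:
// <connectionStrings>
//   <add name="MongoDB" connectionString="mongodb://localhost" />
// </connectionStrings>
var connectionString =
    ConfigurationManager.ConnectionStrings["MongoDB"].ConnectionString;
var client = new MongoClient(connectionString);
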
So, we are set up nicely to go forward and do some more interesting work with MongoDB.