MongoDB : Sharding

Sharding

MongoDB sharding is based on a shard key.

keys k1 -> k2 on shard1
keys k2 -> k3 on shard2, etc.

Each shard is then replicated for higher availability, disaster recovery, and so on. Sharding is range based and is done on a per-collection basis. Range-based sharding also makes range queries efficient.

The documents on a particular shard are grouped into chunks: contiguous ranges of shard-key values, capped at about 64 MB each by default.
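To make the range idea concrete, here is a minimal sketch in plain JavaScript of how a router could map a shard-key value to a chunk. The shard names and chunk boundaries are made up for illustration; this is not mongos code.

```javascript
// Hypothetical chunk table: each chunk owns a half-open shard-key range
// [min, max) and lives on one shard. Names and boundaries are invented.
const chunks = [
  { min: -Infinity, max: 100,      shard: "shard1" },
  { min: 100,       max: 500,      shard: "shard2" },
  { min: 500,       max: Infinity, shard: "shard3" },
];

// Route a single shard-key value to the shard owning its chunk.
function routeKey(key) {
  return chunks.find(c => key >= c.min && key < c.max).shard;
}

// A range query only needs to touch shards whose chunks overlap the range.
function shardsForRange(lo, hi) {
  return [...new Set(
    chunks.filter(c => c.min < hi && lo < c.max).map(c => c.shard)
  )];
}

console.log(routeKey(42));            // shard1
console.log(shardsForRange(90, 120)); // [ 'shard1', 'shard2' ]
```

This is why range-based sharding helps range queries: only the shards whose chunks overlap the queried range are contacted.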

There are two operations that happen in the background in sharding, and both are done automatically for us.

  1. Split – splits a range when it is producing chunks that are too big; this is fairly inexpensive.
  2. Migrate – moves chunks to somewhere else in the cluster; this is the more expensive operation.
    • Between a given pair of shards there will be at most one migration in progress at a time.
    • We can still read and write the data while it is being migrated, so migration is live.
    • The "balancer" decides when to do the balancing. Today it balances on the number of chunks.

Both of these operations are done to keep the shards balanced with respect to the documents.
The metadata about the shards and the overall system is stored in config servers; these are lightweight.
Conceptually, shards are processes, not separate physical or virtual machines, although in practice they can be, and most likely will be, on separate machines.

Mongos gives the client the big picture of the whole setup. The client is therefore insulated from the underlying architecture used to implement sharding, replication, and so on.


End client applications should go through mongos.

To create a shard, connect to mongos, and then:

sh.addShard("hostname:port")
sh.enableSharding("dbname")
db.yourCollection.ensureIndex({ yourShardKey: 1 })   // index on the shard key pattern
sh.shardCollection("dbname.yourCollection", { yourShardKey: 1 })

Choosing a shard key

– The field should be involved in most of your queries
– Good cardinality/granularity
– The shard key should not increase monotonically
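Why the monotonic rule matters: here is a small simulation (plain JavaScript with made-up chunk boundaries) showing that with a monotonically increasing shard key, every new insert lands on the shard owning the top chunk.

```javascript
// Hypothetical chunk boundaries; the last chunk owns everything above 500.
const ranges = [
  { max: 100,      shard: "shard1" },
  { max: 500,      shard: "shard2" },
  { max: Infinity, shard: "shard3" },
];

function shardFor(key) {
  return ranges.find(r => key < r.max).shard;
}

// Monotonically increasing keys (think timestamps or default ObjectIds):
// every new key is larger than all existing boundaries.
const hits = { shard1: 0, shard2: 0, shard3: 0 };
for (let key = 1000; key < 2000; key++) {
  hits[shardFor(key)]++;
}
console.log(hits); // { shard1: 0, shard2: 0, shard3: 1000 } (a hot shard)
```

All 1000 inserts hit one shard, so the insert load never spreads out, which is exactly what a good shard key should avoid.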

MongoDB – Replication

Replication

Replication helps us achieve availability and fault tolerance. A replica set is a set of mongod nodes that replicate data amongst each other asynchronously. One node in the replica set is the primary while the rest are secondaries.
Writes only go to the primary. If the primary goes down, an election happens and a new primary comes up.
The minimum sensible number of nodes is 3, since an election requires a majority of the original set.
If there were only 2 nodes and one went down, the remaining one is not a majority and you would not be able to write.
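The majority rule is simple arithmetic; a quick sketch:

```javascript
// A replica set can elect a primary only if a strict majority of the
// original members can still see each other.
function canElectPrimary(totalMembers, survivingMembers) {
  return survivingMembers > totalMembers / 2;
}

console.log(canElectPrimary(3, 2)); // true: 2 of 3 is a majority
console.log(canElectPrimary(2, 1)); // false: 1 of 2 is not, so no writes
```

This is why 3 is the practical minimum: a 3-node set survives one failure and keeps accepting writes, while a 2-node set does not.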

Replica Set Elections

Regular Node : It has the data and can become primary or secondary.
Arbiter : It is there just for voting purposes; we need it if we want an even number of data-bearing nodes.
It has no data on it.
Delayed Node : A regular node whose data is set to lag a few hours behind the other nodes. It cannot become primary; its priority is set to 0.
Hidden Node : It cannot become a primary node. Its priority is set to 0.

By default both reads and writes go to the primary. You can go to a secondary for reading, which means you might read stale data; the lag between nodes is not guaranteed, since replication is asynchronous. If you read from a secondary, what you get is "eventual consistency".

rs.slaveOk() -- ok to read from the secondary
rs.isMaster() -- checks whether the node is master
rs.status() -- gives the current status of the replica set.

In the local database, the collection oplog.rs holds the operation log. Replication happens when secondary nodes query the primary for the changes since a given timestamp. The oplog is a statement-based replication log; replication is statement driven, not binary.
e.g. if a statement deletes 100 documents on the primary, then 100 individual statements are sent to the secondaries to execute. Because there is no binary replication, this even allows us to run different versions of MongoDB on different machines.
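A toy model of this statement-driven behavior (not the real oplog format, just an illustration of one entry per affected document):

```javascript
// Toy statement-based replication: one multi-document delete on the primary
// is recorded as one oplog entry per affected document.
function deleteMatching(primaryDocs, predicate, oplog) {
  const kept = [];
  for (const doc of primaryDocs) {
    if (predicate(doc)) {
      // Each removed document gets its own idempotent "delete by _id" entry.
      oplog.push({ op: "d", _id: doc._id });
    } else {
      kept.push(doc);
    }
  }
  return kept;
}

// A secondary replays the entries one by one against its own copy.
function applyOplog(secondaryDocs, oplog) {
  const dead = new Set(oplog.filter(e => e.op === "d").map(e => e._id));
  return secondaryDocs.filter(doc => !dead.has(doc._id));
}

const docs = Array.from({ length: 100 }, (_, i) => ({ _id: i, old: true }));
const oplog = [];
const primary = deleteMatching(docs, d => d.old, oplog);
console.log(oplog.length); // 100 entries for 100 deleted documents
```

Because each entry names a concrete _id rather than re-running the original predicate, replaying it on a secondary gives the same result regardless of timing.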

  • On 64-bit systems the oplog defaults to a large value, so consider sizing it explicitly.
  • For replica set configuration, don't use localhost or the IP address of the machine.
  • Use a logical name; that is the best practice.
  • Use DNS.
  • Pick appropriate DNS TTLs as well.

MongoDB : Indexing

Indexing is one of the most important concepts for any database. Without indexes, the mongod process would scan the entire collection and all the documents it contains to answer a query. Indexes are defined per collection, and both top-level properties and sub-fields are supported. Briefly, MongoDB supports the following types of indexes:

  • Single Field Indexes : Think of an index on a single column in an RDBMS.
  • Compound Indexes : Think of an index on multiple columns in an RDBMS.
  • Multikey Indexes : This is unique to MongoDB; the index references an array, and a query matches if any value in the array matches.
  • Geospatial Indexes and Queries : Allow you to index geo data. I really don't know enough about this to comment; the MongoDB website is the best source.
  • Text Indexes : For full-text search inside a document. Should we use Lucene instead? Not sure.
  • Hashed Indexes : Index on the hashed contents of a field.

Indexes have properties associated with them :

  • TTL: This is a rather surprising feature. I never expected it to be available on indexes, but after giving it some thought, it makes sense: the index is used to automatically expire documents after a given time.
  • Unique: Only documents with unique values on the field are permitted.
  • Sparse: Really useful if you are going to have sparse fields. It leaves out the documents that do not have the field.
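A toy model of the sparse behavior (not MongoDB's actual index structure): documents missing the indexed field simply get no index entry.

```javascript
// Toy sparse index: only documents that contain the field are indexed.
function buildSparseIndex(docs, field) {
  const index = new Map();
  for (const doc of docs) {
    if (field in doc) {
      index.set(doc[field], doc._id); // assume unique values for simplicity
    }
  }
  return index;
}

const docs = [
  { _id: 1, nickname: "ash" },
  { _id: 2 },                  // no nickname field: no index entry
  { _id: 3, nickname: "bob" },
];
const idx = buildSparseIndex(docs, "nickname");
console.log(idx.size); // 2: document 2 is not in the index
```

The flip side is that queries answered purely from a sparse index cannot "see" documents that lack the field.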

Some more tidbits
– The order of fields in an index matters.
  e.g. if an index is created on (a,b,c), then the index will be used only if the query is on a, or a,b, or a,b,c.
– The index will not be used if we query on b,c or on c alone.
– The queried fields need to be a left prefix of the index.
– The command is below.
  db.collection.ensureIndex({'property': 1});
  e.g. db.students.ensureIndex({'class': 1, 'student_name': 1})
– Note: 1 or -1 selects ascending vs. descending order, which becomes useful when we have a lot of sort queries.
– By default all indexes are built in the foreground, i.e. all writers are blocked while the index is being created. It is fast, but the database is blocked.
– Background builds are slower but fit for production use.
– Creating an index in the background still blocks the shell from which it was issued while the index is being created.
– All indexes are B-tree indexes.
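The left-prefix rule above can be sketched as a simple check (a simplification: the real query planner also considers sorts, ranges, and more):

```javascript
// Can a query on `queryFields` use a compound index on `indexFields`?
// Usable iff the queried fields form a left prefix of the index.
function canUseIndex(indexFields, queryFields) {
  const queried = new Set(queryFields);
  // Count how long a prefix of the index is fully covered by the query.
  let covered = 0;
  for (const field of indexFields) {
    if (queried.has(field)) covered++;
    else break;
  }
  // Usable iff every queried field sits inside that covered prefix.
  return covered === queried.size;
}

const index = ["a", "b", "c"];
console.log(canUseIndex(index, ["a"]));      // true
console.log(canUseIndex(index, ["a", "b"])); // true
console.log(canUseIndex(index, ["b", "c"])); // false: not a left prefix
```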

Index Creation Options: Unique, Removing Duplicates

– Unique indexes ensure that the key is unique in the collection.
  e.g. db.students.ensureIndex({'student_id': 1, 'class_id': 1}, {'unique': true})
– To remove duplicates while creating the index, provide dropDups: true along with the unique attribute. This is dangerous, so handle with care: there is no way to control which documents it will remove. It lets a single document per key live, and we can't predict or configure which one it will be.
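A toy illustration of why dropDups is dangerous: whichever document happens to be scanned first survives, and the rest are gone.

```javascript
// Toy dropDups: building a unique index while discarding duplicates keeps
// whichever document is seen first; you cannot choose which one survives.
function dropDups(docs, field) {
  const seen = new Set();
  const survivors = [];
  for (const doc of docs) {
    if (!seen.has(doc[field])) {
      seen.add(doc[field]);
      survivors.push(doc);
    }
    // Duplicates are silently deleted; their other fields are lost.
  }
  return survivors;
}

const students = [
  { _id: 1, student_id: 7, grade: "A" },
  { _id: 2, student_id: 7, grade: "F" }, // deleted; its data is gone for good
  { _id: 3, student_id: 9, grade: "B" },
];
console.log(dropDups(students, "student_id").length); // 2
```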

MultiKey Index
– Indexing a field whose value is an array creates an index entry for every item in the array.
– You can't have two array fields in a single compound index; that would cause a polynomial explosion of index entries.
– An index only becomes multikey when we first insert a document with an array value into the collection.

Index Efficiency
– $gt and $lt will use the index, but it may not be efficient since the selectivity could be very low. The same applies to $ne, etc.

Index Size
– Indexes must be kept in memory.
– db.collection.stats() gives the stats on the collection.
– db.collection.totalIndexSize() gives the total size of the collection's indexes.

Final thoughts
– Create any and all indexes that are required for your queries.
– Ensure that the indexes fit in memory; reading from disk is bad.
– Sorting should also use indexes.
– High selectivity should be the prime consideration when deciding on indexes.

MongoDB : Let's get it started

MongoDB is a document database, and I won't bore you with details about what it can do. MongoDB makes many claims; they are
here. In short, MongoDB is a document database that stores data in a JSON format and then promises to make our lives easier. Download and installation instructions vary depending on your OS, etc., and both 32-bit and 64-bit builds are available.

The shell for MongoDB is a JavaScript-based one, so if you know a bit of JS you will fly in there. Start mongod from the command line; this can be done by navigating to the bin directory of the installation. Provide the dbpath (the place where it stores the data) as well.

mongod

If you look at the output when we start the daemon (mongod = mongo daemon), the last line mentions a REST interface. This can be enabled by passing the --rest flag when starting the instance. The port for viewing the webpage is also shown there.

Let's get into the mix of things by connecting to this mongod using the shell (mongo.exe). Navigate to the mongo bin directory and run mongo.exe.
By default this connects to the test database. The prompt will say something like

C:\Users\Ashutosh\mongodb\bin>mongo.exe
MongoDB shell version: 2.4.1
connecting to: test
> 

At any moment you can go back to mongod and check how many connections are open. Run a couple of commands like
show dbs
show collections

to see if everything is in order.

Create a new database with use yourdatabasename. The shell will switch its context to that database (creating it if it does not already exist).
Next time we will do some stuff by creating some collections and then querying them. This will be a longish series; hold on and let me flood your Twitter timelines.