
DocumentDB (Day 5 of 31)

List of all posts from this series: http://vunvulearadu.blogspot.ro/2014/11/azure-blog-post-marathon-is-ready-to.html

Short Description 
DocumentDB is a NoSQL document database offered as a service. It is fully managed by Microsoft Azure and is extremely scalable and fast.
It gives us the possibility to store any kind of data. The stored data doesn't need to have a specific format or to respect a predefined model. We can store data with different models and formats in the same collection.

Main Features 
No Schema
Stored data doesn't need to conform to a predefined schema.
Scalable
A database can be spread across multiple machines. In this way we have the ability to scale up the computing power of our database.
Standard Capacity Unit (CU)
Each capacity unit comes with a specific storage capacity and throughput. Using CUs we can scale up or down very easily. In the current version, each CU gives us the following throughput per second:

  • 2000 reads operations
  • 500 insert/update/delete operations
  • 1000 queries
  • 20 stored procedure executions

Users and Permissions
To be able to control access to our database in a granular way, we can define users and different permission rules. For each database account we have the ability to define up to 500,000 users and 2 million permission rules.
JavaScript Object Notation (JSON)
All content stored in DocumentDB is in JSON format. On top of this, all custom actions or code that we want to run on the DocumentDB side are written in JavaScript. We will talk about this a little later.
Data Model
The model is based on documents and collections that are stored in JSON format. Each document is formed from a collection of key/value pairs. The values can be strings, integers, floating point numbers or any other JSON type.
Index
In DocumentDB there is no need to define indexes. The database automatically creates indexes over all the properties of a document.
Collections
A collection is represented by a set of documents that are grouped together. In the preview phase the maximum size of a collection is 10GB, but there is no limit on the number of collections.
A collection can hold documents with different data models. In the example below we have two different documents in the same collection. In this way we can define collections that contain documents with different models. This can be useful when we store data that is similar but has small differences.
[
    {
        "id": "1",
        "name": "Radu Vunvulea",
        "address":
        {
            "country": "Romania",
            "city": "Cluj-Napoca",
            "street": "Plopilor"
        }
    },
    {
        "id": "2",
        "name": "Pop Iliescu",
        "address":
        {
            "country": "Romania",
            "city": "Timisoara",
            "street": "Lopitau"
        }
    }
]
Transactions
There is transaction support at collection level. This means that we can execute a transaction only within a single collection; we cannot define a transaction over multiple collections.
Access
We can access DocumentDB from multiple languages (C#, JavaScript, Python). It is important to remember that the core language is JavaScript and all the services are exposed as a REST API. Having a REST API, we can access the database from any client, even from a browser.
Below you can find the list of HTTP verbs:

  • GET – retrieve resources
  • POST – create new resources (from documents to stored procedures or triggers – we will talk about them shortly)
  • PUT – update (replace a document with a new document)
  • DELETE – remove existing resources

SQL Query
DocumentDB has support for a subset of SQL. We can use it to execute different queries over a collection. The language is very simple and can be used successfully by people who already know SQL.
SELECT {"name": person.name, "country": person.address.country} AS SimplePerson
FROM Persons person
WHERE person.id > 10
The result is returned as a collection of documents or key/value pairs (depending on the query).
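To make the shape of the result concrete, here is a plain JavaScript sketch of a projection and filter like the one above, run over in-memory documents (the documents and the numeric ids are illustrative; DocumentDB evaluates this on the server side):

```javascript
const persons = [
    { id: 11, name: 'Radu Vunvulea', address: { country: 'Romania', city: 'Cluj-Napoca' } },
    { id: 2,  name: 'Pop Iliescu',   address: { country: 'Romania', city: 'Timisoara' } }
];

// WHERE person.id > 10, then project {"name": ..., "country": ...} AS SimplePerson
const result = persons
    .filter(person => person.id > 10)
    .map(person => ({
        SimplePerson: { name: person.name, country: person.address.country }
    }));

console.log(result);
// → [ { SimplePerson: { name: 'Radu Vunvulea', country: 'Romania' } } ]
```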
Stored Procedures
Yes, we have the ability to execute stored procedures on the DocumentDB backend. Because we have CUs, we have direct control over how much throughput we have and we can increase it when needed.
Stored procedures are defined in JavaScript. Once a stored procedure has been declared, a client can make a POST request asking for a specific stored procedure to execute. Each stored procedure runs in an isolated environment.
Stored procedures can be used successfully when we have common logic that we want to execute in a controlled and managed way. We can put part of our domain logic in stored procedures.
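A minimal sketch of what such a stored procedure can look like; the procedure name and document shape are my own illustration, while `getContext()` is the server-side API that DocumentDB provides when the procedure runs on the backend:

```javascript
// Stored procedure body: deployed via POST and executed on the DocumentDB backend.
function createPersonSproc(personDoc) {
    var context = getContext();
    var collection = context.getCollection();
    var response = context.getResponse();

    // createDocument queues the write; the callback fires once it is accepted.
    var accepted = collection.createDocument(
        collection.getSelfLink(),
        personDoc,
        function (err, createdDoc) {
            if (err) throw err;
            // Return the stored document (with server-assigned fields) to the caller.
            response.setBody(createdDoc);
        });

    if (!accepted) {
        throw new Error('The document could not be queued for creation.');
    }
}
```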
Triggers
On top of stored procedures we can define triggers that execute before or after specific commands. For example, we can have a trigger that runs every time an insert is done on a specific collection.
The language used for defining triggers is the same as for stored procedures: JavaScript.
For example, triggers can be used successfully to validate data before adding it. We cannot change the data from a trigger on an insert command, but we can reject the insert from a trigger.
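A sketch of a pre-trigger that rejects invalid inserts; the validation rule is an assumption of mine, and `getContext()`/`getRequest()` are DocumentDB's server-side API:

```javascript
// Pre-trigger: runs on the backend before the insert it is attached to.
function validatePersonTrigger() {
    var context = getContext();
    var request = context.getRequest();
    var personDoc = request.getBody();

    // Reject the insert when the document is missing a mandatory field.
    if (!personDoc.name) {
        throw new Error('A person document must have a name.');
    }
}
```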
User Defined Functions
User defined functions are very similar to stored procedures, but are used to extend the query language. We can define our own functions that can be used and accessed by anybody. In this way we don't need to write the same query over and over again.
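Because a UDF is just a JavaScript function, it is the simplest extension point. A small illustrative example (the function name and the query that uses it are my own assumptions):

```javascript
// User defined function: once registered, it becomes callable from queries
// as udf.fullAddress(...), for example:
//   SELECT udf.fullAddress(person.address) FROM Persons person
function fullAddress(address) {
    return address.street + ', ' + address.city + ', ' + address.country;
}

console.log(fullAddress({ country: 'Romania', city: 'Cluj-Napoca', street: 'Plopilor' }));
// → Plopilor, Cluj-Napoca, Romania
```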
Attachments
You have the ability to 'attach' data to a document.
Consistency
At all times we have 3 copies of our database in the same data center. When we scale up, we don't scale only the CU power; we also replicate our database. This is needed because we don't want our storage to become our bottleneck.
Because storage is replicated, consistency questions appear the moment we add a new document or update an existing one. Below are the 4 consistency levels that DocumentDB offers:

  • Eventual – Highest performance, but a client may read out-of-date data or see writes in a different order than they were executed
  • Session – A client always reads its own writes correctly, but other clients may read its data out of order or see older data. This configuration hits a sweet spot between performance and correctness and can be used successfully in many scenarios
  • Bounded Staleness – Stronger than Session: clients can see old data, but they will see it in the order of execution. Clients can specify how stale the data is allowed to be
  • Strong – Clients only ever see consistent data. But because synchronization is expensive, all reads and writes are slower

Remember that each of them is a tradeoff between data correctness and performance. The beauty of these options is that each client can select the consistency level that its database needs.
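Over the REST API, the consistency level can even be relaxed per request via the `x-ms-consistency-level` header; as far as I know it can only weaken, not strengthen, the account's default level. A sketch of the headers for a read (the API version and token are placeholders):

```javascript
// Request headers for reading a document with per-request consistency.
function readHeaders(authToken, consistencyLevel) {
    return {
        'Authorization': authToken,
        'x-ms-date': new Date().toUTCString().toLowerCase(),
        'x-ms-version': '2014-08-21',
        // One of: Strong, BoundedStaleness, Session, Eventual
        'x-ms-consistency-level': consistencyLevel
    };
}

console.log(readHeaders('<auth-token>', 'Eventual')['x-ms-consistency-level']);
// → Eventual
```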


Limitations 
Transactions cross collections
We cannot have a transaction over multiple collections. In the NoSQL world this is normal, and it would be very expensive to support.
Replication on different data center
This feature is not available at the moment. I see this feature as very important for any storage type (from blobs to SQL and DocumentDB).
Automatic Scale (Elastic)
I would really like to see elastic scaling of CUs based on need.
No versioning support
There are a lot of use cases when versioning over Documents would be very useful.

Applicable Use Cases 
Below you can find 4 use cases for DocumentDB:
Blog Application
DocumentDB can be used successfully when we build a blogging framework. We can define the posts, comments and list of users very easily. We could let each user define his own custom properties and configuration (thanks to the no-schema support). In this case we can successfully set the consistency level to Session.
Multi-player games
A DocumentDB with the consistency level set to Bounded Staleness can be a good choice. Users will retrieve data in timeline order.
e-Commerce – Products List
We can store the product list as document collections. It would be very easy to add custom characteristics to some products, version them and so on.
e-Commerce – User Cart
The user cart can be stored successfully in DocumentDB. Managing it can be very simple and comes with minimal costs.

Code Sample 
using (DocumentClient client = new DocumentClient(new Uri(endpoint), authKey))
{
    // Create (or connect to) a database
    Database database = new Database 
    { 
        Id = "radudb" 
    };
    database = await client.CreateDatabaseAsync(database);
 
    // Create (or get) a document collection
    DocumentCollection collection = new DocumentCollection { Id = "Persons" };
    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);
    
    // Get persons with an id bigger than 10
    var query = client.CreateDocumentQuery(collection.SelfLink,
        "SELECT * FROM Persons person WHERE person.id > 10");
    var persons = query.AsEnumerable();
    foreach (dynamic person in persons)
    {
        Console.WriteLine(string.Format("Person name: {0}", person.name));
    }
    
    // Add a new person
    await client.CreateDocumentAsync(collection.SelfLink, new { id = "3", name = "Stefan Pop" });
}

Pros and Cons 
Pros

  • Easy to Use
  • JavaScript support
  • JSON format
  • Triggers
  • Stored Procedures
  • CU scaling (very smart)
  • Databases that can grow very large
  • User Defined Functions
  • Multiple consistency levels
  • Rich queries over schema-free data
  • Scalable storage and throughput 
  • Rapid development with familiar technologies 
  • Blazingly fast and write optimized database service

Cons 

  • No Elastic Scale support
  • No custom user management
  • No mechanism to fetch store procedures and triggers from a source control


Pricing
When we calculate the price we should take into account the following components:

  • Capacity Units
  • Database Size
  • Storage
  • Outbound traffic


Conclusion
DocumentDB is a very powerful NoSQL solution that can be used successfully in many scenarios. It is very simple to use and scales to TBs of data. With atomic transactions per collection, triggers and stored procedures, it can easily become the best option for any developer who needs a document store.
JavaScript, no schema, JSON and simplicity convince me to use this database on projects.

