
Azure Redis Cache (Day 1 of 31)

List of all posts from this series: http://vunvulearadu.blogspot.ro/2014/11/azure-blog-post-marathon-is-ready-to.html

Short Description
If you have already used Redis, you should know that Microsoft Azure Redis Cache is based on this open-source cache solution. It is offered as a service, with an SLA of 99.9% uptime. Everything is managed by the Azure infrastructure; the only thing we need to do is use it.
There are two editions of Azure Redis Cache at this moment. The Basic edition consists of only one node and can be used successfully during the development phase or when working on a PoC. There is no SLA on the Basic edition.
The Standard edition consists of two nodes (Master/Slave) and comes with an SLA of 99.9% uptime. On top of this, because it is a multi-node configuration, this edition is ready for high availability.
The maximum size of an instance is 53GB per unit; we will talk more about this in the next sections.

Main Features
It is based on a (key, value) store, which makes it very simple to use. On top of this, each operation is atomic, and after each operation you are sure that the changes were persisted in the cache.
There are multiple data types that can be stored in Azure Redis Cache:

  • String – Maximum size is 512MB (per string)
  • Lists – This concept is pretty interesting and allows us to store strings and control where content is pushed/popped (at the head or at the tail of the list). The list is ordered by insertion time and can be very useful, playing the role of a stack or a queue at the same time.
  • Sets – A set of strings that allows us to add an element multiple times (it will not be stored multiple times, so no check is required before adding it). There is server-side support for operations like union and intersection.
  • Hashes – Maps between string fields and string values, useful for representing data objects
  • Sorted Sets – Similar to Sets, but each element has an associated score that is used to keep the set ordered. Multiple items can have the same score.
  • Bitmaps and HyperLogLogs – string-based types with their own semantics
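
For example, here is a minimal sketch of the Hash type with the StackExchange.Redis client used later in this post ("user:1" and the field names are made-up examples; databaseCache is the IDatabase instance created in the Code Sample section below):

// Store two fields of a hypothetical user object as a Hash
databaseCache.HashSet("user:1", new HashEntry[]
{
    new HashEntry("name", "Radu"),
    new HashEntry("country", "RO")
});
// Read back a single field, without loading the whole object
string userName = databaseCache.HashGet("user:1", "name");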

We have support for transactions. A transaction contains one or more atomic operations. Be aware that if a command in the transaction fails, there is no rollback action: the rest of the commands are still executed and the error is reported. This behavior is acceptable because commands can fail only when there is a syntax error; the Redis data structures are so simple that there are few real reasons for a command to fail.
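
As a small sketch of how a transaction looks with the StackExchange.Redis client ("foo4" is a made-up key; the queued commands are sent to the server as a single atomic block only when Execute is called):

// Queue commands inside a transaction; they run only if the condition holds
ITransaction transaction = databaseCache.CreateTransaction();
transaction.AddCondition(Condition.KeyNotExists("foo4"));
transaction.StringSetAsync("foo4", "4");
// Execute returns false if the condition failed and nothing was applied
bool committed = transaction.Execute();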

Redis Cache has the notion of Publisher/Subscriber. This allows clients to send messages to a channel; each subscriber to that channel will receive the message (a simplified Service Bus).
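
A minimal Publisher/Subscriber sketch ("messages" is a made-up channel name; connection is the ConnectionMultiplexer instance from the Code Sample section below):

// Subscribe to a channel; the handler runs for every published message
ISubscriber subscriber = connection.GetSubscriber();
subscriber.Subscribe("messages", (channel, message) =>
{
    Console.WriteLine("Received: " + message);
});
// Any client can publish to the same channel
subscriber.Publish("messages", "Hello from Redis");
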
There is full support for key expiration, which allows us to specify how long a key should be kept in the cache. Keep in mind that the default behavior is to create keys that don't expire. This means that such a key will remain in the cache forever!
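
An expiration can also be set or inspected after a key was created; a small sketch, assuming the "foo1" key from the Code Sample section below:

// Give an existing key 30 more minutes to live
databaseCache.KeyExpire("foo1", TimeSpan.FromMinutes(30));
// Check how much time the key has left (null means it never expires)
TimeSpan? timeToLive = databaseCache.KeyTimeToLive("foo1");
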
Each Azure Redis Cache access port is configurable. This means that you can specify what port to use and whether the connection is made over the SSL or the non-SSL channel.
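
With the StackExchange.Redis client this choice is made through the connection string; a sketch that targets the SSL endpoint on port 6380 would look like this (the host name is the one from the sample below and the password is elided):

// Connect over the SSL channel on port 6380
ConnectionMultiplexer secureConnection = ConnectionMultiplexer.Connect(
    "vunvulea.redis.cache.windows.net:6380,ssl=true,password=...");
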
There is also full support for LRU eviction, which automatically deletes data when the cache is full. I don't want to go deeper into this subject, but what I would like to say is that all eviction policies from Redis are supported by Azure.
The last feature that I would like to present is the support for server-side operations over cached items. For example, we can increment a value in the cache without having to fetch it first.
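
A small sketch of such a server-side operation ("pageViews" is a made-up key; databaseCache is the IDatabase instance from the Code Sample section below):

// Increment a counter directly on the server, without a get/set round trip
long newValue = databaseCache.StringIncrement("pageViews");
// Increment by an arbitrary amount
long afterTen = databaseCache.StringIncrement("pageViews", 10);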

Limitations
The first limitation that people would mention is the cache size, which can be a maximum of 53GB. Yes, we could see this as a limitation, but it is also a good opportunity to split the cache content across multiple cache units. For example, user data can go into one cache unit, product data into another, and so on.
The second limitation is the maximum number of items that we can store in a Hash, List, Set or Sorted Set: around 4 billion (2^32 - 1). There are use cases where this value can be too small and become a limitation for us. Of course, we can work around this problem on the client side.

Applicable Use Cases
There are many use cases that I have in mind. I will try to give 4 simple use cases where Redis Cache can become our best ally.

Top Product List
A top 100 of the most popular products on a web site. A use case like this can be implemented very easily using Sorted Sets, where we increment the score of a product each time someone buys it. We can use commands like ZRANK and ZRANGE to show only a part of the list, for pagination or other marketing strategies.
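
A minimal sketch of this idea ("topProducts" and the product member are made-up names; databaseCache is the IDatabase instance from the Code Sample section):

// Bump the score of a product each time someone buys it (ZINCRBY)
databaseCache.SortedSetIncrement("topProducts", "product:42", 1);
// Read the top 10 products, highest score first
RedisValue[] top10 = databaseCache.SortedSetRangeByRank("topProducts", 0, 9, Order.Descending);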

Client IPs
Using Sets we can track the IPs of all the clients of our application and associate different information with them. We can even store a blacklist of IPs in Redis.
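
A possible sketch, with made-up key names and IP:

// Remember that we saw this client IP
databaseCache.SetAdd("clientIps", "203.0.113.7");
// Check the blacklist before serving the request
bool isBlacklisted = databaseCache.SetContains("blacklistedIps", "203.0.113.7");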

Tracking System
A Redis List can be used per device to track its GPS positions. You can cap the size of the list and keep pushing new positions; this way you keep the tracking history of each device for a specific period of time, and you can iterate over it and analyze it at any time.
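
A sketch of this capped-list idea (the key name and coordinates are made up):

// Push the newest GPS position at the head of the device list (LPUSH)
databaseCache.ListLeftPush("device:17:positions", "47.1585,27.6014");
// Keep only the newest 1,000 positions (LTRIM)
databaseCache.ListTrim("device:17:positions", 0, 999);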

Database Cache
Use Redis Cache as a caching layer over the database and store information there for a specific period of time.
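
A minimal cache-aside sketch (LoadUserFromDatabase is a hypothetical data-access call; the key name and expiration are examples):

// Try the cache first; on a miss, load from the database and cache the result
string cachedUser = databaseCache.StringGet("user:7");
if (cachedUser == null)
{
    cachedUser = LoadUserFromDatabase(7);
    databaseCache.StringSet("user:7", cachedUser, TimeSpan.FromMinutes(10));
}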

Code Sample
To be able to use Azure Redis Cache from C# you will need the related NuGet package (the sample below uses the StackExchange.Redis client). Don't forget that you can use Azure Redis Cache from any language that has a Redis client library.
First of all, you need to create a connection to Redis Cache; at that moment you specify the connection string to the ConnectionMultiplexer. Once you have a connection instantiated, you can get a database instance and start adding/reading data to/from the cache.
You should remember that when a key doesn't exist in the cache and you request it, the NULL value will be returned.
// Get Connection instance
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("vunvulea.redis.cache.windows.net...");
// Get database
IDatabase databaseCache = connection.GetDatabase();
// Add items
databaseCache.StringSet("foo1", "1");
databaseCache.StringSet("foo2", "2");
// Add item with expiration value
databaseCache.StringSet("foo3", "3", TimeSpan.FromMinutes(20));

// Get item value
string foo1Value = databaseCache.StringGet("foo1");
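
And, as mentioned above, requesting a key that does not exist returns NULL (a small sketch; the key name is made up):

// A missing key comes back as NULL
string missingValue = databaseCache.StringGet("doesNotExist");
if (missingValue == null)
{
    // Cache miss - load the value from the original source
}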

Pros and Cons
If you need a cache solution, I would say that Azure Redis Cache is the best solution available on Microsoft Azure right now. The following list of pros and cons applies to Redis in general, not only to Azure Redis Cache:

CONS:
  • Only a few different data types exist by default, but they are enough for a cache system. You may have some problems if you want to map complex data types; in that case you should use a document-oriented solution like MongoDB.
  • Don't try to store data with inheritance hierarchies in Redis Cache. It will be a nightmare; again, this is not a use case for a (key, value) store, and you should use a solution like MongoDB.

PROS:
  • Very fast, even when you have billions of cached items
  • Good scaling support
  • Rich commands and smart data types like Sorted Sets
  • Persistence to disk
  • String sizes up to 512MB
  • Publisher/Subscriber support


Conclusion
Yes and yes. Azure Redis Cache is the best caching solution offered as a service on the market right now. It has a lot of great features and I'm happy that Microsoft Azure supports it.
In the last 4 years I have had the opportunity to use all the caching solutions offered by Microsoft Azure. Based on this experience, I recommend creating a separate layer for caching and being prepared to replace one cache system with another if the future brings us better solutions. Don't create monolithic systems.
