
Why back-off mechanisms are critical in cloud solutions

What is a back-off mechanism?
It is one of the most basic mechanisms used for communication between two systems. The core idea is to decrease the frequency of requests sent from system A to system B when there is no data or when communication issues are detected.
There are multiple implementations of this mechanism, but I'm sure you have used it, directly or indirectly. A retry mechanism that increases the time period between attempts is a back-off mechanism.
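The idea can be sketched in a few lines. Below is a minimal illustration in Python; the function names, delays, and growth factor are my own choices, not from any particular library. The delay grows after every empty poll and resets as soon as data arrives.

```python
import time

def next_delay(current, factor=2.0, max_delay=30.0):
    """Grow the waiting period, but never past the cap."""
    return min(current * factor, max_delay)

def poll_with_backoff(fetch, base=0.5, factor=2.0, max_delay=30.0):
    """Call `fetch()` repeatedly; when it returns None (no data),
    sleep and increase the delay; when data arrives, reset the delay."""
    delay = base
    while True:
        item = fetch()
        if item is not None:
            delay = base          # data found: go back to fast polling
            yield item
        else:
            time.sleep(delay)     # no data: wait before asking again
            delay = next_delay(delay, factor, max_delay)
```

The reset on success matters: back-off should slow you down only while the other side is quiet, not penalize you afterwards.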

Why is it important in cloud solutions?
In contrast with a classical system, a cloud subscription arrives at the end of the month with a detailed bill that contains all the costs.
People often discover too late that there are many services where you also pay for each request (transaction) made against that service. For example, each request to Azure Storage or Azure Service Bus is billable. Of course, the price is extremely low - Azure Service Bus costs around 4 cents per 1,000,000 requests - but when a system is not written the right way, you end up with additional costs.
I remember that a few years ago we didn't have a back-off mechanism for an Azure Service Bus Queue, and even when there was no data we checked every 10 ms. Guess what happened? At the end of the month, around 30% of our costs came only from this kind of request. Once we implemented the right back-off mechanism, we reduced the transaction cost to under $0.50.
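To put rough numbers on that story (the 10 ms interval is from the scenario above; the per-million price is the approximate figure mentioned earlier, so treat both as illustrative, not a quote from the price list):

```python
# Rough illustration of how aggressive polling inflates the bill.
# Assumed figures: one consumer polling every 10 ms, and roughly
# $0.04 per 1,000,000 Service Bus requests.
polls_per_second = 1 / 0.010          # 10 ms interval -> 100 requests/s
seconds_per_month = 60 * 60 * 24 * 30
requests_per_month = polls_per_second * seconds_per_month
cost = requests_per_month / 1_000_000 * 0.04

print(int(requests_per_month))  # 259200000 requests from one idle consumer
print(round(cost, 2))           # 10.37 dollars/month, just for empty polls
```

Multiply that by the number of queues and listeners in a real system and the "30% of costs" figure stops being surprising.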

Where is it applicable?
Don't forget that this topic applies not only when a cloud service is not reachable, but also when there is no data and you make requests too often.

How should we implement it?
If you are a developer, you might jump straight to the whiteboard and start designing an algorithm that increases the time interval at a specific rate when there is no data or there are connection problems.
Before doing something like this, check what kind of protocol and what libraries you are using.
Solved by Protocol
Nowadays, many communication protocols keep an open connection between the two parties. This means you never specify a time interval; the moment there is data on the other end, your system is notified.
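As a toy illustration of the difference (a local queue.Queue standing in for such an open connection - no real network protocol involved), a blocking receive simply waits until the other side delivers, with no polling interval to tune:

```python
import queue
import threading
import time

channel = queue.Queue()   # stands in for an open connection to the broker
received = []

def consumer():
    # Blocking receive: the thread sleeps inside get() until a message
    # is delivered -- no polling, no wasted requests.
    received.append(channel.get())

t = threading.Thread(target=consumer)
t.start()
time.sleep(0.1)           # the producer is silent for a while...
channel.put("order-42")   # ...then pushes; the consumer wakes immediately
t.join()
```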
Solved by client libraries
All client libraries offered by Microsoft for Azure contain a retry policy mechanism that can be used and extended with success. I've seen people using a back-off mechanism without knowing it - the client library was already using one with default values (smile).
I think that in most cases, the existing mechanisms are enough to solve our core problems.
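Under the hood, such a retry policy looks roughly like the following sketch (the function name, exception type, and defaults are my own illustration, not the actual SDK API):

```python
import time

def with_retry(op, max_attempts=5, base=0.2, factor=2.0, max_delay=5.0):
    """Run `op`; on connection errors, retry with an exponentially
    growing delay that is capped at `max_delay`."""
    delay = base
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except ConnectionError:
            if attempt == max_attempts:
                raise             # out of attempts: surface the error
            time.sleep(delay)
            delay = min(delay * factor, max_delay)
```

Note the `max_delay` cap: even as the delay grows, the wait stays bounded, and that bound is the knob you tune from your requirements.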

Even if we keep increasing the time interval, there should be a maximum threshold, which we can set based on our business needs (NFRs).

Should I ignore it?
No, you should never ignore it. But don't add extra complexity if you don't need it. Start with a simple mechanism and, based on your needs, develop a more complex one.
If you need a custom back-off mechanism, ask yourself why you are different from the rest of the consumers. You don't want to invest in something you will not need or use at full capacity. It is just extra effort.

References
An extremely useful resource for Azure clients is "Retry service specific guidance" - https://docs.microsoft.com/en-us/azure/best-practices-retry-service-specific

 

Comments

  1. Indeed, reducing costs might be an advantage; however, back-off algorithms are usually used to avoid congestion and contention in a system. Indeed, most of us use them without knowing it, since most of our servers use the Ethernet protocol :)


