
Azure Development Environment - Event Hub and Throughput Units

Context
Very often, the cloud services that are cheap are the ones that end up being the most expensive at the end of the month. Not because there are hidden costs, but because people tend to use them without caring about cost - they say the service is cheap and the price is low.
Reviewing the last bill of the development subscription, I noticed that 25% of the costs are generated by Service Bus - more exactly, by Event Hub.

Event Hub
Why is this happening? Each developer uses one or two instances of Event Hub to develop, test and validate different flows. Our scripts that generate instances of Event Hub create a new namespace for each new instance of Event Hub.
In this context, for a team of 7 developers, you end up with 14 different Event Hubs and Service Bus namespaces. On top of this, you have different environments, like DEV and TEST, that add another 3 or 4 instances of Event Hub.
In the end you can easily reach 18 different Event Hubs, each with its own namespace and Throughput Unit.
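Just to illustrate the pattern (this is a minimal sketch, not our actual scripts), a per-developer provisioning step like this could be written with the Python azure-mgmt-eventhub SDK; the resource names and settings below are made up:

```python
# Hypothetical sketch: every Event Hub gets its own namespace, and each namespace
# comes with at least 1 Throughput Unit that is billed per hour.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient

SUBSCRIPTION_ID = "<dev-subscription-id>"
RESOURCE_GROUP = "rg-dev-eventhub"   # assumed resource group name
LOCATION = "westeurope"

client = EventHubManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def create_dev_event_hub(developer: str) -> None:
    namespace = f"ns-{developer}-dev"   # one namespace per Event Hub instance
    client.namespaces.begin_create_or_update(
        RESOURCE_GROUP,
        namespace,
        {
            "location": LOCATION,
            "sku": {"name": "Standard", "tier": "Standard", "capacity": 1},  # 1 TU
        },
    ).result()
    client.event_hubs.create_or_update(
        RESOURCE_GROUP, namespace, f"eh-{developer}",
        {"partition_count": 2, "message_retention_in_days": 1},
    )

# 7 developers x 1-2 instances, plus DEV and TEST environments, adds up quickly.
for dev in ["dev01", "dev02"]:
    create_dev_event_hub(dev)
```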

The cost of a Throughput Unit (TU) per month is around 22$ for the Standard tier.
18TU x 22$ = 396$/mo
Nice, almost 400$ just for TUs.
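The math behind the number is simple (the 22$/mo figure is approximate - check the current Azure pricing page):

```python
# Rough monthly cost of the Throughput Units in the development subscription.
TU_PRICE_PER_MONTH = 22   # USD, Standard tier, approximate
event_hub_instances = 18  # one namespace with 1 TU per Event Hub instance

total = event_hub_instances * TU_PRICE_PER_MONTH
print(f"{event_hub_instances} TU x {TU_PRICE_PER_MONTH}$ = {total}$/mo")  # 396$/mo
```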

Keep in mind
Before describing each solution that could help us optimize the cost, we need to remember that whatever we implement should have a low impact on the development team. This means that you don't want to make the DEV team's life a hell just because you decided to reduce some costs.
The time the DEV team loses by sharing the same Event Hub, or by using another solution that optimizes costs, might cost you more than the TU cost itself.


Possible Solutions
Share
It might be possible for the whole team to share a single Event Hub, with each DEV having their own Consumer Group. This solution is not only complex, but it is also not a 1-to-1 match with the real environment and can be buggy.
I would avoid a solution like this as much as possible.
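For reference only, a minimal sketch of the idea - one shared namespace and Event Hub with a Consumer Group per developer - could look like the snippet below. The names are assumptions, and keep in mind the Standard tier allows at most 20 Consumer Groups per Event Hub:

```python
# Hypothetical sketch: a single shared namespace (a single TU to pay for) with one
# Consumer Group per developer, so readers do not steal each other's events.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient

client = EventHubManagementClient(DefaultAzureCredential(), "<dev-subscription-id>")

RESOURCE_GROUP = "rg-dev-eventhub"
NAMESPACE = "ns-shared-dev"   # assumed to already exist
EVENT_HUB = "eh-shared-dev"   # assumed to already exist

for developer in ["dev01", "dev02", "dev03"]:
    client.consumer_groups.create_or_update(
        RESOURCE_GROUP, NAMESPACE, EVENT_HUB, f"cg-{developer}", {}
    )
```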
Use Basic Tier
The cost difference between the Standard and Basic tiers is 50% per TU (Basic is 11$/mo, Standard is 22$/mo). The problem with the Basic tier is the functionality that is missing, like Publisher Policies, and the limited number of Consumer Groups.
The one that might create the biggest problem is the limited number of Consumer Groups. On the Basic tier you can have only one, the default one - $Default. This default Consumer Group cannot be renamed or deleted, so if you need a Consumer Group with a custom name you will not be able to use the Basic tier.
Because we are using custom names for Consumer Groups, we cannot use this solution without changing the deployment and creation scripts.
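If we could live with $Default only, the change in the creation script would be small - basically just the SKU. Again, a hedged sketch with made-up names, not our real script:

```python
# Hypothetical sketch: the same namespace creation call, but with the Basic SKU.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient

client = EventHubManagementClient(DefaultAzureCredential(), "<dev-subscription-id>")

client.namespaces.begin_create_or_update(
    "rg-dev-eventhub",
    "ns-dev01-basic",
    {
        "location": "westeurope",
        "sku": {"name": "Basic", "tier": "Basic", "capacity": 1},  # ~11$/mo per TU
    },
).result()

# On Basic, only the $Default Consumer Group exists; creating a named one like the
# call below is not supported, so our scripts would have to drop it.
# client.consumer_groups.create_or_update("rg-dev-eventhub", "ns-dev01-basic",
#                                          "eh-dev01", "cg-dev01", {})
```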
Clean the Event Hub every day
This solution would involve deleting all the DEV instances of Event Hub at the end of each day and recreating them in the morning. The price of a TU is per hour, so you might be able to reduce the cost by a maximum of 50%.
From my past experience I would say that you will only be able to reduce the cost by about 33%. Why? Because some people might come in at 8AM and others might leave at 8PM, so you will need the resources up and running for around 16 hours a day.
From the DEV perspective, you will need a script to clean and recreate the resources and also to push the access configuration to their environments. This might become a hell, creating a DEV environment that is too complex and buggy, costs you more, and will not be loved by the DEV team. A sketch of the evening part is shown below.
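For completeness, this is roughly what the evening cleanup could look like if it ran from a scheduled job; the namespace names, the schedule and the 16-hour estimate are assumptions:

```python
# Hypothetical sketch: delete all DEV namespaces in the evening (a morning job
# would recreate them and push the new connection strings to the developers).
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient

client = EventHubManagementClient(DefaultAzureCredential(), "<dev-subscription-id>")
RESOURCE_GROUP = "rg-dev-eventhub"
DEV_NAMESPACES = [f"ns-dev{n:02}-dev" for n in range(1, 19)]  # the 18 instances

def evening_cleanup() -> None:
    # Deleting the namespace removes the Event Hub and stops the TU billing.
    for namespace in DEV_NAMESPACES:
        client.namespaces.begin_delete(RESOURCE_GROUP, namespace).result()

# TUs are billed per hour, so with resources up ~16 of 24 hours you save ~33%.
hours_up, hours_in_day = 16, 24
print(f"Estimated saving: {1 - hours_up / hours_in_day:.0%}")
```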
I wouldn't try to implement such a solution.

Conclusion
Even if we identified different solutions that might help us reduce the bill, we need to decide whether optimizing the cost is worth it or not.
Personally, I wouldn't make any change related to this. I would prefer to pay more on the Azure bill but have a simple and clear DEV environment that is easy to use.


