
Azure Policy - An excellent tool for resource governance inside Azure

The number of Azure Regions keeps growing; just a few days ago two new locations were announced. Everything that is happening around the cloud right now is not only exciting but also a little scary.

Scary from the governance and legal perspective. For example, inside an Azure Subscription a user can create storage accounts in any location around the globe. What happens if you are based in the UK and hold customer information that is not allowed to leave the country?
You might say that you would train the team that has the rights to create new resources to use only Azure Regions based in the UK. This is not enough, because from a governance perspective you have no mechanism that enforces it.

To enforce something like this, Microsoft Azure offers Azure Policy. This service allows us to define a specific list of rules and actions that are applied automatically to resources created under a particular Azure Subscription or Azure Resource Group (or even Management Group).

For the above example, you can use one of the built-in policies that are already defined inside Azure Policy - "Allowed Locations". This policy enables us to restrict the locations where users can create new resources; in our example, users would be allowed to create new resources only in the UK.
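As a sketch, a definition similar to the built-in "Allowed Locations" policy looks roughly like this (simplified from the built-in definition; the parameter name is illustrative):

```json
{
  "properties": {
    "displayName": "Allowed locations",
    "description": "Restricts the locations you can specify when deploying resources.",
    "parameters": {
      "listOfAllowedLocations": {
        "type": "Array",
        "metadata": {
          "description": "The list of locations where resources can be deployed.",
          "strongType": "location"
        }
      }
    },
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": "[parameters('listOfAllowedLocations')]"
        }
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
```

Once this policy is assigned to a subscription with the parameter restricted to UK regions, any request to create a resource outside those regions fails.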

In addition to the location policy, some other predefined policies are beneficial, like:

  • "Not allowed resource types" - This policy restricts users from creating specific resource types (e.g. users cannot create Azure SQL Databases).
  • "Allowed resource types" - Allows users to create only the specific resource types that are defined in the policy.
  • "Enforce tag and its value" & "Apply tag and its default value" - These two policies enable us to force users to specify specific tags on resources and to set particular values under certain conditions.


Another essential feature of policies is the so-called "effect". It represents the action that takes place when a specific policy matches a resource. There are five effects that you can use:

  1. Deny - Fails the request and generates an audit log.
  2. Audit - Accepts the request and creates an audit log.
  3. Append - Adds additional fields to the request (e.g. additional tags).
  4. AuditIfNotExists - Enables auditing when a related resource does not exist.
  5. DeployIfNotExists - Creates a related resource when it does not exist (e.g. each Web App shall have an Application Insights instance created).
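As an illustration of AuditIfNotExists, a policy rule can audit Web Apps that have no Application Insights component with a matching name (a rough sketch, not the exact built-in definition):

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Web/sites"
  },
  "then": {
    "effect": "auditIfNotExists",
    "details": {
      "type": "Microsoft.Insights/components",
      "existenceCondition": {
        "field": "name",
        "equals": "[field('name')]"
      }
    }
  }
}
```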


You can read more about Azure Policy in the official documentation (https://docs.microsoft.com/en-us/azure/azure-policy/). But before starting to use it, keep in mind the following recommendations:

  • Apply policies at the highest level possible. A policy assigned at a high scope is inherited by the levels below it without any issues.
  • Reuse policies as much as possible. Features like parameters allow us to reuse policy definitions across multiple assignments.
  • Don't restrict access from second 1. Do a small audit first, and only after that define policies that deny specific actions.
  • For existing environments, start with Audit and only afterwards Deny specific actions.
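For example, the same "Allowed Locations" definition can be reused for different teams simply by changing the parameter values supplied at assignment time (the values shown are the identifiers of the two UK regions):

```json
{
  "listOfAllowedLocations": {
    "value": [ "uksouth", "ukwest" ]
  }
}
```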
