
Encrypt binary content stored on Azure Storage, out of the box - Azure Storage Service Encryption for Data at Rest

Small things make the real difference between a good product and a great one. In this post we will talk about why Azure Storage Service Encryption for Data at Rest is so important.

Context
When you decide to store confidential data in the cloud or in any external data source, it means there is a trust relationship between you and the storage provider (in our case Microsoft Azure). Many times this is not enough. There are also laws that force you to encrypt the content from the moment it leaves your network until it reaches its destination or is accessed again.
In this context, even if you trust Microsoft or any other cloud provider enough, that trust is not enough, and no paper or certification will stand in front of the law by itself. For industries like banking or healthcare this kind of scenario is common, and migration to the cloud is hard or even impossible in some situations.

Encryption Layers
When we talk about encryption and security of data, there are multiple layers where we need to provide encryption. In general, when we build an application that moves data or accesses it from a secure client environment, we need to provide security at:

  • Transport Layer
  • Storage Layer

If we have a system that transfers data from location A to location B, then we might have:

  • Transport Layer
  • Transit Layer

But in the end, the Storage and Transit Layers are the same thing. In one case we persist content for a long time, while in the other we store data only for a specific time interval.

Transport Layer
At the Transport Layer, Azure Storage supports HTTPS, which automatically secures the transport for us. Unfortunately, we are not allowed to use our own certificates; only Microsoft certificates are allowed. In most cases this is enough and is not a blocker.
Also, for normal consumers, cloud provider certificates are safer than custom client certificates.
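
With the .NET storage client, a minimal way to enforce HTTPS is through the connection string; the account name and key below are placeholders:

// Sketch: enforce HTTPS at the Transport Layer via the connection string.
// Account name and key are placeholders.
using Microsoft.WindowsAzure.Storage;

CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<account-key>");

// Every REST call made through this client now goes over TLS.
var blobClient = account.CreateCloudBlobClient();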

Storage Layer
What we had until now
Until now, we didn't have any mechanism to encrypt content at the storage layer. The only mechanism available was to encrypt content before sending it on the wire. This solution works great when you have enough CPU power or when you don't need to encrypt too much traffic. Otherwise, client-side encryption is expensive and requires the user to manage the encryption keys himself. This is an out-of-the-box feature offered by the Azure SDK: the client libraries allow us to encrypt content based on our own keys, and to secure and control who has access to these keys we can use Azure Key Vault. This mechanism is great, but we are the ones who need to manage everything, and the encryption is done on the client side.
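
As a rough sketch of how client-side encryption looks with the classic .NET storage SDK and Azure Key Vault (the vault URI, blob names and the AAD token callback are placeholders, and the exact API can differ between SDK versions):

// Minimal sketch (not production code): client-side blob encryption with the
// classic .NET storage SDK and a key kept in Azure Key Vault.
// The vault URI, names and the AAD token callback are placeholders.
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ClientSideEncryptionSample
{
    // Placeholder: acquire an AAD access token for Key Vault (e.g. via ADAL).
    static Task<string> GetToken(string authority, string resource, string scope)
        => Task.FromResult("<aad-access-token>");

    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<storage-connection-string>");
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("documents");

        // Resolve the key-encryption key stored in Key Vault.
        var resolver = new KeyVaultKeyResolver(GetToken);
        var key = resolver.ResolveKeyAsync(
            "https://myvault.vault.azure.net/keys/storage-key",
            CancellationToken.None).Result;

        // The encryption policy makes the SDK encrypt the content locally,
        // before it leaves the client and goes on the wire.
        var options = new BlobRequestOptions
        {
            EncryptionPolicy = new BlobEncryptionPolicy(key, null)
        };

        container.GetBlockBlobReference("contract.txt")
                 .UploadText("confidential content", options: options);
    }
}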

What we have starting from now
Starting from now, Microsoft Azure allows us to do this encryption directly at the REST endpoint, using Azure Storage Service Encryption for Data at Rest.
Long name, but the idea is very simple: all the content stored in Azure Storage will be encrypted automatically by Azure before it is written to disk. The moment we want to access the information, the content is decrypted before being sent back to us.

All these activities are transparent to the user; the client doesn't need to do anything special. Once encryption is activated per Azure Storage Account from the Azure Portal, all content written from that moment on will be encrypted.
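
To see how transparent it is: once the feature is on, ordinary blob code stays exactly the same (connection string, container and blob names below are placeholders):

// With Storage Service Encryption enabled on the account, regular read/write code
// does not change at all - encryption and decryption happen on the service side.
// Connection string, container and blob names are placeholders.
using Microsoft.WindowsAzure.Storage;

var account = CloudStorageAccount.Parse("<storage-connection-string>");
var blob = account.CreateCloudBlobClient()
                  .GetContainerReference("documents")
                  .GetBlockBlobReference("report.txt");

blob.UploadText("confidential content");   // stored encrypted by Azure
string content = blob.DownloadText();      // decrypted by Azure before it reaches us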

Encryption Algorithm
The encryption algorithm used by Azure Storage at this moment in time is AES-256 (Advanced Encryption Standard with a key length of 256 bits).
This is a well-known standard, accepted and used by governments and companies around the world. It is included in the ISO/IEC 18033-3 standard and is safe enough to be used in most industries.
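
Just as an illustration of what AES-256 means in code (this uses plain .NET crypto and is not how Azure implements the service internally):

// Illustration only: encrypting a small buffer with a 256-bit AES key in .NET.
// Azure generates and manages its own keys; this simply shows the key size in use.
using System;
using System.Security.Cryptography;
using System.Text;

using (Aes aes = Aes.Create())
{
    aes.KeySize = 256;          // the '256' in AES-256
    aes.GenerateKey();
    aes.GenerateIV();

    using (ICryptoTransform encryptor = aes.CreateEncryptor())
    {
        byte[] plain = Encoding.UTF8.GetBytes("confidential content");
        byte[] cipher = encryptor.TransformFinalBlock(plain, 0, plain.Length);
        Console.WriteLine(Convert.ToBase64String(cipher));
    }
}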

Encryption Key Management
Key management is done fully by Microsoft. Clients are not allowed to bring their own keys.

Content Replication
If you have activated the geo-replication feature, all the content written to the main region and to the geo-replicas will be encrypted.

What we can encrypt
At this moment in time we can encrypt any kind of content stored in Blobs (Block Blobs, Append Blobs and Page Blobs), including VHDs and OS disks.
There is no way to encrypt content stored in Azure Tables, Azure Files or Azure Queues.

What happens for content that already exists on the Azure Storage
You are allowed to activate this feature at any time after you create an Azure Storage Account. Once you activate it, all the content written after that moment will be encrypted. Existing content will remain in 'clear text'.
If you need to encrypt content that was already written to your storage, you need to read it and write it again. Tools like AzCopy can be used with success in such scenarios, as shown in the sketch below.
A similar thing happens when you disable encryption: from that moment all the content will be written in 'plain text', while existing content will remain encrypted until the next write.
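
A rough sketch of forcing existing blobs to be stored again under the account's current encryption setting (connection string and container name are placeholders; for large volumes AzCopy is the more practical tool):

// Sketch: re-write existing blobs so they are persisted again with the account's
// current encryption setting. Assumes the container holds only block blobs;
// connection string and container name are placeholders.
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

var account = CloudStorageAccount.Parse("<storage-connection-string>");
var container = account.CreateCloudBlobClient().GetContainerReference("documents");

foreach (IListBlobItem item in container.ListBlobs(useFlatBlobListing: true))
{
    var blob = (CloudBlockBlob)item;

    using (var buffer = new MemoryStream())
    {
        blob.DownloadToStream(buffer);   // read the existing content
        buffer.Position = 0;
        blob.UploadFromStream(buffer);   // write it back - now stored under the new setting
    }
}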

Azure Storage Account Types
Only the new storage accounts, created with Azure Resource Manager (ARM), support encryption. Azure Storage Accounts created in the classic format (Classic Storage Accounts) don't support it.

Price
There is no additional fee for this service.

How to activate this feature
This feature can be activated from the Azure Portal or using PowerShell.
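
For the PowerShell route, at the time of writing something along these lines should work with the AzureRM module (resource group and account names are placeholders; double-check the parameters against the current Set-AzureRmStorageAccount documentation):

# Sketch: enable Storage Service Encryption for the Blob service of an existing
# ARM storage account. Names are placeholders; verify against your AzureRM version.
Login-AzureRmAccount

Set-AzureRmStorageAccount -ResourceGroupName "my-resource-group" `
                          -Name "mystorageaccount" `
                          -EnableEncryptionService Blob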


Conclusion
This is a great feature that can be used with success to offer end-to-end encryption of data, from the moment data leaves your premises until you get it back, without the extra cost of a custom implementation.
Just by activating this feature and using HTTPS, you get all of this out of the box. Cool!
