(Part 1) Azure Service Fabric - Parent Child communication and cancellation

Part 1 - http://vunvulearadu.blogspot.ro/2016/03/azure-service-fabric-parent-child.html
Part 2 - http://vunvulearadu.blogspot.ro/2016/03/part-2-azure-service-fabric-parent.html

A few weeks ago I saw an interesting question on the MSDN forum that I think is pretty common. In this post I will try to give a possible solution to this problem.
Context:
There are multiple instances of the same Restful Stateful Service that are running in parallel. A new instance is created by a parent service, which may also specify the action that needs to be executed.
Problem:
The parent service needs to be able to cancel an instance of our Restful Stateful Service based on external factors or on the current state of the child.


What we need
Basically we need to:

  • Store and map all the service instances that are created by a service
  • Store their state in a specific location
  • Share their state with the parent service
  • Give the parent the ability to cancel a child service
Sharing the state
To be able to share state between different instances of the same service we can use Reliable Collections. In our case a Reliable Dictionary might be a good solution: we can store the state of each service instance as a key-value pair.
Each instance has the ability to update its own state directly. Concurrency problems, like temporary inconsistency, are resolved out of the box by the Reliable Dictionary.
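As a rough illustration, here is a minimal sketch of how a service instance could write its state into a Reliable Dictionary. The dictionary name "ChildStateMapping", the string key and the string state value are assumptions made for this example, not something prescribed by the solution itself.

using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;

// Assumed to live inside the stateful service class, so it has access to an
// IReliableStateManager. Names and types here are illustrative only.
private async Task ReportStateAsync(
    IReliableStateManager stateManager, string childKey, string state)
{
    var mapping = await stateManager
        .GetOrAddAsync<IReliableDictionary<string, string>>("ChildStateMapping");

    using (ITransaction tx = stateManager.CreateTransaction())
    {
        // Create the entry on first write, overwrite the previous state otherwise.
        await mapping.AddOrUpdateAsync(tx, childKey, state, (key, oldValue) => state);
        await tx.CommitAsync();
    }
}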

Cancellation
When we create an instance of a Restful Stateful Service we can use the RunAsync method. This method gives us access to a cancellation token. This token can be used by the child service to see if cancellation was requested.
protected override async Task RunAsync(
      CancellationToken cancellationToken)
{
    ...
}
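As a small sketch of what could sit behind the elided body above, the child can honor the token like this (the work done in each iteration is only a placeholder):

while (true)
{
    // Stop cooperatively when the parent requests cancellation.
    cancellationToken.ThrowIfCancellationRequested();

    // Placeholder for one unit of the child's work.
    await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
}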

Unique identification of each child service
In our Reliable Dictionary we need to identify each service instance uniquely. We could generate a unique ID for each service instance. It might work, but we would need to send this ID to each service instance at the moment when we call the RunAsync method.
Another possible solution is to use the CancellationToken that we already have as the key. Because the cancellation token instance is known by both the parent and the child, we can easily use it as the key.


Using this approach we can have a mechanism that allows us to have simple and cheap communication between our services. The main flow would look like this (a simplified parent-side sketch follows the list):

  1. [Parent] Create a CancellationToken
  2. [Parent] Add the CancellationToken to our Mapping State dictionary 
  3. [Parent] Start a new instance of the stateful service and pass it the CancellationToken as a parameter
  4. [Service Instance] Do its logic
  5. [Service Instance] Add its state information to the Mapping State dictionary, under the item keyed by the same CancellationToken
  6. [Parent] Detect that a condition is TRUE
  7. [Parent] Get the CancellationToken of the service instance that needs to be canceled
  8. [Parent] Send the cancellation request to the service instance through that CancellationToken
  9. [Service Instance] Detect the cancellation request and stop
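To make steps 1-2 and 6-8 more concrete, here is a minimal, simplified sketch of the parent-side bookkeeping. It keeps the token-to-source mapping in memory and leaves out the Reliable Dictionary persistence and the actual start-up of the child service; the class and member names are assumptions made for this example.

using System.Collections.Concurrent;
using System.Threading;

// Simplified parent-side bookkeeping: maps each child's CancellationToken
// to the CancellationTokenSource that can cancel it.
public class ChildCancellationRegistry
{
    private readonly ConcurrentDictionary<CancellationToken, CancellationTokenSource> children =
        new ConcurrentDictionary<CancellationToken, CancellationTokenSource>();

    // Steps 1-2: create a token for a new child and register it.
    public CancellationToken RegisterNewChild()
    {
        var cts = new CancellationTokenSource();
        children[cts.Token] = cts;
        return cts.Token;
    }

    // Steps 6-8: when the parent decides a child must stop, signal its token.
    public bool CancelChild(CancellationToken childToken)
    {
        CancellationTokenSource cts;
        if (children.TryRemove(childToken, out cts))
        {
            cts.Cancel();
            return true;
        }

        return false;
    }
}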


This solution can be very useful when we need to migrate heavy services from a monolithic architecture to a solution hosted on a micro-service system.
The only limitation of this solution is related to the service type. We cannot use multiple types of Reliable Services for this, because a Reliable Collection can be shared and accessed only by the same service type.

Tomorrow we will see another solution for this problem, one that does not require using a Reliable Collection and the same Reliable Service type.

