
Showing posts from 2015

[Post Event] Winter ITCamp event in Cluj-Napoca, Romania, December 9, 2015

Last week we held the last ITCamp event in Cluj-Napoca for this year - the Winter Code Camp. For this event we decided to talk about the new versions of ASP.NET and MVC.
Because it was an afternoon event, we decided to have only two sessions, covering the following subjects:
- What's new in ASP.NET MVC 5, Visual Studio 2015 and Web Tooling
- Setting up a SPA using Aurelia with TypeScript in Visual Studio
More information about the agenda and the original announcement can be found here: http://vunvulearadu.blogspot.ro/2015/11/winter-itcamp-event-in-cluj-napoca.html
We were impressed by the number of participants - there were more than 120 attendees and we had to open another room for the overflow. It was the first time in my life I had this situation and used an overflow room. It was a big WOW for us, and we promise that we will organize another event in the first 3 months of 2016. At the end of this post, I shared with you some pictures from the event.   Thank you for participating at this eve…

Visual Studio Online - Cannot add a new account (401, 403, TF400813)

In this post we will see what we need to do when we want to add a new user to our Visual Studio Online project, but we end up with errors like:
- 401
- 403
- TF400813
Let's start from the assumption that we are an administrator on VS Online. We added a user to our project a few days ago, with the same rights as the other users. Even though we added the user, he is not able to access our project. It is well known that sometimes we need to wait a few hours until rights changes are propagated, but it never takes more than 12-24h. At this moment we need to ask ourselves what is different. There are times when the change propagation freezes and you need to remove and re-add the user. We said that this might be the case, so let's remove the user and add him again to VS Online and to our project. After removing the user, we observed that we could no longer find the given account, even though it was a LIVE account. This was strange and odd. We remembered that a few days ago we had a similar issue…

Visual Studio Online - Administrators without rights: "AAD guest users are not allowed to search in AAD"

Playing with Visual Studio Online is sometimes like playing with fire. It works great 99.99% of the time, but from time to time you have a problem and you cannot easily find support.
In this post we will talk about what you need to do when you want to add a new user and you end up with a 401 or 403 error code.
These days we wanted to add a new user to our Visual Studio Online project. In the portal, when we tried to add the new user, we ended up with the following error:
"AAD guest users are not allowed to search in AAD" This happens when we want to add a new user to a group. Even though we were administrators, we were not able to make any change. This was something new for us - 2 weeks ago everything was fine.
What happened in the meantime? Our company integrated our own AD with Azure AD. To be able to use your administrator rights on Visual Studio Online you need to be added to Azure AD (even if you have a LIVE account) as a Member. By default you will be a Guest.
PowerShell C…

In-Role Cache and Azure Managed Cache Services will be retired next year

A few days ago the Azure team announced that Azure Managed Cache will be retired and In-Role Cache will not be supported anymore. In this post we will take a look at why this is happening and how this decision could affect us.
The official announcement can be found at the following link: https://azure.microsoft.com/en-us/blog/azure-managed-cache-and-in-role-cache-services-to-be-retired-on-11-30-2016/

Azure Managed Cache Service is a service that allows us to use a cache solution as a service. From our application, we would only need a 'connection string' to the cache service. Using this information we can store or retrieve any kind of content from the cache.
In-Role Cache allows us to cache content in the memory of our role (web/worker role). A part of the memory can be configured to be used for caching. The data cached in In-Role Cache is synchronized automatically between multiple instances of our role.
These two cache solutions offered by Microsoft Azure are used very often…

[Post Event] CloudBrew 2015, Belgium

This week I was invited by the Belgian Azure group (Azug.be) to the CloudBrew conference. This was the
second time I was invited as a speaker at this conference. Like the first time, I had a great time, met great people and rediscovered what beer tasting means.
CloudBrew is the kind of conference where you discover a great and powerful Azure community that is very active and up to date with all the trends and new stuff.
At this event I had the opportunity to talk about IoT and how you can create a solution that can manage 1 million messages per second. At the end of this post you can find my slides and a few pictures from the event.
Title: How to manage one million messages per second using Azure
Abstract:
At the beginning of a project it is simple to promise clients different things, but when you need to prove them you might discover that it is impossible. Living in the IoT era, we need to be able to process large amounts of content per second. This is why in this session we will see ho…

Winter ITCamp event in Cluj-Napoca, Romania, December 9, 2015

Registration link: itcamp-iarna-2015.eventbrite.com
In December, ITCamp (the former Codecamp) is organizing a new event for IT professionals in Cluj-Napoca. The event will take place on December 9, at the Thomsons office - the Olimpia Business Center building (98-100 Dorobantilor street).
The theme of this event is "Web Programming - ASP.NET MVC (5)"; two talks are prepared for this event: "What's new in ASP.NET MVC 5, Visual Studio 2015 and Web Tooling" and "Setting up a SPA using Aurelia with TypeScript in Visual Studio".
Participation at the event is free. We thank the sponsors for their support (Thomsons for the venue and SDL for the snacks). Agenda:
18:00-18:30 Attendee Registration (Coffee Time)
18:30-19:30 What's new in ASP.NET MVC 5, Visual Studio 2015 and Web Tooling - Radu Vunvulea. Do you like challenges? Then the next 60 minutes will be a challenge for all of us. We will take a look at the new features of ASP.NET MVC 5, …

[Post Event] ITDays, Cluj-Napoca, November 24-25, 2015

Today I had the great opportunity to be invited as a speaker at ITDays. This conference is at its 3rd edition and it looks like every year ITDays becomes bigger and bigger.
ITDays in figures:

- 3+ international speakers
- 3+ local product launches
- 4+ hands-on labs
- 5+ research projects
- 20+ technical presentations
- 300+ participants

My session was about event messaging systems and how you can manage a system that needs to handle thousands of messages per second. The slides and abstract of the session can be found below.
Title: How to manage one million messages per second using Azure
Abstract: At the beginning of a project it is simple to promise clients different things, but when you need to prove them you might discover that it is impossible. Living in the IoT era, we need to be able to process large amounts of content per second. This is why in this session we will see how we can build a solution around Azure that can very easily handle 1M messages per second. We will start the ses…

[Post Event] ISTA Conference 2015, November 18-19, Sofia Bulgaria

This week I had the great opportunity to be one of the speakers at the ISTA Conference. This was the 5th edition of the conference and it seems that every year it becomes not only bigger and bigger, but the quality of the sessions also gets higher and higher.
This year there were more than 30 speakers from 3 different continents, 2 keynotes, more than 26 sessions, 4 tracks in parallel and a full day of workshops. On top of this, all tickets were sold out. Great job!

From my point of view, it was one of the best organized conferences. Everything went very smoothly, the internet connection worked the whole time, there was always enough water and coffee, and during the lunch break there was enough food for everyone. These small things make a big difference. On top of this, the local speakers presented all their sessions in English, so in the end all speakers were able to join and understand all the sessions.

At this conference I talked about cloud and how we should design a syste…

Transferring content from Azure Storage using a secure channel - Aspera On Demand

Storing data in a cloud provider like Microsoft Azure or AWS is trivial. If you have an application running on a cloud provider, you will start to generate content that is stored there.
Many times this content is private and you need a secure solution to transfer it to different locations around the globe.
One solution for this problem could be Azure CDN, but at this moment only the HTTP protocol is supported. This means that you would need to encrypt the content before sending it on the wire. This might be possible, but if you need to transfer 1 TB of data, the encryption and decryption will take time and consume resources (especially CPU).
A better solution might be Aspera. Using Aspera services, you get a transport platform that offers a secure (encrypted) channel to transfer data from one location to another. All the things that you normally need to take into account - bandwidth, a security layer over HTTP and so on - are handled by Aspera. The communication channel off…

Logging on external storages... lessons learned

Logging and audit are a must-have for all applications. Without this information, the monitoring and support teams would not be able to know what is happening in the system, whether the system works correctly and what happened at a specific point in time.
On top of this, from a security perspective, you need to audit at different levels of your system who is accessing it, what the action is and when it happened.

There are many out-of-the-box solutions on the market that help us do logging and audit in our system. I suppose all of us have used log4net or NLog at least once. There are situations when you need to persist logs in storage that is not on the same machine where your system runs. For example, a common use case is writing all this information to:

- a SQL instance
- Azure Blob Storage
- Azure Event Hub

But did you ask yourself what happens when this storage cannot be reached? This post will cover this case: what if … the storage where I persist logs and audit cannot…

What should I do when on Azure Web Role I get "HTTP Error 401.2 - Unauthorized. You are not authorized to view this page due to invalid authentication headers"

This post is dedicated to a simple but annoying issue that can appear when you are developing an ASP.NET MVC application.
When I run my web application on my local machine, directly from Visual Studio, my web site works perfectly. When I deploy it to a Web Role I get "HTTP Error 401.2 - Unauthorized. You are not authorized to view this page due to invalid authentication headers". The same problem can appear if you deploy to on-premises servers, but in general this problem pops up on Web Roles. Not because a web role is different (it has the same layers as a web server - OS and IIS), but because people working with cloud solutions tend to forget about the infrastructure layers.

If we read the error message we can identify some possible root causes. Basically, it seems that the user trying to access the page is not authorized.
But why does it work on the development machine from Visual Studio? Simple: in general the developer is also the admin on the machine and…

What should I do if I suspect that one of Azure Services is not available or there is a quality degradation?

This post will be short; I will try to answer the following question:
What should I do if I suspect that one of the Azure services is not available or there is a quality degradation?
Checklist:

1. Check Azure Status (https://azure.microsoft.com/en-us/status)

You will find real-time information about interruptions and service degradation worldwide. All issues and problems are reported on this page. In this way you can know if there are (known) issues on Azure.

2. Check the Azure Portal for alerts

The new portal will notify you about different issues that can appear in the system. For example, when you approach a limit of your service (throughput) you will be notified about it (before reaching it). Don't forget that you can redirect these alerts to your email.

3. Check your service instance status on the Azure Portal

At this step you want to be sure that your service status is green and the configuration is good. I had situations when somebody reset the acc…

ISTA Con 2015, November 18-19 - Sofia

Next month I am invited to participate as a speaker at ISTA Con 2015. This conference takes place in Sofia, Bulgaria. It seems that each year the conference gets bigger and bigger. Last year more than 600 people participated in the event.

A big difference between ISTA Con and other conferences is the number of parallel tracks - there are 4 tracks in parallel, and more than 30 speakers from 3 continents will speak.

I'll talk about cloud and how to scale above the cloud's limits. We are living in the era of cloud (Azure, Google, AWS), where scalability may not be a real problem anymore. But for a good cloud solution you need more than 1,000 VMs and an ESB (Enterprise Service Bus) - you need a healthy and reliable architecture.
See you at ISTA Con 2015, which will take place in Sofia (November 18-19, 2015).
My session: Title: How to scale above clouds limits  Abstract: The number of devices that are online increases every day. …

Re-awarded as Microsoft MVP (4th year)

I'm happy that I was re-awarded as a Microsoft Most Valuable Professional (MVP). This is my 4th year in a row as an MVP for Microsoft Azure. I'm excited that I can support the local and online communities with valuable content and information. I'm proud to be part of the Microsoft MVP program.

This year was a special one. I received the news exactly when we were running a load test of our IoT platform, with more than 1900 CPU cores allocated on Azure, trying to handle 100,000 smart devices from the life-science industry (for us a device is not a phone, it is a real laboratory in a hospital). The devices were simulated using AWS infrastructure (more than 20k VMs hitting our platform - 250,000 requests/s).
I will talk more about this in future posts, but for now I will take a short break, recovering after the load tests.

What is the MVP award?
This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We ap…

How you can deactivate session affinity on Azure Web App and ARR

In today's post we will talk about REST APIs and sticky sessions.

Nowadays, REST APIs and web services are very common. We can find a lot of systems and devices worldwide that use the HTTP(S) protocol and REST services to communicate.
It is not very common to send a cookie or session information when you call a REST endpoint. All information is sent in the headers, but not as cookies or session data.

In this situation you may want to disable sending cookies and session information to the clients. In this way you can use a little less resources on the backend, especially when you have a system that needs to manage hundreds of thousands of messages per second.
These two things can be managed and disabled pretty easily.

Going further, let's analyze what happens when you host an endpoint like this on Azure as a Web App. It is important to know that in front of a Web App on Microsoft Azure we have, out of the box, a load balancer when multiple instances are deployed for the s…
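For reference, the common way to switch off the ARR affinity cookie for an Azure Web App is to emit the documented `Arr-Disable-Session-Affinity` response header. A minimal sketch of how this could look in web.config (the fragment is illustrative, not the full configuration file):

```xml
<!-- Sketch: disable the ARR affinity cookie for an Azure Web App by -->
<!-- sending the Arr-Disable-Session-Affinity header on every response. -->
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Arr-Disable-Session-Affinity" value="true" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```

With this header present, the load balancer stops injecting the ARRAffinity cookie and requests are free to land on any instance.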

What to do when you hit the throughput limits of Azure Storage (Blobs)

In this post we will talk about how we can detect when we hit a throughput limit of Azure Storage and what we can do at that moment.

Context
If we take a look at the Scalability Targets of Azure Storage (https://azure.microsoft.com/en-us/documentation/articles/storage-scalability-targets/) we will observe that the limits are pretty high. But, based on our business logic, we can end up at these limits.
If you create a system that is hit by a high number of devices, you can easily reach the total request rate that a Storage Account can handle. This limit on Azure is 20,000 IOPS (entities or messages per second), where (and this is very important) the size of each request is 1KB.
Normally, if you run a load test where 20,000 clients hit different blob storages from the same Azure Storage Account, this limit can be reached.

How can we detect this problem?
From the client, we can detect that this limit was reached based on the HTTP error code returned by the HTTP request.
Th…
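When the limit is reached, the storage service typically answers with a throttling error (HTTP 503, Server Busy), and the usual client-side reaction is to retry with exponential backoff. A minimal sketch of this pattern - the class and helper names are illustrative, and a real client would inspect the storage exception's HTTP status code before retrying:

```csharp
using System;
using System.Threading;

static class ThrottlingRetry
{
    // Illustrative backoff schedule: 1s, 2s, 4s, 8s... between retries.
    public static TimeSpan ComputeBackoff(int attempt)
    {
        return TimeSpan.FromSeconds(Math.Pow(2, attempt));
    }

    // Sketch: retry an operation a few times when the server is busy.
    public static void ExecuteWithRetry(Action operation, int maxAttempts = 4)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception) when (attempt < maxAttempts - 1)
            {
                // In a real client you would check for HTTP 503 / "ServerBusy"
                // on the storage exception before retrying; here we simply
                // back off and try again.
                Thread.Sleep(ComputeBackoff(attempt));
            }
        }
    }
}
```

Backoff gives the storage partition time to recover; simply hammering it again immediately tends to keep the account throttled.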

ListBlobsSegmented - perfect to iterate in large containers - Azure Storage Blobs

In this post we will talk about the 'ListBlobsSegmented' command, which allows us to get blobs from a container.
When is this command useful?
'ListBlobsSegmented' is used when we need to fetch the list of blobs under a container of Azure Storage. This command will not fetch the content of the blobs; only the blob metadata is fetched. Based on this information, if needed, we can trigger the download.

An important thing is related to the number of blobs fetched per call. Each call retrieves at most the metadata of 5,000 blobs. If the container has more than 5,000 items, the response will also contain a BlobContinuationToken.
This token can be used to fetch the next 5,000 blobs from the container. The size of the result cannot be changed; we cannot modify this value.

Example:
BlobResultSegment blobResultSegment = blobContainer.ListBlobsSegmented(new BlobContinuationToken());
while (blobResultSegment.ContinuationTo…
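A complete version of such a pagination loop - a sketch assuming the classic WindowsAzure.Storage client, with `blobContainer` already initialized as a `CloudBlobContainer` - might look like this:

```csharp
// Sketch: enumerate every blob in a container, 5,000 entries per round trip.
// 'blobContainer' is assumed to be an initialized CloudBlobContainer.
BlobContinuationToken continuationToken = null;
do
{
    BlobResultSegment segment = blobContainer.ListBlobsSegmented(continuationToken);
    foreach (IListBlobItem item in segment.Results)
    {
        // Only metadata is available here; download the content on demand.
        Console.WriteLine(item.Uri);
    }
    // The token is null when there are no more blobs to fetch.
    continuationToken = segment.ContinuationToken;
} while (continuationToken != null);
```

Passing a null token on the first call starts the enumeration from the beginning of the container.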

What to do when I receive 502 error code on an Azure endpoint - HTTP Request failed. Error Code: 502.

During some load tests, we started to receive 502 HTTP error codes from Azure Web from time to time. In this post we will talk a little about what the root cause of this error can be and how we can manage it.
HTTP client exception: HTTP Request failed. Error Code: 502. This error does not appear very often, but it can be annoying, especially because its root cause cannot be traced easily.
When you have an Azure Web App (Azure Website) or an Azure Web Role, this error is not returned by your application. In most cases it is returned by the Azure Load Balancer, which plays the role of ARR (Application Request Routing).
When ARR doesn't receive a response from your application within 3 minutes (the default timeout for an Azure Web App), a 502 error is returned. For ARR this means the system is not in good health; it could even be in a Pending state. The 3-minute timeout period is specific to Azure Web Apps (Azure Websites).
Solutions
The first thing that you need to do is to look i…

App Service Plan - Why it is important for Azure Apps

In this post we will talk about App Service Plan that exists for Azure App Service. The main scope of this post is not to cover all the details, but to put on the table the small things that can make a difference.

Do we have a service plan for Web/Worker Roles?
No, the App Service Plan exists only for Azure App Services like Web Apps, API Apps, Logic Apps, Mobile Apps and so on.

Why, when I increase the number of instances of a specific Web App, do the rest of the Web Apps from the same service plan scale automatically too?
All resources are shared between all the applications from the same App Service Plan. This means that when you increase the number of instances, you will see this change on all Apps from the same App Service Plan.

When I use the same App Service Plan, do multiple apps share the same physical resources?
Yes. All Azure Apps under the same App Service Plan are using the same resources. For example if you have 3 Web Apps under the same App Service Plan, all of them…

Why not to use StopWatch when you need to measure the duration of an HTTP request in WebAPI

In this post we will talk about how we can measure how long it takes for an HTTP request to be executed on an ASP.NET MVC application.
All the tests are done using a web site hosted on Microsoft Azure. The instance used for this purpose is Shared - F1.

Let's assume that we have the following requirement:
At the end of each HTTP request, you need to add information related to the request duration to the logs. The first solution that comes to mind is to use "HttpContext.Current.Timestamp" to calculate the duration of a request. In theory we can calculate the difference between "DateTime.Now" and the timestamp from "HttpContext".

protected void Application_EndRequest()
{
    Trace.WriteLine(string.Format(
        "Request duration: {0}",
        (DateTime.Now - HttpContext.Current.Timestamp).TotalMilliseconds));
}
As we can see in the above example, we added this logic in the "Global.asax" file, in the "Application_EndRequest" me…

Task.Unwrap() - A useful proxy to avoid inner Task inside a Task

In this post we will talk about Task and what we should do when we end up with 'Task<Task<Foo>>'.

Let's start with a simple example. Let's assume that we have an async method.

public async Task<int> DoSomethingAsync()
{
    return await GetNumberAsync();
}

We have the 'DoSomethingAsync' method that we need to call inside another task. If we call this method directly we will end up with a Task<int>, but if we call this method in another Task then we will end up with...

Task<int> simpleCall = DoSomethingAsync();
Task<Task<int>> complexCall = new Task<Task<int>>(
    async () => { return await DoSomethingAsync(); });

As we can see, to be able to call an async method in a task we need to add the 'async' keyword to the lambda expression (function). Because of this we will get a Task of Task (Task<Task<...>>) and not a simple Task<...>.

You could say that this is fine…
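To get back a plain Task<int> from such a nested task, the wrapper can be unwrapped with Task.Unwrap(). A small self-contained sketch (the method names are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class UnwrapDemo
{
    static async Task<int> DoSomethingAsync()
    {
        await Task.Delay(10);
        return 42;
    }

    static async Task Main()
    {
        // Starting the async lambda on another task yields Task<Task<int>>...
        Task<Task<int>> wrapped =
            Task.Factory.StartNew(async () => await DoSomethingAsync());

        // ...and Unwrap() returns a proxy Task<int> that completes when the
        // inner task completes, so we can await the result directly.
        Task<int> result = wrapped.Unwrap();
        Console.WriteLine(await result); // prints 42
    }
}
```

Awaiting `wrapped` directly would only give us the inner Task<int>; Unwrap() saves us the double await.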

Azure Service Bus Premium - Better latency and throughput with predictability

A few days ago the public preview of Azure Service Bus Premium was announced. The first two things that people usually check with a new service like this are the price and the added value. In this post we will talk about both.

Added Value
The biggest difference between the standard offer and Premium is the dedicated VMs that are reserved only for the end customer. Each VM used by Azure Service Bus Premium is isolated from the rest of the customers and will be used only by that customer.
This means that the performance of Azure Service Bus will be predictable. This is a very important quality attribute, especially when you need to know exactly how long it takes for a message to arrive at its final destination - when latency and throughput need to be predictable.
When you decide to use the Premium offer, the storage engine used behind the scenes will not be the standard one used by Service Bus, but the new one used by Azure Event Hubs - the so-called Jet Stream.
In this way, we can have dedicated…

Azure Storage - Client Side Encryption

A few days ago, client-side encryption for Azure Storage was announced. In this post we will take a look over this feature.

First of all, you should know that the encryption/decryption takes place on the client side. This means that the content will already be encrypted when it arrives on Azure. This encryption technique is called the Envelope Technique. It is very useful when you want to add another security layer over your data.
Out of the box, there is a client library for .NET (including Windows Phone). Other languages, like Java, are not yet supported, but because the encryption algorithm is a well-known one, you may be able to implement it on other platforms as well.

The encryption algorithm used by the client library is AES (Advanced Encryption Standard). It is important to know that the encryption keys are generated by the client library and are NEVER stored in Azure Storage. The encryption key should be stored in a different location. This library is fully integrated with Key V…

Azure Service Bus - How to extend the lock of a message | RenewLock

In this post we will discuss Azure Service Bus Topics and Queues, with a special focus on the Peek and Lock feature.

Introduction
Azure Service Bus is a messaging system that allows us to send messages between different systems in a reliable and easy way. A lot of concepts from ESB are implemented by Service Bus, allowing us to do magic stuff with messages.
There are two ways to consume messages from Service Bus:
- Peek and Lock - locks a message for a specific time interval and notifies Service Bus when we want to mark the message as processed (removed from Service Bus)
- Receive and Delete - once a message is received from Service Bus, it is also deleted automatically from the messaging system

Peek and Lock
When using Peek and Lock, by default we lock the message for 60 seconds. This means that in this time interval the message is not available/visible to other consumers. Once we process the message we can mark it as processed.
If we don't mark the message as processed or something…
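When processing takes longer than the lock duration, the lock can be extended with RenewLock before it expires. A sketch using the classic Microsoft.ServiceBus.Messaging client - `queueClient` and `ProcessMessage` are illustrative names, not from the original post:

```csharp
// Sketch: Peek-Lock consumption with an explicit lock renewal.
// 'queueClient' is assumed to be an initialized QueueClient in PeekLock mode.
BrokeredMessage message = queueClient.Receive();
try
{
    // Processing will take longer than the lock duration?
    // Renew the lock before it expires to keep ownership of the message.
    message.RenewLock();

    ProcessMessage(message); // hypothetical business logic

    // Mark as processed - the message is removed from the queue.
    message.Complete();
}
catch (Exception)
{
    // Release the lock so another consumer can pick the message up.
    message.Abandon();
}
```

If neither Complete nor Abandon is called, the lock simply expires and the message becomes visible to other consumers again.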