Showing posts from 2017

How to get your Azure Subscription Quotas and current Usage

In this post, we will talk about a simple problem inside Microsoft Azure:
How can I see the quotas for the Azure Subscription that I am using?

Context
When you start to use your Azure Subscription for more than just playing around, you realize after a few weeks that you do not know what your current quota limits are. In addition to this, it is not easy to count the number of instances of each resource type.
For example, if you use multiple Azure Storage accounts in different Resource Groups, how easily can you count the number of Storage accounts that you are using?

To make the problem a little more complex, you should know that some of these quotas apply per Azure Region. For example, you have a default limit of 50 VNETs per Azure Region, which means it would be pretty hard to calculate the total number of VNETs that you are using in each Azure Region. It is not impossible, but you would need to do some additional work.
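As a rough sketch of that extra work, the same numbers can be pulled per region from the command line with Azure CLI 2.0 (assuming you are already logged in with az login; the region name is only an example):

# Compute quotas and current usage (cores, VMs, availability sets) for one region
az vm list-usage --location westeurope --output table

# Networking quotas and current usage (VNETs, NSGs, public IPs) for the same region
az network list-usages --location westeurope --output table

# Count the storage accounts across the whole subscription, regardless of Resource Group
az storage account list --query "length(@)"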

Usage + Quotas
Inside the Azure Portal, we have a dedicated tile that provides this informat…

[Post Event] Codecamp Timisoara, October 14, 2017

On October 14 I attended Codecamp Timisoara. It was a great experience, where I met great people from Timisoara and other cities around Romania.
At this conference I delivered a session about enterprise security and how to mitigate different security concerns when you want to migrate to Microsoft Azure. Below you can find more information related to it.


Title: Enterprise security in Practice
Abstract: What does an enterprise look like when you talk about security and cloud? Complicated, rigid and reluctant to accept cloud-based architectures. After working closely with security teams from different companies, I identified approaches and requirements that are standard for enterprises. In this session I want to discuss and share with you lessons learned on how we can map security requirements to Azure.
Slides:

Enterprise security in Practice from Radu Vunvulea

Migrating File Server (Share) to Azure

Let's take a look at the current solutions available on Azure when we want to migrate our local file storage from on-premises to the cloud.

Context
In the current system, let us imagine that we have a Windows File Server that is used to share files inside the company. The File Server is fully integrated with our Active Directory server and, based on roles, we allow or restrict access to different folders and files.


What do we want to do?
We want to migrate this solution to Azure in such a way that we no longer need to manage the File Server machines, while still being able to control file sharing permissions using user roles (Active Directory integration).
In addition to this, we want to be able to attach the shared content on the client machine as a shared folder or partition.

Azure Files
An extremely powerful solution provided by Microsoft, which allows us to store our files in Azure and share them with others. The SMB protocol is fully supported, meaning that we can attach the share on our client machin…
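As a quick illustration, this is roughly how such a share can be attached from the command line. A minimal sketch only; the storage account name (mystorageacct), share name (myshare) and the key placeholder are invented for the example:

# Windows: map the Azure Files share as drive Z: over SMB
net use Z: \\mystorageacct.file.core.windows.net\myshare <storage-account-key> /user:AZURE\mystorageacct

# Linux: mount the same share over SMB 3.0 (requires cifs-utils)
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare -o vers=3.0,username=mystorageacct,password=<storage-account-key>,dir_mode=0777,file_mode=0777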

The real difference between an Azure VM with or without SSD

I want to talk about the real difference between an Azure VM with and without SSD. This is not a post with charts and artificial benchmarks; it is just a real story from the field.
Context
One of my colleagues at work came to me complaining about a performance issue related to SQL Server. They had a SQL Server instance running on an Azure VM with Linux. The database storage was not too complex and the database size was acceptable.
Problem
Every few hours a job has to be executed on the database. It involves a lot of data processing and usually takes around one hour. From a duration perspective this is not acceptable; there is a clear NFR that requires the task to be executed in under 30 minutes.
An audit was done on the VM and the database, and it was pretty clear that there was a problem with read and write operations. A lot of activity was happening at that level, pushing memory and storage usage to high levels.
Actions
The DB specialists reviewed the database structure and the job. Unfortunately, the…

Using SQL security features to isolate sensitive data inside a PoC

When writing a PoC you need to keep it as simple as possible and prove that, from a technology perspective, the project vision is feasible and is the right one. One of the fundamental rules of a PoC is that it needs to cover things that are not a general truth (e.g. you don't want to prove that ASP.NET MVC can render HTML or expose a REST API).
Keeping a PoC as simple as possible can become a problem when you want to use real customer data, not only mock data. When you have sensitive customer information, which should not be visible even to the development team, you might end up in a strange situation.
The problem is not how you can do this. The biggest problem is the effort that you need to invest to create the deployment scripts or the automation mechanism that would allow the customer to deploy the solution in an isolated environment, where the development team would not have access. This might require extra development effort that you don't want to include in a PoC.
It is cl…

Microsoft Ignite at a glance (Orlando, FL, 2017)

What a week! The last week of September was crazy for all the people working with the Microsoft stack. The biggest Microsoft conference took place in Orlando, Florida. More than 25,000 people attended Microsoft Ignite this year and, as usual, it was an event with all the tickets sold out.


There were many announcements that make Microsoft a strong player for today's needs, and there is also a clear vision of where they want to go. Not only that, but it seems that the road ahead is already defined and clear.
The current needs of the market are covered by Microsoft with Azure Stack, offering good ground for hybrid solutions. We can now use the core services of Microsoft Azure not only in Azure, but also on our on-premises infrastructure using Azure Stack. What is even more interesting from a DevOps and IT perspective is that you have the same experience, the same dashboard and the same scripts (no change is required).
Mixed reality and AI are now closer to the field. Many companies…

Why you should never put a junior in a team that is stressed and bad-mannered

It is nothing abnormal for a team to be stressed. During those times they might not have time for anything else. The focus of the team is to deliver the business requirements in the IT solution. Unfortunately, these periods can be longer than expected, even if we know that this is not good.
I have seen teams spend more than 18 months in a phase like this. After a while in this state, they don't even realize that they are in it, or what the impact is at project and people level.
In this post, I will try to focus on the impact that such a phase can have on juniors and mid-levels in unhealthy teams.

Why?
When you are a junior you are at the moment in your career when you want to learn. You know that you have things to learn; you are usually a fresh graduate with good theoretical knowledge and you want to put it into practice.
I like to compare smart juniors with birds that have big and powerful wings, but that don't know how to fly very well yet. They can reach the sky and accomplish many thi…

Azure Blob Storage - More storage and throughput

One of the core services of Microsoft Azure is Azure Storage, which is used to store binary content, key-value pairs (Azure Tables) or message queues (Azure Queues). In today's post, we will discover how a small change in Azure Storage capabilities changes our life and simplifies our IT solutions.

Current solutions
The maximum capacity of an Azure Blob Storage account used to be 500TB. Even if this might sound like a lot, there are multiple cases where you have to overcome this limit. If you have a system where devices and users are uploading content, you can easily reach 2-5TB per day, which would force you to start a new Azure Storage account every 3 months.
To overcome this limitation, your solution needs to be able to manage Azure Storage accounts automatically. Besides being able to clean and archive content automatically, you will need a system that can create a storage account on the fly and redirect traffic to it. When you use multiple Storage Accounts, you are force…
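As a rough idea of what "creating a storage account on the fly" looks like, here is a minimal Azure CLI 2.0 sketch; the account name, resource group and region are placeholders invented for the example:

# Create a new storage account when the current one approaches its capacity limit
az storage account create --name mediastore0042 --resource-group media-rg --location westeurope --sku Standard_LRS --kind StorageV2

# Read one of its access keys so the application can start redirecting uploads to it
az storage account keys list --account-name mediastore0042 --resource-group media-rg --query "[0].value" --output tsv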

Less than 1 week until Microsoft Ignite 2017

The first time I took part in Microsoft TechEd was in 2012 in Amsterdam. It was one of my first conferences with more than 5k attendees. It was a wow, from every perspective. Since then I have participated in every TechEd and Microsoft Ignite.
At Microsoft Ignite, attendees have the opportunity not only to learn and discover new stuff, but also to meet people from all around the globe. It is that week of the year when you can meet face to face, in one place, Program Managers from Microsoft together with people from Japan, Australia, the UK and the USA that you otherwise only talk to over Twitter.

This year things will be a little different. It will be the first time I participate at Microsoft Ignite not only as an attendee, but also as a speaker. It is a joy to be invited to speak at a conference with more than 23,000 attendees. If this is not enough, I will have 3 sessions where I will share my knowledge and experience related to IoT, security and NoSQL. If you want to find out more about these subjects, feel free to join my sessions …

Is security and data privacy important on tracker devices like Fitbit?

A few days ago, I read about how insecure Fitbit devices are. There was a lot of noise created around it, explaining different ways you can hack a Fitbit device to gain access to personal data. My first reaction when I saw the title of the article was "So what!?", and let me explain why I don't see this as life-threatening or as something that will stop me from using my Fitbit.

Personal data
It is true that a tracker contains personal data, but let us be realistic and look at what data it holds. Most trackers contain information related to your past activity, heart rate, number of steps and, in some cases, GPS data.

Except for the GPS information, the rest of the data is not that sensitive. What do you think a hacker can do if he knows that you did 10k steps this morning? Yes, he might learn your habits and break into your house while you are jogging or walking the dog. This scenario can be real, but the truth is that there are so many other ways to find out what your habits are that you would be impress…

The scope of a PoC

Let us talk about what the scope of a PoC should be and what you should or should not have in a PoC.

Purpose of PoC
First, we need to define what the purpose of a PoC is. The main purpose is to demonstrate the principles that are covered in the technical documents (to show that it is not just theory and diagrams).

Reusability
It is already a déjà vu for me to hear people say that they want to reuse the PoC output in the main project. This happens because many times the PoC scope is too big and does not cover only the ideas that need to be demonstrated.
When a PoC covers more than 15% of the implementation effort, you might have a problem. That is not a PoC anymore; it is a PILOT, which represents a system with limited functionality that goes into production. The pilot might have many restrictions, from NFRs to the business use cases that are covered, but some part of it works.
You never want to invest more in a PoC than is necessary, and you should always push the ou…

Containerization without a microservices approach

The current trends are clear: we should develop software applications using only a microservices approach. This sounds good for new applications, where the system requirements guide us towards microservices.
But what happens with the other types of systems? We might need to develop a normal web application, with some backend processing behind it. No crazy NFRs, no need to scale to 100,000 RPS or similar.

Monolithic application
As an example, let us imagine that we need to develop a web application that resizes our pictures to Instagram size (1:1). There are no special requirements related to availability or scalability, and the load on the system is low. The system is used only by our employees (fewer than 5,000) for company images that need to be published on commercial web sites.
Of course, we can imagine a state-of-the-art microservices implementation, with different services that scale by themselves. But what if we do not need something like this, even though it is very appealing to us…

List of IPs used by each Azure Resource (service)

It is not uncommon to configure the firewall and other security and control mechanisms, like User Defined Routes (UDR) and NSGs (Network Security Groups), to restrict access to your Azure Resources. The moment we want to do such a thing, we need to know the IPs that are used by the Azure infrastructure.

Let's take as an example a web application that is hosted inside App Service (using VNETs, Traffic Manager, Azure Storage, Azure SQL and many more). To be able to properly configure the access rules, we need to know the IPs used by Azure Storage and Azure SQL in that region, the Traffic Manager IPs used for probing and so on.

Azure Region IP Range
Most of this information can be found in an XML file provided by Microsoft (https://www.microsoft.com/en-us/download/details.aspx?id=41653), but I expect that this will not be enough. Inside the document you'll find the IP ranges that are used by each Azure Region, but without a tag that specifies which IP ranges are used by each Azure Resource it is to…

Is RDP connection open by default for VMs inside Azure?

I saw a discussion on Twitter related to Azure VMs and RDP connections that are open by default. The main purpose of this post is to present different use cases where the RDP connection is (or is not) available by default.
Use Case 1: Single VM (VM with Public IP inside a default VNET) – RDP active by default for public access
In this context, we have a VM that is created from the Azure Portal (or a script) as a single entity. It is not part of any scale set or other type of custom configuration. It is just a simple Windows Server 2016 Datacenter machine, which is part of a default VNET with a Public IP allocated to it. In this case, RDP will be configured by default. The default Network Security Group (NSG) that is created together with our VM will allow RDP connections to the machine. The default VNET allows RDP connections to our VM because there are no custom NSG rules to restrict it and we have a Public IP attached to the VM.
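If you do not want RDP to be reachable from the whole internet in this scenario, one option is to tighten the NSG yourself. A minimal Azure CLI 2.0 sketch; the resource group, NSG name, rule name and source IP are placeholders:

# Allow RDP (TCP 3389) only from a known office IP instead of from anywhere
az network nsg rule create --resource-group single-vm-rg --nsg-name single-vm-nsg --name Allow-RDP-From-Office --priority 300 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes 203.0.113.10 --destination-port-ranges 3389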


Use Case 2: Single VM (VM without Public IP inside a default VNET…

Configure Traffic Manager and Web Apps over SSL (HTTPS) using custom domain

In this post we will cover what we should do when we:
- Configure Azure Traffic Manager on top of Web Applications hosted inside App Services
- Over HTTPS
- With a custom domain
- With client certificates

Context
When we use HTTPS in combination with App Services, everything goes smoothly. You just need to activate HTTPS and upload the client certificate, if you want to use a custom one.
Things are a little different when you want to configure HTTPS on top of Traffic Manager. In theory, the steps are clear and it should work as expected, but combine this with a custom domain and client certificates and things can end up with a 404 error code.

Initial Setup
Prerequisites: the Web Apps are configured over SSL using the custom domain and work as expected.
Let's take a look at the basic steps that need to be done when you want such a configuration:

- Create an instance of Traffic Manager inside the Azure Portal and add your Web Apps that you already configured for HTTPS (see the CLI sketch after this list)
- Add your custom domain to your DNS Record – thi…
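The first step can also be done from the command line. A minimal Azure CLI 2.0 sketch; the profile name, resource group, DNS name and Web App name are placeholders:

# Create the Traffic Manager profile (the DNS name must be globally unique)
az network traffic-manager profile create --name my-tm-profile --resource-group my-rg --routing-method Performance --unique-dns-name my-app-tm

# Add an already configured Web App as an endpoint of the profile
az network traffic-manager endpoint create --name webapp-west --profile-name my-tm-profile --resource-group my-rg --type azureEndpoints --target-resource-id $(az webapp show --name my-webapp --resource-group my-rg --query id --output tsv)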

Azure Storage - Archive Tier | Perfect solution for storing audit data

There are two new features of Azure Storage that will make the end of 2017 very interesting.

New Archiving Tier
A new archiving tier is now available for Blob storage. In addition to the Cool and Hot access tiers, we now have the Archive tier. In contrast to the existing ones, it was designed for situations when you need to archive data for long periods.
An interesting fact, in comparison with Cool storage, is related to the SLA. The availability SLA is the same as for Cool storage – 99% – in the context of a tier that is as secure and durable as Cool storage but much more cost efficient. From what I can see, it is more than 5 times cheaper than Cool storage.

The new tier goes hand-in-hand with the current trend of moving existing infrastructures to Azure. In many situations, because of regulatory requirements, you need an archiving solution for audit data.
Audit data needs to be stored for at least 5 years. With the current price of the Hot and Cool tiers, it was hard to do something lik…
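Moving a blob with audit data into the new tier is a one-liner. A minimal Azure CLI 2.0 sketch; the storage account, container and blob names are placeholders, and authentication (account key or connection string) is assumed to be configured:

# Move an existing blob with audit data into the Archive tier
az storage blob set-tier --account-name auditstore --container-name audit-logs --name audit-2017-10.json --tier Archive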

.NET Core or .NET Framework together with Docker and Containers on Azure

In this post, I will try to offer answers and guidance for technical people who need to decide whether they should use .NET Core or .NET Framework inside Docker containers on Azure. The same answer applies on-premises, because Docker runs in the same way on Azure and on-premises.

The direction from Microsoft related to this topic is clear: the default option should be .NET Core. .NET Core is designed in such a way that it aligns with container concepts. For example, its footprint was reduced drastically in comparison with .NET Framework.

One interesting fact that people often do not know is related to the type of Windows image that you need when you use .NET Core or .NET Framework in containers. When you use .NET Framework you need to use a Windows Server Core image, which is heavier than Windows Nano Server. This can have a direct impact on infrastructure and resource requirements. There are many reasons why Windows Nano Server is better, a few th…
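One quick way to see the size difference yourself is to pull one image of each kind and compare them. A sketch only; the repository tags below are illustrative and change over time, so check Docker Hub for the current ones:

# .NET Core runtime on a Windows Nano Server base image (small footprint)
docker pull microsoft/dotnet:2.0-runtime-nanoserver-1709

# ASP.NET / .NET Framework on a Windows Server Core base image (several GB)
docker pull microsoft/aspnet:4.7.1-windowsservercore-1709

# Compare the downloaded image sizes side by side
docker images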

Microsoft Azure MVP 2017-2018

Another year has passed as a Microsoft Azure MVP, and I realize that this is my 6th year as a Microsoft MVP. It is a pleasure to be part of the MVP community, which has extraordinary people who are ready to help and offer support to any community around the world.

I'm honored and excited to be part of this great community for one more year!

IoT offer comparison: AWS vs Azure

Nowadays IoT is appealing to everyone. These opportunities pushed the two biggest cloud providers on the market (Amazon and Microsoft) to come up with IoT platforms and solutions. The main purpose of this article is to compare the current solutions from a features and capabilities perspective.
The interesting thing that happened in the last few years is the way IoT solutions evolved. At the beginning the solutions were oriented around transport and communication, but now the IoT platforms have evolved and are integrated with systems that run on the edge and in the cloud, supporting the business needs.

X-Rays
Let's take a look at the available solutions that Amazon and Microsoft offer. Both providers offer a central hub that is used to establish and facilitate communication between devices and backend systems.
At the device level, each provider offers a set of libraries and packages that allow clients to integrate their devices with the communication platform faster. At ba…

Dynamic update of Azure Web Job time schedule

The topic of this post is simple:
How can I specify the schedule of an Azure Web Job using the application configuration or another location?
Does the web job restart automatically the moment I change the time interval?

I decided to write about this because, even if you can find the solution on the internet, you need to invest some time in searching until you find the right documentation for it (INameResolver).

Short version
A custom INameResolver is defined that, based on a key name, can read the configuration value from any location. In the example below, the value is read from the configuration file. Don't forget to register the name resolver on JobHostConfiguration.
namespace WebJob.Schedule
{
    class Program
    {
        static void Main(string[] args)
        {
            JobHostConfiguration config = new JobHostConfiguration();
            config.NameResolver = new MagicResolver();
            config.UseTimers();

            JobHost host = new JobHost(config);
            host.RunAndBlock();
        }

        private class …
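The snippet is cut off before the resolver itself, so here is a minimal sketch of what it could look like; the class name MagicResolver comes from the code above, while the setting key TimerSchedule and the sample function are illustrative:

using System.Configuration;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Timers;

// Resolves %TimerSchedule% (or any other %key%) to a value read from the configuration file
class MagicResolver : INameResolver
{
    public string Resolve(string name)
    {
        return ConfigurationManager.AppSettings[name];
    }
}

public class Functions
{
    // The schedule is not hard-coded; "%TimerSchedule%" is resolved through MagicResolver
    public static void ProcessTimer([TimerTrigger("%TimerSchedule%")] TimerInfo timer)
    {
        // job logic goes here
    }
}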

Azure Data Lake and Replication mechanism cross Azure Regions

Context
Let's imagine that we are working for an automotive company that collects telemetry data from its cars. Part of this data needs to be processed, stored and managed.
To be able to store the data now and use it later in time, you decided to go with Azure Data Lake, which does not limit how much data you can store and allows you to plug in any kind of processing system.
Requirements
After the architecture audit, because of legal constraints, you are required to have a resiliency policy for disaster recovery. Even though Azure Data Lake keeps 3 copies of the data within the same Azure Region, the legal constraints still require such a policy.
Problem
Azure Data Lake makes 3 copies of the data in the same Azure Region, but there is no built-in support to replicate or back up the content to a different Azure Region. You will need to define your own mechanism for this.

Available Solutions
We can find many ways of doing this. There are 2-3 mechanisms to do replication of Azure …

[Post Event] Event Sourcing and CQRS | ITCamp Community Summer Lunch Event in Cluj

Today we had the first ITCamp Community event held during the lunch break. We decided to run the event at this time of day because it was the only available slot for our special guest, Andrea Saltarello.
The talk was about CQRS and Event Sourcing, and even if it was only one hour long, the session contained a lot of takeaways, not only from a technical perspective, but also from a cost and architecture point of view. A great comparison between different NoSQL and ESB systems was presented from an Event Sourcing point of view.

Almost 30 people decided to transform their lunch into a geek lunch together with the ITCamp Community. This event was possible thanks to the support of our local sponsors.


Below you can find pictures from the event. See you next time!