What’s Cool in Azure — April 2020

Radosław Wiankowski
May 13, 2020 · 10 min read

April was the first full month that most of the world spent in lockdown caused by the COVID-19 outbreak. With numerous organisations from the IT industry shifting to a work-from-home mode, many of us had already established some routine. It was also already clear that having to work, and in many cases home-school kids, while confined to the walls of our homes meant reduced productivity.
With that in mind, I find it amazing that Microsoft can maintain a consistently quick pace when it comes to Azure. This month we had around seventy-five official updates, which seems to be on par with the average from previous months. Naturally, we should expect delays for some significant features, but we still have plenty of news to be excited about.

Updates to Managed Disks

In April, especially in the early days, Microsoft released several very welcome updates to Managed Disks. As much as we love to aim for serverless, most applications running in the cloud today are still based on Virtual Machines. All of the features described below are now generally available.

Customer Managed Keys for Server-Side Encryption

We start with support for Customer Managed Keys for Server-Side Encryption (SSE). Managed Disks have always been encrypted at rest with SSE, however, until now, only with platform-managed keys. (This is distinct from Azure Disk Encryption, which encrypts data inside the guest OS.) With the latest update, customers can bring their own keys into a Key Vault (or generate new ones there) and configure SSE to use those.

The update should be especially interesting for organisations operating in highly regulated industries and markets, which impose rigorous compliance requirements.

From the availability perspective, Customer Managed Keys are already available in the Azure Government and Azure Public clouds, with the Azure Germany and Azure China sovereign ones to follow soon. On the technical side, we can use the feature with Standard HDD, Standard SSD and Premium SSD Managed Disks.

Getting started is, however, still a bit complex and requires operators to use the command line. You can find the detailed steps, as well as additional information, in the following blog post:
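In the meantime, the rough flow with the Azure CLI looks something like the sketch below; all names and the key URL are placeholders, so double-check the parameters against the official guidance before running anything:

    # Create a disk encryption set pointing at a customer-managed key in Key Vault
    az disk-encryption-set create \
      --resource-group myRG \
      --name myEncryptionSet \
      --source-vault myVault \
      --key-url "https://myvault.vault.azure.net/keys/myKey/<key-version>"

    # Grant the encryption set's managed identity access to the vault keys
    desIdentity=$(az disk-encryption-set show \
      --resource-group myRG --name myEncryptionSet \
      --query identity.principalId -o tsv)
    az keyvault set-policy \
      --name myVault \
      --object-id "$desIdentity" \
      --key-permissions wrapKey unwrapKey get

    # Create a Managed Disk encrypted with the customer-managed key
    az disk create \
      --resource-group myRG \
      --name myEncryptedDisk \
      --size-gb 128 \
      --sku Premium_LRS \
      --disk-encryption-set myEncryptionSet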

New sizes and performance improvements

We also got new SKUs for Premium SSD disks — P1, P2 and P3, which represent the smallest drives in the family, ranging between 4 GB and 16 GB. On top of that, all disks below P10 received a performance boost to 500 IOPS and 60 MB/s bandwidth, thus matching the Standard HDD specifications.
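Creating one of the new sizes works just like any other Managed Disk; as a quick sketch with the Azure CLI (names are placeholders):

    # Create a 4 GB Premium SSD, which lands in the new P1 tier
    az disk create \
      --resource-group myRG \
      --name myTinyDisk \
      --sku Premium_LRS \
      --size-gb 4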

What is also exciting is that all Premium SSD disks below P20 now support bursting, allowing VMs to go beyond their prescribed IOPS and bandwidth limits for a brief time. According to the documentation, these disks can burst up to 3,500 IOPS and 170 MB/s for up to 30 minutes at a time. Bursting is based on a credit system in which credits accumulate when the consumed performance is below the prescribed thresholds. When a machine accumulates sufficient credits, it can burst and provide additional IO or bandwidth.

What is nice about the system is that a Virtual Machine starts with a full stock of credits. This setup allows the VM to burst during boot, thus significantly shortening the start-up time. Then, when the machine accrues enough credits, it can use them to support workloads with spiky IO.

Bursting is enabled by default on all new disks and can be easily enabled for existing deployments. To find out how to do that, and read more on the topic, please head over to the official documentation:

Incremental Snapshots

Another exciting update is the availability of incremental snapshots. Although it seems to be aimed primarily at partners offering third-party backup or disaster recovery solutions, it can also be a welcome addition for organisations which frequently use snapshots.

As one of the oldest adages in IT goes, “snapshots do not replace a good backup solution”, but they do enable building one. They can also provide a quick rollback path for changes to production systems.

From a technical perspective, incremental snapshots are self-explanatory. Whenever the operator creates a subsequent snapshot, it includes only the delta of changes made since the previous snapshot. Previously, every snapshot included the entire contents of the disk. The new solution can, therefore, bring significant cost reductions.
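As a sketch with the Azure CLI (names are placeholders), creating an incremental snapshot is a matter of a single extra flag:

    # Create an incremental snapshot of a Managed Disk
    az snapshot create \
      --resource-group myRG \
      --name mySnapshot \
      --source myDisk \
      --incremental true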

If you’d like to learn more, please follow the blog post from Microsoft:

Direct VHD upload

Direct Upload is another simple yet powerful feature. Previously, to upload a virtual hard drive to Azure, customers had to use a Storage Account or a Virtual Machine as an intermediary. Both methods worked but had their drawbacks. Now, it is possible to upload a VHD file directly into Azure without any intermediaries.

The process is as simple as creating a new, empty Managed Disk and uploading the vhd file into that disk. It can be executed via the command line using AzCopy or via a graphical user interface with Azure Storage Explorer.
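The command-line variant might look like the following sketch; names and sizes are placeholders. Note that the upload size must match the size of the VHD file, including its 512-byte footer:

    # Create an empty Managed Disk in the upload state (32 GiB VHD + 512-byte footer)
    az disk create \
      --resource-group myRG \
      --name myUploadDisk \
      --for-upload \
      --upload-size-bytes 34359738880 \
      --sku standard_lrs

    # Obtain a writable SAS for the disk
    sas=$(az disk grant-access \
      --resource-group myRG --name myUploadDisk \
      --duration-in-seconds 86400 --access-level Write \
      --query accessSas -o tsv)

    # Copy the local VHD straight into the disk, then revoke access
    azcopy copy "./my-image.vhd" "$sas" --blob-type PageBlob
    az disk revoke-access --resource-group myRG --name myUploadDisk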

Direct Upload not only supports moving data from on-premises into Azure but also copying disks across Azure regions.

IPv6 is available

The adoption of IPv6 has been progressing very slowly over the last two decades. It is, however, as we all know, inevitable, and at the time of writing, according to Google’s statistics, over 30% of users already reach Google over IPv6.
With that in mind, it is about time that Azure Virtual Networks started supporting IPv6 in general availability, that is, in production deployments.

Using IPv6 with Azure VNETs is so simple that it might even be easy to miss. Every virtual network supports dual-stack connectivity, which means that customers can have both IPv4 and IPv6 next to each other within a single VNET. In practice, we add another (v6) address space in the configuration of the network, next to the existing (v4) one. After that, we can configure an IPv6 address space for different subnets, and that is it: both Windows and Linux VMs will be able to use the new stack.

Adding an IPv6 address space to the VNET
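The same can be achieved from the command line; a sketch with the Azure CLI, using example ranges from the documentation (names are placeholders):

    # Create a dual-stack VNET with IPv4 and IPv6 address spaces and a dual-stack subnet
    az network vnet create \
      --resource-group myRG \
      --name myDualStackVnet \
      --address-prefixes 10.0.0.0/16 ace:cab:deca::/48 \
      --subnet-name mySubnet \
      --subnet-prefixes 10.0.0.0/24 ace:cab:deca:deed::/64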

The usual suspects, such as Network Security Groups, Load Balancers and DDoS protection, already support IPv6. Support for ExpressRoute, VPN Gateways and PaaS components is still on the roadmap, however.

IPv6 is generally available in all Azure Public and Azure Government (US) regions. The documentation, however, does not mention the sovereign (Germany and China) clouds.

To learn more, follow the official sources:

Automatic instance repair for Virtual Machine Scale Sets

Virtual Machine Scale Sets (VMSS) are an example of an immensely powerful feature, which in my view is still massively underutilised. I suspect that there are numerous reasons for such a situation, but the complexity and up-front investments are the first ones that come to mind.

Whether we choose to create custom images or to deliver the configuration programmatically, we are potentially looking at a noticeable investment. Whichever way we go, we can expect the need for expertise and human resources.

As of this month, however, we have yet another reason to make that initial investment and adopt VM Scale Sets — the service now supports automatic instance repair.

When enabled, the feature monitors the application health status emitted by VMSS nodes; should a node report as unhealthy, the platform removes it and recreates it using the latest scale set model.

Application health can be monitored using one of two options:

  • Application Health Extension
  • Load Balancer Health probes

It is crucial to keep in mind that only one of the health monitoring mechanisms can be enabled at a time. There are also several additional limitations and prerequisites; the requirement to use a single placement group is one example.

Using automatic instance repair for VM Scale Sets does, therefore, require planning and proper configuration, so be sure to get well acquainted with the documentation before enabling it. Otherwise, you might experience unexpected behaviour.
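For reference, enabling the feature on an existing scale set might look like this sketch with the Azure CLI (names are placeholders; the grace period gives freshly provisioned instances time to report healthy before repairs kick in):

    # Enable automatic instance repairs with a 30-minute grace period
    az vmss update \
      --resource-group myRG \
      --name myScaleSet \
      --enable-automatic-repairs true \
      --automatic-repairs-grace-period 30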

YAML Releases in Azure Pipelines

If you love Azure YAML Pipelines as much as we do, you’ve probably been using them for a while now, and not only for CI but also for CD. We have been running deployments using the YAML version, and despite some missing features, we love it. If, however, you’ve been holding out because you didn’t want to rely on a feature which was still in preview, we’ve got good news for you — as of this month, the CD features are generally available.

To use a deployment in a YAML pipeline, all you need to do is use “- deployment:” instead of “- job:” under “jobs:”. Then you specify an environment to deploy to and provide the steps under “strategy:”.

An example pipeline using deployments
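Here is a minimal sketch of such a pipeline; the job name, environment and steps are illustrative:

    jobs:
    - deployment: DeployWeb            # a deployment job instead of a regular job
      displayName: Deploy the web app
      environment: production          # the Azure DevOps environment to target
      pool:
        vmImage: ubuntu-latest
      strategy:
        runOnce:                       # the simplest deployment strategy
          deploy:
            steps:
            - script: echo Deploying!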

The example shown above is, of course, just a taste of the possibilities. The full schema is provided along with a reasonable explanation in the official documentation:

What we find extremely valuable when using the Azure Pipelines YAML CD features are environments. Whenever you deploy to an environment, it gets created in Azure DevOps, and from that point on you’ll be able to track the deployment history, set permissions, run checks, and require approvals.

Overview of an environment in Azure DevOps

You can also link Azure resources to the environment; however, currently only Virtual Machines and Kubernetes namespaces are supported. App Service, as we’ve been told, is on the roadmap.

To learn more about these fantastic new features, check out the official announcement from the team which develops Azure DevOps:

Use VM Scale Sets for Azure DevOps build agents

Another extremely exciting update for Azure DevOps is the possibility of using Virtual Machine Scale Sets as build agents.

Azure DevOps provides all customers with Microsoft-hosted build agents, with a free tier allowing a single concurrent job and eighteen hundred minutes of compute time per month. If this isn’t sufficient, customers can purchase additional concurrent jobs at attractive rates. Those cloud-hosted agents are great for many cases, but when we work with enterprises operating in highly regulated industries, we usually want to opt for self-hosted build agents.

In such a case, we become responsible not only for managing the hardware and software, but we must also cover the costs of the Virtual Machines which host those agents. To manage those costs responsibly, while still giving the teams the compute power they need to run their pipelines, we had to come up with creative ways of orchestrating our farms of self-hosted agents.

With the new feature, we no longer have to worry about it. Azure DevOps takes care of agent scaling for us. We still have to create the Virtual Machine images which include all the software and tooling required by the teams, but now, instead of creating individual VMs, we can use such an image to deploy a Virtual Machine Scale Set. The scale set is then registered with Azure DevOps, and that is it. The platform will take care of starting, scaling, and recycling instances in the set to meet the demand created by pipeline runs.
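Creating a scale set fit for this purpose might, as a sketch with the Azure CLI, look like the following; names and sizes are placeholders, and the documentation recommends disabling overprovisioning, using a manual upgrade policy and skipping the load balancer:

    # Create a scale set for Azure DevOps to manage as an elastic agent pool
    az vmss create \
      --resource-group myRG \
      --name myAgentPool \
      --image UbuntuLTS \
      --vm-sku Standard_D2s_v3 \
      --instance-count 2 \
      --disable-overprovision \
      --upgrade-policy-mode manual \
      --load-balancer '""'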

Settings for VM Scale Set hosted build agents in Azure DevOps

With just a few parameters, we can configure the desired behaviour. What is nice about it is that we can set the number of standby agents to zero, which means we will not incur any compute costs when no pipelines are running. The downside, in that situation, is a wait time of around five minutes before an agent is provisioned in the scale set.

The feature is still in preview, but at the time of writing it should have already been rolled out to all customer rings. We’re already using it and we’re incredibly happy with the results.

To learn more, please head over to the official documentation:
