What’s Cool in Azure — Summer 2020

Radosław Wiankowski
9 min read · Sep 17, 2020

With the holiday season upon us, the pace of updates has slowed down. And given that even the most dutiful authors of monthly update compilations need some time off, I've decided to temporarily step away from the monthly blogging cadence. Not to mention that with beautiful, sunny weather, many of us prefer to spend our time doing something other than reading about cloud computing.

As a result, I am happy to present this compilation of important announcements from the Azure ecosystem, covering the period from June to August.

Changes to Microsoft Ignite

We've known for a while that, like the Build conference, this year's Ignite will be an online, digital-only experience. Sadly, no trip to New Orleans this year. Recently, however, Microsoft shared additional details.
Instead of the usual four-day conference, we're getting two events, each a continuous 48-hour stream of sessions. The first is scheduled for September 22nd to 24th. The second is planned for early 2021, but we don't know the exact dates yet.

Had this been an on-site, in-person event, I would have preferred going through the travel arrangements only once. But given that we'll be able to enjoy Ignite from the comfort of our homes, I firmly believe the split was a great decision. Not only is it easier to fit a two-day event into the calendar, but we also get another chance to learn about the latest and greatest in Azure. Isn't that amazing?


Customer-initiated failover for Azure Storage Accounts is now generally available

Geo-redundancy has been available to us for many years as a way of providing high availability and durability for data stored in Azure Storage Accounts. When you select the (RA)-GRS or (RA)-GZRS redundancy option, data is asynchronously copied from the primary region, where the account is deployed, to the secondary region.

Azure regions are grouped into geographies, and Microsoft aims to have at least two regions, hundreds of miles apart from each other, in every geography. This way, should a catastrophic event render one region offline, the workload can fail over to the other. Until now, however, it was Microsoft that decided when a failover should occur, and they have always taken that decision very seriously.

The reason for this caution is the asynchronous nature of the copy operation that replicates data to the secondary location. In the event of an outage, some data written to the primary region may not yet have been copied to the secondary one, and that data would be lost during a failover.

Microsoft’s approach always prioritised data durability over service availability. That view, however, wasn’t shared by many of its customers.

Some organisations can tolerate data loss if it means restoring service availability as quickly as possible. Those customers can now decide for themselves when a failover should occur and manually initiate the action.

When the action is triggered, the DNS records for all Storage Account services (blob, file, table, queue) are updated to point at the secondary replica instead of the primary one. The secondary effectively becomes the new primary and is configured as a locally redundant (LRS) storage account.
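To make the mechanics more concrete, here is a minimal sketch in Python of checking the replication lag and then triggering a failover. It assumes the azure-identity, azure-mgmt-storage and azure-storage-blob packages; the subscription ID, resource group, account name and connection string are placeholders, and the exact method name (begin_failover vs. failover) as well as the shape of the service-stats response depend on the SDK version you have installed.

```python
# Sketch only: estimate potential data loss, then trigger a customer-initiated
# failover. Resource names and the connection string below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.storage.blob import BlobServiceClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "rg-demo"                                 # hypothetical
ACCOUNT_NAME = "stdemofailover"                            # hypothetical

# 1. Check how far the secondary is behind (requires RA-GRS / RA-GZRS).
#    Anything written after the last sync time may be lost if you fail over now.
blob_service = BlobServiceClient.from_connection_string("<connection-string>")
stats = blob_service.get_service_stats()
print("Last sync time:", stats["geo_replication"]["last_sync_time"])

# 2. Initiate the failover. This is a long-running operation, and the account
#    ends up as an LRS account in the former secondary region.
mgmt = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
poller = mgmt.storage_accounts.begin_failover(RESOURCE_GROUP, ACCOUNT_NAME)
poller.wait()
print("Failover completed.")
```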

To learn more about the topic, please refer to the official documentation:

Connectivity features for Linux-based App Services

Azure App Service is one of our favourite ways of running applications in the cloud. It provides scaling and load balancing, integrates nicely with other Azure services, and, what I love most of all, it allows us to avoid using Virtual Machines. As great as VMs are for specific use cases, they introduce significant management overhead, so whenever possible, I'm happy to take a pass.

One of the features that make App Service a secure choice is the so-called "Regional VNET Integration". When enabled, it forces all outbound traffic through a dedicated subnet of your VNET, also referred to as the integration subnet, making that traffic subject to any Network Security Groups (NSGs) and User Defined Routes (UDRs) you have configured. The app can then use the private network connection to access other VNET-integrated services and communicate with endpoints hosted on-premises (via a VPN or an ExpressRoute connection). UDR support also allows us to force-tunnel all egress traffic through a network appliance such as Azure Firewall, providing a mechanism for secure Internet access.

There are, however, a few things to keep in mind when using Regional VNET integration:

  • It is used only for outbound traffic. Inbound traffic is managed separately — via Access Restrictions.
  • A given subnet can only be used by a single App Service Plan. All apps sharing the plan will share the subnet. The subnet has to be delegated to Microsoft.Web/serverfarms.
  • It is not available for the Free and Shared SKUs of the App Service Plan.
  • To route all outbound traffic (not just private-range addresses) through the VNET, you will have to add the WEBSITE_VNET_ROUTE_ALL setting to your application's configuration (for more info see the docs linked below; a minimal sketch follows this list).
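As a rough sketch only, adding that setting with the Python management SDK could look like the snippet below. It assumes the azure-identity and azure-mgmt-web packages; the resource names are made up, and the list_application_settings / update_application_settings calls are how I recall the SDK surface, so double-check them against the version you use (the portal or Azure CLI work just as well).

```python
# Sketch: add WEBSITE_VNET_ROUTE_ALL to an App Service's app settings.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "rg-demo"                                 # hypothetical
APP_NAME = "webapp-demo-linux"                             # hypothetical

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# App settings are replaced as a whole, so fetch the current set first,
# add the routing flag, and write the full set back.
settings = client.web_apps.list_application_settings(RESOURCE_GROUP, APP_NAME)
settings.properties["WEBSITE_VNET_ROUTE_ALL"] = "1"
client.web_apps.update_application_settings(RESOURCE_GROUP, APP_NAME, settings)
```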

For those who do not want to integrate their App Services with a VNET but still require secure connectivity to a remote endpoint, there is a feature called Hybrid Connections. It is quick and easy to set up and relies on a relay agent (the Hybrid Connection Manager) installed in the destination environment. The agent establishes a secure (TLS 1.2) outbound connection to the Azure platform, and the application can use that channel to reach the required resources.
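From the application's point of view, nothing special is needed: once a Hybrid Connection has been configured for a given host and port, the app simply opens an ordinary TCP connection to it. The snippet below is an illustration only, with a hypothetical on-premises endpoint (sql01.corp.local on port 1433).

```python
# Illustration only: a Hybrid Connection endpoint is just a host name and port
# configured on the App Service. The endpoint below is hypothetical.
import socket

HOST, PORT = "sql01.corp.local", 1433  # must match the configured Hybrid Connection

with socket.create_connection((HOST, PORT), timeout=10) as conn:
    # The TCP stream is relayed over the agent's outbound TLS channel into the
    # on-premises network; the application code is unaware of the relay.
    print("Connected to", conn.getpeername())
```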

Naturally, certain limitations apply. Most notably:

  • UDP is not supported (which also rules out LDAP, as it can require UDP)
  • Dynamic TCP ports are not supported
  • AD domain-joins and mounting drives are both a no-go

What's amazing is that both features are now generally available for Linux-based web applications.

To learn more, please see the official documentation:

Updates to the Azure Application Gateway service

Azure Application Gateway is a regional layer-seven load balancer which offers several great features. I've used it many times, and since Microsoft launched v2, I've really enjoyed the experience. For some customers, however, it lacked certain vital functionalities.

Recently, however, Microsoft released the preview of two such options:

  • URL rewrite, which makes it possible to rewrite the host name, path and query string of the request URL. The action can be applied to all requests on a listener or conditionally, based on request properties.
  • Support for wildcard characters, which allows the use of the asterisk ('*') and question mark ('?') characters in listener host names, so that a listener accepts any request matching the pattern (see the short illustration after this list).
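To get a feel for what such a pattern accepts, here is a plain-Python illustration using fnmatch. It is not Application Gateway code, and the gateway's exact matching rules may differ in edge cases, but the idea is the same; the host names are made-up examples.

```python
# Illustration of wildcard host-name matching, using Python's fnmatch module.
from fnmatch import fnmatch

pattern = "*.contoso.com"  # '*' matches any run of characters, '?' a single one

for host in ["www.contoso.com", "api.eu.contoso.com", "contoso.com"]:
    verdict = "accepted" if fnmatch(host, pattern) else "rejected"
    print(f"{host:>22} -> {verdict}")
```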

To find out more, please refer to the docs:

Updates to Azure vWAN

Azure Virtual WAN is yet another service which I am very fond of. I’ve covered the topic extensively in a separate blog post, so I will only focus on the recent updates. And those are significant.

Most notably, Azure Firewall Manager, the service that allows us to deploy firewall instances into Virtual WAN hubs, has been promoted to general availability. The lack of production support and an SLA for Firewall Manager was, understandably, a non-negotiable impediment for some of the customers I work with. I'm sure many of them will be happy to see the service coming out of preview.

Next to that, the Virtual WAN itself has received the following, now generally available, updates:

  • Hub-to-hub connectivity, allowing users to create a global mesh network.
  • Up to 50 Gbps of transit connectivity for VNETs connected to the Virtual WAN.
  • VPN and ExpressRoute transit.
  • APIPA support for BGP endpoints in VPN connections.
  • VMware VeloCloud and Cisco Meraki joined the portfolio of partners supporting automated SD-WAN deployments.

Such a set of generally available features makes Azure Virtual WAN a truly great option for secure and automated global any-to-any networks.

Azure Monitor Community Repo

Azure Monitor is a powerful yet complex service. Especially for inexperienced users, it can quickly become intimidating. Some may even say that taking full advantage of all its capabilities warrants a dedicated job role.

Thankfully, there is a new resource out there with the potential to significantly help customers who are still exploring the possibilities of Azure Monitor. The Azure Monitor Community Repository is now available and holds many valuable resources, such as Kusto Query Language (KQL) queries, workbooks and alert configurations. Artefacts are organised by both service and scenario, so hopefully you'll be able to find what you're looking for quickly.

The GitHub repository is still in its infancy but has the potential to become an exchange hub for Azure Monitor users. I think it's definitely worth a bookmark in the browser of your choice.

Versioning for BLOB storage is generally available

Binary Large Object (BLOB) storage is undoubtedly one of the essential services offered by any cloud provider. Azure Storage Accounts have been around for years but, unfortunately, they have also lacked certain features that are now considered industry standard.

Thankfully, one such feature, versioning, has just been released for general availability. When enabled, it maintains a version history of all BLOBs in a given Storage Account, allowing users to restore an earlier version or recover from accidental deletion.

Enabling it is simple and free, but you will naturally pay regular storage costs for every version that is kept. Unfortunately, I haven't been able to find an option that would limit the number of versions held, so be careful: costs can climb very quickly when versioning is enabled for frequently overwritten data.
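As a minimal sketch of what working with versions looks like, assuming the azure-storage-blob package (a release recent enough to support versioning), here is how you could list the versions of blobs and read back an older one. The container and blob names, the version ID and the connection string are all made up, and the exact keyword arguments may vary slightly between SDK versions.

```python
# Sketch: enumerate blob versions and download the content of an older version.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("reports")  # hypothetical container

# List every version kept for blobs in the container.
for blob in container.list_blobs(include=["versions"]):
    marker = "current" if blob.is_current_version else ""
    print(blob.name, blob.version_id, marker)

# Read the content of a specific, older version of a blob.
blob_client = container.get_blob_client("monthly.csv")  # hypothetical blob
old_version = blob_client.download_blob(version_id="2020-08-01T10:00:00.0000000Z")
print(old_version.readall()[:100])
```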

To learn more about the ins and outs of BLOB versioning, please see the official documentation:
