What’s Cool in Azure — New Year’s edition

Radosław Wiankowski
5 min read · Feb 5, 2020

The trip to Orlando to attend Ignite in early November was an introduction to a genuinely hectic period for me. The last two months of the year usually tend to be very busy in our industry as customers aim to finalise projects before the holiday season begins, so I’m pretty sure that many of you were also rather busy.

After that, I took some much needed time offline, so as a result, I wasn’t able to deliver my monthly subjective compilation of what is new and noteworthy in the Azure space for two months in a row.

With that said, I’m happy to present you with a special “New Year’s Edition”, in which I go over a few topics inspired by the announcements from Ignite and cover exciting news from both December 2019 and January 2020.

Spot instances of Virtual Machines are now in preview

At the beginning of December, Microsoft finally delivered a long-awaited feature — spot instances for Virtual Machines. The feature has been available in AWS for a long time now, and many users had a hard time understanding why Azure took so long to catch up.

Spot instances allow users to purchase excess compute capacity at a discounted price, but at the risk of the workload being interrupted if the provider needs to reclaim the capacity. Also, Microsoft does not provide an SLA for those machines. For certain applications, such risk is acceptable, and those organisations can expect some exciting discounts.

Market conditions govern spot instances, so the price is calculated dynamically based on availability and demand. Users can configure a maximum price which they are willing to pay for a given machine, and if that price is exceeded, the VM will be deallocated. We are supposed to get a thirty-second notice if that is about to happen, so I imagine it would be possible to use EventGrid to build some additional orchestration for applications using spot instances.
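As a rough illustration, the eviction notice surfaces through the Azure Instance Metadata Service “Scheduled Events” endpoint, which a VM can poll from inside the instance. The sketch below only parses a payload shaped like a Scheduled Events response and flags a pending `Preempt` (eviction) event; the sample data and resource names are made up for illustration.

```python
def pending_preempt_events(scheduled_events: dict) -> list:
    """Return resources listed in any scheduled Preempt (eviction) event."""
    resources = []
    for event in scheduled_events.get("Events", []):
        if (event.get("EventType") == "Preempt"
                and event.get("EventStatus") == "Scheduled"):
            resources.extend(event.get("Resources", []))
    return resources

# Hypothetical payload, shaped like a Scheduled Events response. In a real
# VM you would fetch it from the metadata endpoint
# http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01
# with the "Metadata: true" request header.
sample = {
    "DocumentIncarnation": 2,
    "Events": [
        {
            "EventId": "A123BC45-1234-5678-AB90-ABCDEF123456",
            "EventType": "Preempt",
            "ResourceType": "VirtualMachine",
            "Resources": ["spot-vm-01"],
            "EventStatus": "Scheduled",
            "NotBefore": "Wed, 05 Feb 2020 18:29:47 GMT",
        }
    ],
}

print(pending_preempt_events(sample))  # ['spot-vm-01']
```

A watchdog that polls this endpoint could drain the application or publish an event before the thirty-second window closes.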

There are certain limitations, for example regarding the usage of ephemeral disks, but despite being a preview feature, the option is available in most regions globally.

For more information, refer to the official documentation:

Azure Private Link support for AKS and more

Azure Private Link, a service which I am very enthusiastic about, now supports additional PaaS offerings.

Adding to the existing portfolio of supported Azure Database flavours, we can now, in preview, connect to MySQL, PostgreSQL and MariaDB instances using a Private Endpoint. With that, almost all managed database services can take advantage of increased network security.

What is even more exciting is support for Azure Kubernetes Service, also in preview. Using Azure Private Link, we can create what is called a Private Cluster. In practice, this means that the control plane of the cluster is no longer exposed publicly; it only has a private, internal IP address.

As a result, all management traffic and all data exchange between the API server and worker nodes will only flow over the internal network. For larger organisations with rigorous security policies, Private Link opens up the discussion about using AKS in their enterprise landscapes. As such, this is a very significant development.

As a natural consequence, all management has to be performed from a virtual machine connected to the virtual network, which can reach the Private Endpoint of the cluster. Such an architecture, however, is already common practice, especially within organisations which will look to deploy private clusters.

As with every preview service, there are certain limitations, which might bring your enthusiasm a bit down. The ones which I find notable are:

  • Availability Zones are not supported
  • Azure DevOps integration is not supported
  • Existing clusters cannot be converted into private ones

As with many preview features, to use Private Clusters, you will have to register a preview feature within your subscription. One thing to keep in mind is that once the registration is done, you cannot un-register the feature, so I do recommend testing the functionality in a separate subscription.

Also, the list of supported regions is still rather short, but most of the usual suspects are included.

All prerequisites, instructions and limitations are described in detail here:

Proximity Placement Groups

For those who are ever in pursuit of better performance, the announcement of proximity placement groups reaching general availability was very exciting. This feature will prove its value, especially when deploying applications for which every millisecond of network latency matters. ERP applications like SAP are a good example.

A Proximity Placement Group makes sure that our Virtual Machines get deployed physically as close to each other as possible. This way, the network latency between them will be as low as achievable.

Without proximity groups, individual machines from VM scale sets or availability sets can not only land on opposite sides of a datacentre but even in different datacentres altogether. Historically, we would use unofficial tricks to increase the chances that Virtual Machines get deployed close to each other, but we had no way of guaranteeing that it would actually happen.

Currently, co-location is available only for IaaS workloads, but the team behind Azure Compute has PaaS offerings on their roadmap.

Using this feature requires a certain degree of planning and awareness regarding the availability of different VM SKUs across regions. To make the experience as stress-free as possible, Microsoft recommends that users start with the most exotic instance configurations. Those may be available only in certain datacentres, so if a proximity placement group were created in a location which does not support a specific SKU, the deployment would fail.
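The recommendation boils down to deploying the scarcest SKU first, so that the group gets pinned to a datacentre which can host everything. A minimal sketch of that ordering logic, using entirely hypothetical availability data (Azure does not expose per-datacentre SKU support like this):

```python
def deployment_order(sku_support: dict) -> list:
    """Order SKUs from least to most widely available, so the most
    constrained (exotic) SKU anchors the proximity placement group first."""
    return sorted(sku_support, key=lambda sku: len(sku_support[sku]))

# Hypothetical availability map for illustration only.
availability = {
    "Standard_D2s_v3": {"dc1", "dc2", "dc3", "dc4"},  # widely available
    "Standard_M128ms": {"dc2"},                        # exotic SKU
    "Standard_L32s_v2": {"dc2", "dc3"},
}

print(deployment_order(availability))
# ['Standard_M128ms', 'Standard_L32s_v2', 'Standard_D2s_v3']
```

Deploying in this order means that if the exotic SKU cannot be placed, the deployment fails immediately, before any of the easier machines have been created.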

To read more about the recommendations for Proximity Placement Groups, I recommend the following blog post from Microsoft:

To dive deeper please refer to the official documentation:
