Distributing virtual machines across multiple cluster shared volumes in AKS on Azure Stack HCI


In the July Update of Azure Kubernetes Service (AKS) on Azure Stack HCI, we introduced automatic distribution of virtual machine data across multiple cluster shared volumes, which makes clusters more resilient to shared storage outages. This post covers how this works and why it's important for reliability.

 

Just to recap, AKS-HCI is a turnkey solution that lets administrators easily deploy and manage Kubernetes clusters in datacenters and edge locations, and lets developers run and manage modern applications much as they would on cloud-based Azure Kubernetes Service. The architecture seamlessly supports running virtualized Windows and Linux workloads on top of Azure Stack HCI or Windows Server 2019 Datacenter. It comprises several layers, including a management cluster, a load balancer, workload clusters that run customer workloads, and Cluster Shared Volumes (CSVs), as shown in the image below. For detailed information on each of these layers, visit here.

 

Figure 1: AKS-HCI cluster components.

 

Cluster Shared Volumes allow multiple nodes in a Windows Server failover cluster or Azure Stack HCI to simultaneously have read-write access to the same disk that is provisioned as an NTFS volume. In AKS-HCI, we use CSVs to persist virtual hard disk (VHD/VHDX) files and other configuration files required to run clusters.
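
If you want to see which CSVs a node exposes, the real work is done by the FailoverClusters PowerShell module and its Get-ClusterSharedVolume cmdlet. The short Python sketch below is an illustration only; it assumes it runs on a cluster node where that module is installed and simply shells out to the cmdlet:

```python
# Minimal sketch: list the Cluster Shared Volumes visible on this node.
# Assumes this runs on a cluster node with the FailoverClusters PowerShell module installed.
import subprocess

result = subprocess.run(
    [
        "powershell", "-NoProfile", "-Command",
        "Get-ClusterSharedVolume | Select-Object Name, State | Format-Table -AutoSize",
    ],
    capture_output=True,
    text=True,
    check=True,  # raises if the cmdlet or module is unavailable
)
print(result.stdout)
```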

 

In past releases of AKS-HCI, virtual machine data was saved on a single volume in the system. This architecture created a single point of failure: the volume hosting all VM data, as shown in Figure 2a. In the event of an outage or failure of this volume, the entire cluster would become unreachable, impacting application and pod availability as illustrated in Figure 2b.

 

Figure 2: Virtual machines on a single volume.

 

Starting with the July release, for customers running multiple Cluster Shared Volumes (CSVs) in their Azure Stack HCI clusters, virtual machine data is automatically spread out across all available CSVs by default during a new installation of AKS-HCI. What you will notice is a set of folders prefixed with the name auto-config-container-N created on each cluster shared volume in the system.

 

Figure 3: Sample of an auto-config-container-X folder generated by AKS-HCI deployment.
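
You can list these folders yourself by browsing the volumes on one of the cluster nodes. The minimal Python sketch below assumes the default C:\ClusterStorage mount point and local access to a cluster node:

```python
# Minimal sketch: show the auto-config-container-* folders on each CSV.
# Assumes the default C:\ClusterStorage mount point and that this runs on a cluster node.
from pathlib import Path

CSV_ROOT = Path(r"C:\ClusterStorage")

for volume in sorted(CSV_ROOT.iterdir()):
    if not volume.is_dir():
        continue
    containers = sorted(p.name for p in volume.glob("auto-config-container-*"))
    print(f"{volume.name}: {containers if containers else 'no AKS-HCI folders found'}")
```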

 

Most customers may not have noticed this behavior, as it requires no changes to the cluster creation user experience; it happens behind the scenes during initial cluster installation. Note that for customers running clusters based on the June or earlier releases, an update and a clean installation are required for this functionality to become available.

 

To illustrate how this improves the reliability of the system, assume you have three volumes and deploy a cluster with VM data spread out as illustrated in Figure 4a. In the event of an outage or failure of volume 2, the cluster would still be operational, as workloads would continue running on the VMs hosted by the remaining volumes (Figure 4b).

Figure 4: Virtual machines distributed across multiple cluster shared volumes.
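
The actual placement logic is internal to AKS-HCI, but a toy model makes the benefit easy to see. The sketch below uses a simple round-robin assignment (purely illustrative, not the real algorithm) to show that losing one volume affects only the VMs whose data lives on it:

```python
# Toy model only: illustrates why spreading VM data across volumes limits the
# blast radius of a single-volume failure. Not AKS-HCI's actual placement logic.
from collections import defaultdict

def place_round_robin(vms, volumes):
    """Assign each VM's data to a volume in round-robin order (illustrative)."""
    placement = defaultdict(list)
    for i, vm in enumerate(vms):
        placement[volumes[i % len(volumes)]].append(vm)
    return placement

volumes = ["Volume1", "Volume2", "Volume3"]
placement = place_round_robin([f"vm-{n}" for n in range(6)], volumes)

# Simulate an outage of one volume: only the VMs placed on it are affected.
failed = "Volume2"
affected = placement[failed]
surviving = [vm for vol in volumes if vol != failed for vm in placement[vol]]
print(f"Placement: {dict(placement)}")
print(f"Losing {failed} affects {affected}; {surviving} keep running.")
```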

 

To learn more about high availability in AKS-HCI, please visit our documentation, which covers a range of related topics.

 

Useful links:

Try for free: https://aka.ms/AKS-HCI-Evaluate
Tech Docs: https://aka.ms/AKS-HCI-Docs
Issues and Roadmap: https://github.com/azure/aks-hci
Evaluate on Azure: https://aka.ms/AKS-HCI-EvalOnAzure

 

 

 
