This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Community Hub.
While it's possible to run Kubernetes nodes in either on-demand or spot node pools separately, we can optimize application cost without compromising reliability by placing pods unevenly across spot and on-demand VMs using topology spread constraints. With a baseline number of pods deployed in the on-demand node pool for reliability, we can scale out on the spot node pool at a lower cost as load increases.
In this post, we will go through a step-by-step approach to deploying an application spread unevenly across spot and on-demand VMs.
- Azure Subscription with permissions to create the required resources
- Azure CLI
- kubectl CLI
1. Create a Resource Group and an AKS Cluster
Create a resource group in your preferred Azure location using the Azure CLI as shown below.

```shell
az group create --name CostOptimizedK8sRG --location westeurope --tags 'Reason=Blog'
```
Next, let's create an AKS cluster in the resource group.
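The original commands were not preserved in this republished copy; a minimal sketch of creating the cluster with the Azure CLI might look like the following (the cluster name `CostOptimizedAKS`, node count, and zone list are assumptions, not from the original post):

```shell
# Create an AKS cluster with a default (on-demand) system node pool
# spread across availability zones. Name and sizing are illustrative.
az aks create \
  --resource-group CostOptimizedK8sRG \
  --name CostOptimizedAKS \
  --node-count 3 \
  --zones 1 2 3 \
  --generate-ssh-keys

# Fetch credentials so kubectl can talk to the new cluster
az aks get-credentials --resource-group CostOptimizedK8sRG --name CostOptimizedAKS
```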
2. Create two node pools using spot and OnDemand VMs
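The node pool commands are missing from this copy; a hedged sketch using `az aks nodepool add` follows. The pool names are assumptions, and the `deploy` label values (`spot`, `ondemand`) match the labels referenced later in the topology spread discussion:

```shell
# On-demand (regular priority) node pool, labelled deploy=ondemand
az aks nodepool add \
  --resource-group CostOptimizedK8sRG \
  --cluster-name CostOptimizedAKS \
  --name ondemandpool \
  --node-count 3 \
  --labels deploy=ondemand

# Spot node pool, labelled deploy=spot. --priority Spot requests spot VMs,
# and --spot-max-price -1 caps the price at the current on-demand rate.
az aks nodepool add \
  --resource-group CostOptimizedK8sRG \
  --cluster-name CostOptimizedAKS \
  --name spotpool \
  --priority Spot \
  --eviction-policy Delete \
  --spot-max-price -1 \
  --node-count 3 \
  --labels deploy=spot
```

Note that AKS automatically taints spot nodes with `kubernetes.azure.com/scalesetpriority=spot:NoSchedule`, so pods that should land there need a matching toleration.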
3. Deploy a sample application
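The sample application used in the original post is not preserved here; any simple deployment works for illustration. As an assumed stand-in, `nginx` with 9 replicas (the replica count referenced later) could be deployed like this:

```shell
# Create a sample deployment with 9 replicas; name and image are assumptions
kubectl create deployment sample-app --image=nginx --replicas=9
```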
4. Update the application deployment using topology spread constraints
With `requiredDuringSchedulingIgnoredDuringExecution` we ensure that the pods are placed only on nodes which have `deploy` as the label key and either `spot` or `ondemand` as the value. For `preferredDuringSchedulingIgnoredDuringExecution` we add weights such that spot nodes are preferred over on-demand nodes for pod placement.
We then add `topologySpreadConstraints` with two label selectors. The first uses the `deploy` label as the topology key, sets `maxSkew` to 3, and sets `whenUnsatisfiable` to `DoNotSchedule`, which ensures that no fewer than 3 instances (as we use 9 replicas) will be in a single topology domain (in our case, spot or ondemand). Since nodes with `spot` as the value of the `deploy` label carry the higher weight preference in node affinity, the scheduler will most likely place more pods on the spot node pool than on the on-demand one. The second label selector uses `topology.kubernetes.io/zone` as the topology key to distribute the pods evenly across availability zones; as we set `whenUnsatisfiable` to `ScheduleAnyway`, the scheduler won't enforce this distribution but will attempt it where possible.
The `maxSkew` setting in topology spread constraints is, as the name suggests, the maximum skew allowed, so it's not guaranteed that the maximum number of pods will land in a single topology domain. However, this approach is a good starting point for achieving optimal placement of pods in a cluster with multiple node pools.
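Putting the above together, the deployment spec described might look like the following sketch. The deployment name, image, affinity weights, and the zone constraint's `maxSkew` of 1 are assumptions filled in from the description; the toleration reflects the taint AKS places on spot nodes by default:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 9
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      # Allow scheduling onto AKS spot nodes, which carry this taint by default
      tolerations:
        - key: kubernetes.azure.com/scalesetpriority
          operator: Equal
          value: spot
          effect: NoSchedule
      affinity:
        nodeAffinity:
          # Hard requirement: only nodes labelled deploy=spot or deploy=ondemand
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: deploy
                    operator: In
                    values: [spot, ondemand]
          # Soft preference: favour spot nodes over on-demand nodes
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 90
              preference:
                matchExpressions:
                  - key: deploy
                    operator: In
                    values: [spot]
            - weight: 10
              preference:
                matchExpressions:
                  - key: deploy
                    operator: In
                    values: [ondemand]
      topologySpreadConstraints:
        # Hard constraint: at most a skew of 3 between the spot and ondemand domains
        - maxSkew: 3
          topologyKey: deploy
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: sample-app
        # Soft constraint: spread across availability zones where possible
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: sample-app
      containers:
        - name: sample-app
          image: nginx
```

Apply it with `kubectl apply -f deployment.yaml` and observe the resulting placement with `kubectl get pods -o wide`.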