Improving customer experiences with F5 NGINX and Windows on Azure Kubernetes Service

This post has been republished via RSS; it originally appeared at: Microsoft Tech Community - Latest Blogs.

This blog post has been co-authored by Microsoft and Jason Williams from NGINX.


Kubernetes adoption is growing rapidly as organizations realize the benefits of deploying, running, and managing containerized applications and workloads at scale.

However, organizations might face challenges with security, reliability, observability, and scalability in their Kubernetes environments:

  • Service interruptions in scalable, dynamic environments due to connection timeouts and errors.
  • Increased risk of exposure to cybersecurity threats due to insufficient protection across distributed environments.
  • Outages and troubleshooting complexity due to insufficient visibility into app health and performance.

NGINX Ingress Controller deployed in AKS helps address app connectivity challenges in Kubernetes environments with its enterprise-class availability, security, and visibility features:

  • Ensures availability of business-critical apps with advanced load balancing and connectivity patterns.
  • Improves protection with strong centralized security controls at the edge of the Kubernetes cluster.
  • Reduces outages and simplifies troubleshooting with granular real-time and historical metrics and dashboards.

This blog describes how to implement NGINX Ingress Controller in an AKS environment for Windows workloads.



F5 NGINX Ingress Controller overview

NGINX Ingress Controller simplifies and streamlines app connectivity at the edge of a Kubernetes cluster, providing higher degrees of security, availability, and observability at scale. It offers the following key features and capabilities:


Prevent connection timeouts and errors and avoid downtime when rolling out a new version of an app or during topology changes, extremely high request rates, or service failures.

  • Advanced Layer 7 (HTTP/HTTPS, HTTP/2, gRPC, WebSocket) and Layer 4 (TCP/UDP) load balancing with active health checks
  • Blue-green and canary deployments
  • Rate limiting and circuit breaker connectivity patterns
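As an illustration of the canary pattern above, a VirtualServer route can split traffic between two versions of an application. The sketch below is hypothetical: the host, upstream names, and service names (webapp-v1-svc, webapp-v2-svc) are placeholders, not part of the example used later in this post.

```yaml
# Hypothetical canary rollout: 90% of traffic to v1, 10% to v2.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp-canary
spec:
  host: webapp.example.com   # placeholder hostname
  upstreams:
  - name: webapp-v1
    service: webapp-v1-svc
    port: 80
  - name: webapp-v2
    service: webapp-v2-svc
    port: 80
  routes:
  - path: /
    splits:                  # weighted traffic split across upstreams
    - weight: 90
      action:
        pass: webapp-v1
    - weight: 10
      action:
        pass: webapp-v2
```

Shifting the weights over time (for example 90/10, then 50/50, then 0/100) completes the canary rollout without downtime.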


Ensure holistic app security with user and service identities, authorization, access control, encrypted communications, and Layer 7 app protection.

  • HTTP basic authentication, JSON Web Tokens (JWTs), OpenID Connect (OIDC), and role-based access control (RBAC)
  • End-to-end encryption (SSL/TLS passthrough, TLS termination)
  • OWASP Top 10 and Layer 7 DoS defense through integration with optional NGINX App Protect modules
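For instance, the JWT authorization mentioned above can be configured declaratively with the Policy resource (available with the NGINX Plus-based edition). This is a sketch; the policy name, realm, and Secret name are placeholders.

```yaml
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: jwt-policy
spec:
  jwt:
    realm: MyApp          # realm returned in the WWW-Authenticate header
    secret: jwt-secret    # Kubernetes Secret of type nginx.org/jwk holding the JWK set
```

The policy is then attached to a VirtualServer (or a specific route) via its policies field.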


Gain better insight into app health and performance with more than 200 granular real-time and historical metrics to reduce outages and simplify troubleshooting.

  • Discover problems before they impact your customers
  • Find the root cause of app issues quickly
  • Integrate data collection and representation with ecosystem tools, including OpenTelemetry, Grafana, Prometheus, and Jaeger
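As an example, when installing via the Helm chart, the Prometheus endpoint that exposes these metrics can be enabled in values.yaml (a minimal sketch; defaults may vary by chart version):

```yaml
prometheus:
  create: true   # expose NGINX metrics in Prometheus format
  port: 9113     # port for the /metrics scrape endpoint
```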

In this section, we describe step by step how to deploy NGINX Ingress Controller in a mixed-mode AKS cluster.

NGINX Ingress Controller can be found in the nginxinc/kubernetes-ingress repo on GitHub, and it is developed and maintained by F5 NGINX. It is available in two editions:

  • NGINX Open Source‑based (free and open source option)
  • NGINX Plus-based (commercial option)

For more information, read the blog A Guide to Choosing an Ingress Controller, Part 4: NGINX Ingress Controller Options.


NGINX Ingress Controller + Windows on Azure Kubernetes Service

You can use NGINX Ingress Controller to manage connectivity to your Windows applications running on Windows nodes in a mixed-node AKS cluster. A mixed-mode cluster consists of an AKS cluster with both Linux and Windows nodes. While your application runs on the Windows nodes in Windows containers, the NGINX infrastructure pods run on the Linux nodes. To accomplish this, we are going to take advantage of an existing Kubernetes feature: the nodeSelector field combined with the built-in kubernetes.io/os node label.

This built-in Kubernetes capability allows users to ensure that their deployments will be scheduled on specific nodes, based on the criteria defined in a deployment. In our example, we are going to use the following snippet in our deployment:

 nodeSelector:
   kubernetes.io/os: linux

The above is telling the Kubernetes scheduler: "only deploy our application on nodes that have the label kubernetes.io/os: linux."
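Conversely, the same mechanism pins your Windows workload to the Windows nodes. A sketch of the relevant fragment of an application Deployment spec:

```yaml
# In the Windows application's Deployment pod spec:
nodeSelector:
  kubernetes.io/os: windows   # schedule only on Windows nodes
```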

Manifest example setup

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: nginx-ingress
      automountServiceAccountToken: true


Helm example setup

If you are using Helm for your deployment, one way to pin NGINX Ingress Controller to Linux nodes is to edit values.yaml and add the kubernetes.io/os: linux label under the nodeSelector setting.

Here is a small snippet from the values.yaml file:

## The node selector for pod assignment for the Ingress Controller pods.
nodeSelector: {kubernetes.io/os: linux}

Below is a larger sample of the values.yaml file using the nodeSelector setting.
controller:
  name: controller
  kind: deployment
  annotations: {}
  nginxplus: false
  nginxReloadTimeout: 60000
  hostNetwork: false
  dnsPolicy: ClusterFirst
  nginxDebug: false
  logLevel: 1
  customPorts: []
  image:
    repository: nginx/nginx-ingress
    tag: "3.1.0"
    pullPolicy: IfNotPresent
  config:
    annotations: {}
    entries: {}

  ## Here is where we use the kubernetes.io/os: linux label to ensure NGINX Ingress Controller is scheduled on Linux nodes.
  nodeSelector: {kubernetes.io/os: linux}

With the values.yaml file updated, you can now install NGINX Ingress Controller using helm.

NOTE: The below command assumes you have cloned the NGINX Ingress Controller repo and are in the directory where the values.yaml file is located.

helm install nginx01 . -n nginx-ingress --create-namespace -f values.yaml


Windows workload and NGINX

Once you have NGINX Ingress Controller deployed, you can create resources to route traffic to your Windows-based application. In our example, we are using F5 NGINX Ingress Controller CRDs (custom resource definitions) to create a VirtualServer. Our CRDs allow for powerful Layer 7 routing capabilities to increase productivity and performance.

Below is an example of deploying a VirtualServer resource into the AKS cluster for your Windows application.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com   # placeholder hostname
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp
  - path: /coffee
    action:
      pass: coffee

In this example, we have a single host supporting two different paths, served by two applications.

Requests to the host that match the / path are proxied to the webapp upstream, which connects to the webapp-svc service. Requests that match the /coffee path are proxied to the coffee upstream, which connects to the coffee-svc service.

The VirtualServer and VirtualServerRoute CRDs provide many additional settings that allow you to fine-tune proxied requests to your application.

Some examples include:

  • Advanced header manipulation (request and response)
  • OIDC authentication
  • JWT authorization
  • Connection timeouts
  • Keepalives
  • Maximum connections allowed to an upstream
  • Configurable load-balancing methods
  • mTLS, both ingress and egress, for end-to-end encryption from client to upstream application pod
  • Access control (allow/deny)
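Several of the settings listed above are simple per-upstream fields on the VirtualServer resource. A hedged sketch (the field values shown are illustrative, not recommendations):

```yaml
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
    lb-method: least_conn   # load-balancing method for this upstream
    max-conns: 32           # cap simultaneous connections to the upstream
    keepalive: 16           # idle keepalive connections cached per worker
    connect-timeout: 30s    # timeout for establishing an upstream connection
    read-timeout: 30s       # timeout for reading the upstream response
```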

Additional information can be found on our GitHub repo.



Deployed at the edge of a Kubernetes cluster in AKS, NGINX Ingress Controller helps improve customer experiences with reduced complexity, increased uptime, and detailed real-time visibility at scale. These capabilities are available for both Windows and Linux workloads. For more information, visit the NGINX Ingress Controller product page.
