Networking in Red Hat OpenShift for Windows


First published on TECHNET on Dec 06, 2018
Hello again,

Today we will be drilling into a more complex topic following the introduction to Red Hat OpenShift for Windows on premises two weeks ago. We will expand into the networking layer of the architecture that we have chosen for the current developer previews.

You may ask yourself "Why do I care about how networking works?"
The obvious answer would be "Without it your container cannot listen or talk much to others."
What do I mean by that? Networking is the backbone of any IT infrastructure, and container deployments are no different. The various networking components allow containers, pods, nodes, and clusters to communicate with each other and with the outside world.

As a DevOps engineer you will need a core understanding of the networking pieces deployed in your container infrastructure and how they interact, whether it runs on bare metal, on VMs on a virtualization host, or in one of the many cloud services, so you can tailor the network setup to your needs.

Terminology


First, let's cover a few buzzwords, TLAs (three-letter acronyms) and other complex things so we are all on the same page.
CNI: Container Networking Interface, a specification of a standardized interface defining the container endpoint and its interaction with the node the container runs on.
Docker: A popular container runtime.
vSwitch: Virtual Switch, the central component in container networking. Every container host has one. It provides the basic connectivity for each container endpoint. On the Linux side it is roughly comparable to a Linux bridge.
NAT: Network Address Translation. A way to isolate private IP address spaces across multiple hosts and nodes behind a public IP address space.
Pod: The smallest atomic unit in a Kubernetes cluster. A pod can host one or more containers. All containers in a pod share the same IP address.
Node: An infrastructure component hosting one or more pods.
Cluster: An infrastructure component comprised of multiple nodes.
HNS: Host Network Service, a Windows component interacting with the networking aspects of the Windows container infrastructure.
HCS: Host Compute Service, a Windows component supporting the interactions of the container runtime with the rest of the operating system.
OVN: Open Virtual Network. OVN provides network virtualization to containers. In "overlay" mode, OVN can create a logical network amongst containers running on multiple hosts. In this mode, OVN programs the Open vSwitch instances running inside your hosts. These hosts can be bare-metal machines or vanilla VMs. OVN uses two data stores, the Northbound (OVN-NB) and the Southbound (OVN-SB) data store (see the command-line sketch after this list).
ovn-northbound

  • OpenStack/CMS integration point

  • High-level, desired state

    • Logical ports -> logical switches -> logical routers




ovn-southbound

  • Run-time state

  • Location of logical ports

  • Location of physical endpoints

  • Logical pipeline generated based on configured and run-time state


OVS: Open Virtual Switch. Open vSwitch is well suited to function as a virtual switch in VM environments. In addition to exposing standard control and visibility interfaces to the virtual networking layer, it was designed to support distribution across multiple physical servers.
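
To make the northbound/southbound split above more concrete, here is a minimal command-line sketch of how a logical topology is expressed in OVN and how the resulting Open vSwitch state can be inspected. The switch name, port name, and addresses are made up for illustration; in the developer preview the OVN Kubernetes integration creates this topology for you, so you would normally only use these tools to look at it.

# Desired state goes into the northbound database: a logical switch with one logical port.
ovn-nbctl ls-add pod-net                      # hypothetical logical switch
ovn-nbctl lsp-add pod-net pod1-port           # hypothetical logical port for a pod endpoint
ovn-nbctl lsp-set-addresses pod1-port "00:00:00:00:00:01 10.128.0.5"

# ovn-northd compiles that configuration into run-time state in the southbound database.
ovn-sbctl show

# On each host, OVN programs the local Open vSwitch instance; its bridges and ports can be listed with:
ovs-vsctl show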

Here is how all these components fit into the architecture on the Windows worker node. I will talk more about them throughout the post.

[Figure: OpenShift for Windows Networking components]

OK, now that we are on the same page let's dive in.

Setup


To recap from the last post, we will have a Linux Red Hat OpenShift Master node which also serves as the Kubernetes Master and a Windows Server Core Worker node which is joined to the Master. The deployment will also use the Docker container runtime on both the Linux and the Windows Node to instantiate and execute the containers.
You can deploy the nodes on a single VM host, across multiple VM hosts, or on bare metal, and you can also deploy more than two nodes in this environment. For the purpose of this discussion we have deployed a separate VM host and will use it to host both the Linux and the Windows node.
Next, let's dig into the networking: how the networks are created and how the traffic flows.

Networking Architecture


The image below shows the networking architecture in more detail, zooming into the picture above for both the Linux node and the Windows node.
Looking at the diagram we can see that there are several components making up the networking layer.

[Figure: OpenShift for Windows Networking Architecture]

The components fall into three groups:

  • Parts which are Open Source components (light orange).

  • Parts which are in the core Windows Operating System (bright blue).

  • Parts which are Open Source and to which Microsoft made specific changes that were shared with the community (light blue).


On the Linux side, the Open Source components are the container runtime, such as the Docker Engine, and the Kubernetes components such as:

  • kube-proxy - the Kubernetes network proxy, which runs on each node and reflects the services defined in the Kubernetes API, forwarding traffic across a set of backends.

  • kubelet - the primary “node agent” that runs on each node. The kubelet works by reading a PodSpec, a YAML or JSON document that describes a pod (see the example after this list).

  • To find out more about Kubernetes components on Linux, check the Kubernetes documentation.
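
To make the kubelet's job a bit more tangible, here is a minimal, hypothetical PodSpec applied from the master with kubectl. The pod name, image, and command are placeholders and not taken from the preview deployment; the node selector simply steers the pod to a Windows worker, and the exact OS label name can vary with the Kubernetes version (older clusters use beta.kubernetes.io/os).

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: win-webserver                # hypothetical pod name
spec:
  nodeSelector:
    kubernetes.io/os: windows        # schedule onto the Windows worker node
  containers:
  - name: servercore
    image: mcr.microsoft.com/windows/servercore:ltsc2019   # example Windows base image
    command: ["powershell.exe", "-Command", "Start-Sleep -Seconds 3600"]
EOF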


On the Windows side, some of these components, like the kube-proxy and the kubelet, have been enhanced by Microsoft to work with the Microsoft networking components such as the Host Compute Service (HCS) and the Host Network Service (HNS). These changes allow interoperability with Windows core services and also abstract the differences in the underlying architecture.

One of the differences between Linux Nodes and Windows Nodes in this system is the way the nodes are joined to the Kubernetes cluster. In Linux you would use a command like
kubeadm join 10.127.132.215:6443 --token <token> --discovery-token-ca-cert-hash <cert hash>

On Windows, where the kubeadm command is not available, the join is handled by the Host Compute Service when the resource is created.
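
For the Linux-style join above you need a valid bootstrap token and the discovery CA certificate hash. Here is a minimal sketch, assuming a standard kubeadm-initialized master, of how those values are typically obtained; your token, hash, and addresses will of course differ.

# On the Linux master: print a ready-to-use join command, including a fresh token and the CA cert hash.
kubeadm token create --print-join-command

# Or compute the discovery CA certificate hash by hand from the cluster CA certificate.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'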

The key takeaway of the discussion here is that overall the underlying architectural differences between Linux and Windows are abstracted, and the process of setting up Kubernetes for Windows and managing the networking components of the environment is going to be straightforward and mostly familiar if you have done it on Linux before.
Also, since Red Hat OpenShift calls into Kubernetes, the administrative experience will be uniform across Windows and Linux Nodes.
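
For example, once the Windows worker has joined, the same client commands you would use against a Linux-only cluster list both node types side by side. These are only illustrative commands; node names and status will depend on your deployment.

# List all nodes in the cluster, Linux and Windows alike.
oc get nodes -o wide

# The equivalent kubectl call works the same way; the operating system is exposed as a node label.
kubectl get nodes --show-labels
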
That being said, what we are discussing today is the architecture of the currently available developer preview. Microsoft and Red Hat are working to integrate the Windows CNI into the flow to replace OVN/OVS. We will keep the support for OVN/OVS and also add other CNI plugins as we progress, but will switch to the Windows CNI during the first half of 2019. So be on the lookout for an update on that.

To say it with a famous cartoon character of my childhood "That's all folks!"

Thanks for reading this far and see you next time.

Mike Kostersitz

P.S.: If this post was too basic or too high-level, stay tuned for a deeper dive into Windows Container Networking Architecture and troubleshooting common issues, coming soon to this blog near you.

Editor's Note: Fixed a typo.
