Network start-up and performance improvements in Windows 10 April 2018 Update and Windows Server, version 1803

This post has been republished via RSS; it originally appeared at: Networking Blog articles.

First published on TECHNET on Apr 27, 2018
Increased container density, faster network endpoint creation time, improvements to NAT network throughput, DNS fixes for Kubernetes, and improved developer features


A lot of enthusiasm and excitement surrounds the highly anticipated quality improvements to the container ecosystem on Windows, all shipping with Windows Server, version 1803 (WS1803) and the Windows 10 April 2018 Update. The range of improvements spans long-awaited networking fixes, enhanced scalability and efficiency of containers, and new features that make the suite of container networking tools offered to developers more comprehensive. Let's explore some of these improvements and uncover how they will make containers on Windows better than ever before!

Improvements to deviceless vNICs


Deviceless vNICs for Windows Server containers remove the overhead of Windows PNP device management, making both endpoint creation and removal significantly faster. Network endpoint creation time in particular can have a notable impact on large-scale deployments, where scaling up and down can add unwanted delay. Windows 10 April 2018 Update and WS1803 achieve better performance than their predecessors, as the data below will show.

WS1803 is Microsoft's best-of-breed release to date in terms of providing a seamless scaling experience to customers who expect things to "just work" in a timely fashion.

To summarize the impact of these improvements:

  • Increased scalability of Windows Server Containers from 50 to 500 containers on one host with linear network endpoint creation cost

  • Decreased Windows Server Container start-up time, with a 30% improvement in network endpoint creation time and a 50% improvement in container deletion time


Before vs. after


As discussed above, container vNIC creation and deletion was one of the bottlenecks identified for the scaling requirements of large enterprises today. In previous Windows releases, with PNP devices required for container instantiation, we saw on average 10 out of 500 container creations fail. Now, with deviceless vNICs, we see no failures across 500 container creations.

See the graphs below for a quick visualization of the trends discussed:

Figure 1 - Container Creation: PNP vs. deviceless vNICs

Figure 2 - Container Deletion: PNP vs. deviceless vNICs

In addition to this, check out the stress test below that captures the new, lightning-fast multi-container deployment creation time!

Stress test: container endpoint creation time



Description
A PowerShell script that creates and starts a specified number of recent microsoft/windowsservercore Windows Server containers (build 10.0.17133.73) on a Windows Server, version 1803 host (build 17133) using the default "NAT" network driver.
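The general shape of such a harness can be sketched as follows. This is an illustrative Python sketch rather than Microsoft's actual PowerShell script; the container command, naming scheme, and timing approach are assumptions, and the docker calls require Docker on the host:

```python
import statistics
import subprocess
import time

# Image from the test description above.
IMAGE = "microsoft/windowsservercore"

def create_container(index):
    """Create and start one container on the default NAT network and return
    the wall-clock seconds 'docker run' took. Endpoint creation happens as
    part of container start, so this over-approximates pure HNS endpoint time."""
    start = time.perf_counter()
    subprocess.run(
        ["docker", "run", "-d", "--name", f"stress-{index}",
         IMAGE, "ping", "-t", "localhost"],  # keep the container alive
        check=True, capture_output=True,
    )
    return time.perf_counter() - start

def run_stress_test(count):
    """Create 'count' containers and return the per-container timings."""
    return [create_container(i) for i in range(count)]

def average_ms(samples_ms):
    """Average a list of per-container timings expressed in milliseconds."""
    return statistics.mean(samples_ms)
```

A real harness would also tear the containers down between runs (`docker rm -f`) so each measurement starts from a clean host.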
Hardware specification

  • C6220 Server

  • Storage: 1 x 400 GB SSD

  • RAM: 128GB

  • CPU: 2x E5-2650 v2 2.6 GHz, 16c each (32c total)

  • Networking: 1 Gb Intel(R) I350 Gigabit Network Connection


Test results
Number of containers | Average HNS endpoint creation time (switch+port+vfp) (ms)
10                   | 104.6
50                   | 126.28
100                  | 150.3

Figure 3 – Table of HNS endpoint creation time. Wondering what HNS is? See here

Figure 4 – Stress test: container endpoint creation time (ms) vs. number of container instances

Test discussion


The results show that container creation performance follows a stable linear trend, with creation time scaling to an average of 150 ms on servers with 100 endpoints.

In other words, on our experimental hardware we can roughly estimate Windows Server container creation time t (in ms) from the number of endpoints already on the server, n, using the simple relationship t = n/2 + 100.
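To make the relationship concrete, here is a quick sketch that checks the model against the measured averages from Figure 3:

```python
def estimated_creation_time_ms(n):
    """Rough linear model from the test discussion: t = n/2 + 100 (ms),
    where n is the number of endpoints already on the host."""
    return n / 2 + 100

# Measured averages from the stress test (number of containers -> ms).
measured = {10: 104.6, 50: 126.28, 100: 150.3}

for n, t in measured.items():
    # The simple model tracks the measurements to within a few milliseconds.
    assert abs(estimated_creation_time_ms(n) - t) < 5
```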

This shows that the daunting task of twiddling your thumbs waiting for a deployment to finally launch is much shorter and more predictable on WS1803.


NAT Performance Improvements


Several Windows use cases, including Windows Defender Application Guard in the Microsoft Edge web browser and Docker for Windows, rely heavily on network address translation (NAT), so investment in one comprehensive and performant NAT solution is another built-in benefit of moving to this new release.

Alongside improvements in deviceless vNICs, here are some additional optimizations which are applicable to the NAT network datapath:

  • Reduced CPU utilization in the machinery that makes translation decisions for incoming traffic

  • Widened the network throughput pipeline by 10-20%


This alone is already a great reason for moving to the new release, but watch this space: even more awesome optimization goodies are actively being engineered for the near future!


Improvements to Developer Workflows and Ease of Use


In previous Windows releases, there were gaps in meeting the flexibility and mobility needs of modern developers and IT admins. Networking for containers was one space where such gaps prevented both developers and IT admins from having a seamless experience with containers; they couldn't confidently develop containerized applications due to a lack of development convenience and network customization options. The goal in WS1803 was to target two fundamental areas of the developer experience around container networking that needed improvement: localhost/loopback support and HTTP proxy support for containers.

1.     HTTP proxy support for container traffic


In WS1803 and Windows 10 April 2018 Update, functionality has been added to allow container host machines to inject proxy settings upon container instantiation, such that container traffic is forced through the specified proxy. This feature is supported on both Windows Server and Hyper-V containers, giving developers more control and flexibility over their desired container network setup.

While simple in theory, this is easiest to explain with a quick example.

Let’s say we have a host machine configured to pass through a proxy that is reachable at proxy.corp.microsoft.com on port 5320. Inside this host machine, we also want to create a Windows Server container and force any north/south traffic originating from the containerized endpoints to pass through the configured proxy.

Visually, this would look as follows:

Figure 5 - Container proxy configuration



The corresponding actions to configure Docker to achieve this would be:

For Docker 17.07 or higher:

  • Add this to your config.json:


{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.corp.microsoft.com:5320"
    }
  }
}
For Docker 17.06 or lower:

  • Run the following command:


docker run -e "HTTP_PROXY=http://proxy.corp.microsoft.com:5320" -it microsoft/windowsservercore <command>
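Passing HTTP_PROXY as an environment variable works because most HTTP clients read it directly. As a quick illustration of the mechanism (using Python's standard library here, nothing container-specific):

```python
import os
import urllib.request

# Simulate the environment Docker injects with -e "HTTP_PROXY=...".
# (Lowercase http_proxy is the most portable spelling for clients to read.)
os.environ["http_proxy"] = "http://proxy.corp.microsoft.com:5320"

# urllib, like curl, wget, and many SDKs, picks the proxy up automatically
# from the environment; any HTTP request it makes would now go via the proxy.
proxies = urllib.request.getproxies()
print(proxies["http"])  # http://proxy.corp.microsoft.com:5320
```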
Diving deeper from a technical standpoint, this functionality is provided through three registry keys that are set inside the container:

  1. Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections\WinHttpSettings

  2. Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections\DefaultConnectionSettings

  3. Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings
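One could verify the injection from code along the following lines. This is a hedged sketch: the choice of hive (HKCU) and the idea of enumerating value names under the Connections key are illustrative assumptions, and the function simply degrades to None on non-Windows hosts:

```python
import sys

# First key from the list above; inert on non-Windows hosts.
CONNECTIONS_KEY = (
    r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections"
)

def injected_proxy_value_names():
    """Return the registry value names under the Connections key (e.g.
    WinHttpSettings, DefaultConnectionSettings), or None off Windows."""
    if sys.platform != "win32":
        return None
    import winreg  # Windows-only standard library module
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, CONNECTIONS_KEY) as key:
        _, value_count, _ = winreg.QueryInfoKey(key)
        return [winreg.EnumValue(key, i)[0] for i in range(value_count)]
```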




The configured proxy settings inside the container can then be queried using the command:
netsh winhttp show proxy
Figure 6 – Viewing container proxy configuration



That’s it! Easy, right? The instructions to configure Docker to use a proxy server can be found in the Docker documentation.

The preliminary PR can be tracked here.

2.     Localhost/loopback support for accessing containers


Also new with the Windows 10 April 2018 Update and WS1803 release is support for accessing containerized web services via “localhost” or 127.0.0.1 (loopback). Please see this blog post, which does an excellent job portraying the added functionality. This feature has already been available to Windows Insiders via Build 17025 on Windows 10 and Build 17035 on Windows Server.


Networking Quality Improvements


One of the most important considerations for both developers and enterprises is a stable and robust container networking stack. Therefore, one of the biggest focus areas for this release was to remedy networking ailments that afflicted prior Windows releases and to provide a healthy, consistent, and sustainable networking experience for the container ecosystem on Windows.

Windows 10 April 2018 Update and WS1803 users can expect the following:

  • Greatly stabilized DNS resolution within containers out-of-the-box

  • Enhanced stability of Kubernetes services on Windows

  • Improved recovery after Kubernetes container crashes

  • Fixes to address and port range reservations through WinNAT

  • Improved persistence of containers after Host Networking Service (HNS) restart

  • Improved persistence of containers after unexpected container host reboot

  • Better overall resiliency of NAT networking


We remain dedicated to stamping out pesky networking bugs. After all, sleepless nights playing whack-a-mole with HNS are no fun (even for us). If you still face container networking issues on the newest Windows release, check out these preliminary diagnostics and get in touch!
