Top 10 Networking Features in Windows Server 2019: #6 High Performance SDN Gateways

First published on TECHNET on Aug 15, 2018

This blog is part of a series for the Top 10 Networking Features in Windows Server 2019!
-- Click HERE to see the other blogs in this series.

Look for the Try it out sections then give us some feedback in the comments!
Don't forget to tune in next week for the next feature in our Top 10 list!

Organizations today deploy their applications across multiple clouds, including on-premises private clouds, service provider clouds, and public clouds such as Azure. In these scenarios, secure, high-performance connectivity across workloads in different clouds is essential. Windows Server 2019 brings huge SDN gateway performance improvements for these hybrid connectivity scenarios, with network throughput improving by up to 6x!

If you have deployed Software Defined Networking (SDN) with Windows Server 2016, you know that, among other things, it provides connectivity between your cloud resources and your enterprise resources through SDN gateways. In this article, we will cover the following SDN gateway capabilities:

  • IPsec tunnels provide secure connectivity over the Internet between your hybrid workloads

  • GRE tunnels provide connectivity between workloads hosted in SDN virtual networks and physical resources in the datacenter or on high-speed MPLS networks. More details about GRE connectivity scenarios are available here.


In Windows Server 2016, one customer concern was the inability of the SDN gateway to meet the throughput requirements of modern networks. The network throughput of IPsec and GRE tunnels was limited: single-connection throughput was about 300 Mbps for IPsec connectivity and about 2.5 Gbps for GRE connectivity.

We have improved this significantly in Windows Server 2019, with the numbers soaring to 1.8 Gbps and 15 Gbps for IPsec and GRE connections, respectively. All this comes with a huge reduction in CPU cycles per byte, providing ultra-high-performance throughput with much lower CPU utilization.
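
As a quick sanity check of the headline number, the single-connection figures quoted in this post work out to roughly the same multiplier for both tunnel types (a small Python snippet, using only the numbers from the text):

    # Throughput figures quoted in this post (single connection/tunnel), in Gbps.
    ws2016 = {"IPsec": 0.3, "GRE": 2.5}
    ws2019 = {"IPsec": 1.8, "GRE": 15.0}

    for tunnel_type in ws2016:
        gain = ws2019[tunnel_type] / ws2016[tunnel_type]
        print(f"{tunnel_type}: {ws2016[tunnel_type]} -> {ws2019[tunnel_type]} Gbps ({gain:.0f}x)")
    # IPsec: 0.3 -> 1.8 Gbps (6x)
    # GRE:   2.5 -> 15.0 Gbps (6x)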

Let's talk numbers


We have done extensive performance testing of the SDN gateways in our test labs. In these tests, we compared gateway network performance with Windows Server 2019 in SDN and non-SDN scenarios. The results are shown below:
GRE Performance Numbers
Network throughput for GRE tunnels in Windows Server 2019 without SDN varies from 2 to 5 Gbps; with SDN, it leaps to a range of 3 to 15 Gbps!

Note that the network throughput in Windows Server 2016 is much lower than the network throughput in Windows Server 2019 without SDN. With Windows Server 2019 SDN, the difference is even more stark.

Without SDN, the CPU cycles per byte vary from 50 to 75; with SDN, they barely cross 10!

IPsec Performance Numbers
For IPsec tunnels, the Windows Server 2019 SDN network throughput is about 1.8 Gbps for 1 tunnel and about 5 Gbps for 8 tunnels. Compare this to Windows Server 2016, where the network throughput of a single tunnel was 300 Mbps and the aggregate IPsec network throughput for a gateway VM was 1.8 Gbps.

Without SDN, the CPU cycles per byte vary from 50 to 90; with SDN, they stay well within 50!

With GRE, the aggregate SDN gateway network throughput scales to 15 Gbps, and with IPsec it scales to 5 Gbps!

Test Setup
The test setup simulates connectivity between the SDN gateway and an on-premises gateway in a private lab environment. The on-premises gateway is configured with the Windows Routing and Remote Access Service (RRAS) to act as a site-to-site VPN endpoint. The setup details for the SDN gateway host and the SDN gateway VM follow; a sample configuration sketch is shown after each list.

Gateway HOST

  1. There are two NUMA nodes on the host machine, with 8 cores per NUMA node. RAM on the gateway host is 40 GB. The gateway VM has full access to one NUMA node, which is different from the NUMA node used by the host.

  2. Hyper-threading is disabled.

  3. The receive-side and send-side buffers on the physical network adapters are set to 4096.

  4. Receive Side Scaling (RSS) is enabled on the host physical network adapters. The minimum and maximum processors are taken from the NUMA node to which the host is affinitized, and MaxProcessors is set to 8 (the number of cores per NUMA node).

  5. Jumbo packets are enabled on the physical network adapters with a value of 4088 bytes.

  6. Receive Side Scaling is enabled in the vSwitch.
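
For illustration only, most of the host-side adapter settings above map to standard NetAdapter cmdlets. The sketch below (Python driving PowerShell) assumes a hypothetical physical adapter named NIC1 and a processor range of 0-7 for the affinitized NUMA node; adjust both for your hardware. The buffer and jumbo-packet keywords can vary by NIC vendor, and the vSwitch RSS setting is left to your deployment tooling.

    import subprocess

    NIC = "NIC1"  # hypothetical physical adapter name; adjust for your host

    commands = [
        # 3. Receive-side and send-side buffers on the physical adapter.
        f"Set-NetAdapterAdvancedProperty -Name '{NIC}' -RegistryKeyword '*ReceiveBuffers' -RegistryValue 4096",
        f"Set-NetAdapterAdvancedProperty -Name '{NIC}' -RegistryKeyword '*TransmitBuffers' -RegistryValue 4096",
        # 4. RSS restricted to the NUMA node the host is affinitized to (cores 0-7 assumed here).
        f"Set-NetAdapterRss -Name '{NIC}' -Enabled $true -BaseProcessorNumber 0 -MaxProcessorNumber 7 -MaxProcessors 8",
        # 5. Jumbo packets at 4088 bytes (accepted values are vendor-specific).
        f"Set-NetAdapterAdvancedProperty -Name '{NIC}' -RegistryKeyword '*JumboPacket' -RegistryValue 4088",
    ]

    for cmd in commands:
        subprocess.run(["powershell.exe", "-NoProfile", "-Command", cmd], check=True)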


Gateway VM

  1. The gateway VM is allocated 8 GB of memory

  2. For the Internal and External network adapters, the send-side buffer is configured with 32 MB of RAM and the receive-side buffer with 16 MB of RAM

  3. Forwarding Optimization is enabled for the Internal and External network adapters.

  4. Jumbo packets are enabled on the Internal and External network adapters with a value of 4088 bytes

  5. VMMQ is enabled on the internal port of the VM

  6. VMQ and vRSS are enabled on the external network adapter of the VM
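
As a sketch only, the VMMQ/VMQ/vRSS settings in points 5 and 6 can be applied from the host with Set-VMNetworkAdapter. The VM name SDNGW01 and the adapter names Internal/External are placeholders; the buffer sizes and Forwarding Optimization in points 2 and 3 come from the SDN deployment itself, so they are not shown here.

    import subprocess

    VM = "SDNGW01"         # hypothetical gateway VM name
    INTERNAL = "Internal"  # hypothetical vNIC names; match your gateway VM
    EXTERNAL = "External"

    commands = [
        # 5. Enable VMMQ on the internal port of the VM.
        f"Set-VMNetworkAdapter -VMName '{VM}' -Name '{INTERNAL}' -VmmqEnabled $true",
        # 6. Enable VMQ and vRSS on the external network adapter of the VM.
        f"Set-VMNetworkAdapter -VMName '{VM}' -Name '{EXTERNAL}' -VmqWeight 100 -VrssEnabled $true",
    ]

    for cmd in commands:
        subprocess.run(["powershell.exe", "-NoProfile", "-Command", cmd], check=True)

    # 4. Jumbo packets are set inside the guest on each vNIC, for example:
    #    Set-NetAdapterAdvancedProperty -Name 'Internal' -RegistryKeyword '*JumboPacket' -RegistryValue 4088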


See it in action


The short demo below showcases the improved throughput with Windows Server 2019. It uses a performance tool called ctsTraffic to measure the network throughput of a single IPsec connection through the SDN VPN gateway. Traffic is sent from a customer workload machine in the SDN network to an on-premises enterprise resource across a simulated Internet. As you can see, with Windows Server 2016 the network throughput of a single IPsec connection is only about 300 Mbps, while with Windows Server 2019 it scales to about 1.8 Gbps. A scripted version of the client side is sketched below.
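
If you want to try a similar measurement yourself, this sketch launches the ctsTraffic client from the SDN workload VM against an on-premises listener (started separately with ctsTraffic.exe -listen:*). The target address is a placeholder, and the exact switches should be checked against the ctsTraffic documentation.

    import subprocess

    # Hypothetical on-premises endpoint reachable through the IPsec tunnel.
    TARGET = "203.0.113.10"

    # Push one TCP connection through the SDN gateway; ctsTraffic prints
    # per-connection and aggregate throughput when the run completes.
    subprocess.run(
        ["ctsTraffic.exe", f"-target:{TARGET}", "-connections:1", "-pattern:push"],
        check=True,
    )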




Try it out


For GRE connections, you should automatically see the improved performance once you deploy/upgrade to Windows Server 2019 builds on the gateway VMs. No manual steps are involved.

For IPsec connections, by default, when you create the connection for your virtual networks you will get the Windows Server 2016 data path and performance numbers. To enable the Windows Server 2019 data path, do the following (a scripted sketch follows the steps):

  1. On an SDN gateway VM, open the Services console (services.msc).

  2. Find the service named “Azure Gateway Service” and set its startup type to “Automatic”.

  3. Restart the gateway VM. Note that the active connections on this gateway will fail over to a redundant gateway VM.

  4. Repeat the previous steps for the rest of the gateway VMs.
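
Here is a minimal scripted version of steps 2 and 3, assuming it is run locally on each gateway VM with administrative rights (the service is looked up by its display name, and the restart fails active connections over to a redundant gateway as noted above):

    import subprocess

    def ps(command: str) -> None:
        """Run a single PowerShell command and fail loudly on errors."""
        subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

    # Step 2: set the startup type of the "Azure Gateway Service" to Automatic.
    ps("Set-Service -Name (Get-Service -DisplayName 'Azure Gateway Service').Name "
       "-StartupType Automatic")

    # Step 3: restart the gateway VM; active connections fail over to a redundant gateway.
    ps("Restart-Computer -Force")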


NOTE: For best performance results, ensure that the cipherTransformationConstant and authenticationTransformConstant in the quickMode settings of the IPsec connection use the “GCMAES256” cipher suite.
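
For reference, this is roughly where those two properties live in the virtual gateway network connection resource submitted to the Network Controller REST API. Only cipherTransformationConstant and authenticationTransformConstant are taken from this post; the surrounding structure and values are illustrative.

    # Illustrative fragment of an SDN virtual gateway network connection resource.
    ipsec_connection = {
        "properties": {
            "connectionType": "IPSec",
            "ipSecConfiguration": {
                "quickMode": {
                    "cipherTransformationConstant": "GCMAES256",
                    "authenticationTransformConstant": "GCMAES256",
                }
            },
        }
    }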
One more thing: to get maximum performance, the gateway host hardware must support the AES-NI and PCLMULQDQ CPU instruction sets. These are available on any Westmere (32nm) or later Intel CPU, except on models where AES-NI has been disabled. Check your hardware vendor's documentation to see whether the CPU supports the AES-NI and PCLMULQDQ instruction sets.



Ready to give it a shot? Download the latest Insider build and try it out!



We value your feedback


The most important part of a frequent release cycle is to hear what’s working and what needs to be improved, so your feedback is extremely valuable.

Contact us if you have any questions or are having any issues with your deployment or validation. We also encourage you to send us email at sdninsider@microsoft.com to collaborate, share, and learn from other customers like you.

Thanks for reading,

Anirban Paul
