
Accelerating Distributed Training in Azure Machine Learning service using SR-IOV

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Tech Community.

Author: Ravi Shankar Kolli

This post is co-authored by Mathew Salvaris, Aashna Garg, Vaibhav Jain, Reyhan Patia, Caghan Demirci, Alex Sutton

 

Today’s state-of-the-art deep learning models like BERT require distributed, multi-machine training to reduce training time from weeks to days. The interconnect is one of the key components for reducing communication overhead and achieving good scaling efficiency in distributed multi-machine training.

Azure Machine Learning users can now speed up their training by taking advantage of Azure Virtual Machines with SR-IOV and InfiniBand support. In September 2018, Azure introduced the NC, ND, and H-series VMs with dedicated InfiniBand networks. All RDMA-enabled sizes are capable of leveraging that network using Intel MPI. SR-IOV stands for “single root input/output virtualization,” a technology that optimizes the sharing of PCI Express devices among virtual machines. In Azure, SR-IOV for InfiniBand enables near bare-metal performance for any MPI library.
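For context, a compute cluster on one of these RDMA-capable sizes can be created directly from the Azure ML Python SDK. The sketch below assumes SDK v1 and an existing workspace; the cluster name and node counts are illustrative assumptions, not part of the reference implementation.

```python
# Minimal sketch (Azure ML Python SDK v1): provisioning an AmlCompute cluster on an
# RDMA-capable, SR-IOV-enabled GPU size. Cluster name and node counts are assumptions.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()  # loads an existing workspace from config.json

config = AmlCompute.provisioning_configuration(
    vm_size="Standard_NC24rs_v3",  # 4x V100 per node, InfiniBand-connected
    min_nodes=0,
    max_nodes=4,
)
cluster = ComputeTarget.create(ws, "gpu-ib-cluster", config)
cluster.wait_for_completion(show_output=True)
```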

MPI, or Message Passing Interface, is a communication standard and library commonly used for distributed training across GPUs on many systems. NVIDIA’s NCCL library is commonly used together with MPI to make distributed training easier in deep learning frameworks like PyTorch and TensorFlow.
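As an illustration of how NCCL is typically used from PyTorch, the sketch below initializes a process group with the NCCL backend and wraps a model in DistributedDataParallel. The rank, world-size, and master-address environment variables are assumed to be set by whatever launcher starts the processes.

```python
# Sketch: PyTorch distributed training over NCCL. RANK, WORLD_SIZE, LOCAL_RANK,
# MASTER_ADDR and MASTER_PORT are assumed to be provided by the launcher.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_ddp(model):
    rank = int(os.environ["RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))

    # NCCL carries the GPU collectives; on SR-IOV-enabled VMs its inter-node
    # traffic can run over the InfiniBand fabric.
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)
    return DDP(model.cuda(local_rank), device_ids=[local_rank])
```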

Azure now supports using any MPI library with SR-IOV-enabled VM families such as NCv3, NDv2, and the HC and HB series for HPC applications. Older GPU hardware with InfiniBand, such as NCv2 and NDv1, will be updated for SR-IOV in 2020.

Intel MPI version 5.x will continue to be supported, as will all subsequent Intel MPI versions. In addition, all other MPI implementations supported by the OpenFabrics Enterprise Distribution (OFED), such as Open MPI, are supported, along with NVIDIA’s NCCL2 library, which provides optimized collective communication performance for GPUs.
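To make this concrete, a multi-node job launched with MPI can be submitted from the Azure ML Python SDK using an MpiConfiguration. The sketch below uses the SDK v1 PyTorch estimator; the script name, cluster name, experiment name, and process counts are illustrative assumptions.

```python
# Sketch (Azure ML Python SDK v1): submitting a multi-node, multi-GPU training job
# launched via MPI. Script, cluster and experiment names are assumptions.
from azureml.core import Workspace, Experiment
from azureml.core.runconfig import MpiConfiguration
from azureml.train.dnn import PyTorch

ws = Workspace.from_config()
cluster = ws.compute_targets["gpu-ib-cluster"]

estimator = PyTorch(
    source_directory="./src",
    entry_script="train.py",
    compute_target=cluster,
    node_count=4,
    distributed_training=MpiConfiguration(process_count_per_node=4),  # one process per GPU
    use_gpu=True,
)

run = Experiment(ws, "distributed-training").submit(estimator)
run.wait_for_completion(show_output=True)
```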

These enhancements provide customers with higher InfiniBand bandwidth, lower latencies, and, most importantly, better distributed application performance. InfiniBand connectivity delivers higher throughput and lower latencies than Ethernet-based connections, and SR-IOV enables communication over the InfiniBand network using any flavor of MPI. A reference implementation of BERT in Azure Machine Learning using SR-IOV and InfiniBand can be found on GitHub.

 

Throughput Improvement in BERT

SR-IOV and InfiniBand provided up to a 75% improvement in the throughput of the BERT Large model. With SR-IOV enabled, throughput improves to about 28 sequences/second/GPU, which is 75% better than the baseline. The charts below show the throughput improvement of BERT Large pretraining on 16 Azure Standard_NC24rs_v3 VMs. The model is implemented in PyTorch and uses torch.distributed with Open MPI for multi-node training. Note that these charts do not reflect the best achievable throughput of BERT on Azure.
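The wiring between Open MPI and torch.distributed typically looks like the sketch below: the Open MPI rank variables are read and used to initialize the NCCL process group. The master address/port handling here is a simplifying assumption; the BERT reference implementation mentioned above shows the full setup.

```python
# Sketch: mapping Open MPI environment variables to torch.distributed when the job
# is launched with mpirun. Master address/port handling is a simplifying assumption.
import os
import torch.distributed as dist

def init_from_open_mpi(master_addr, master_port="29500"):
    rank = int(os.environ["OMPI_COMM_WORLD_RANK"])
    world_size = int(os.environ["OMPI_COMM_WORLD_SIZE"])
    local_rank = int(os.environ["OMPI_COMM_WORLD_LOCAL_RANK"])

    os.environ.setdefault("MASTER_ADDR", master_addr)
    os.environ.setdefault("MASTER_PORT", master_port)

    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    return rank, world_size, local_rank
```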

 

 

Throughput Improvement in ResNet

To observe the speed improvements in PyTorch, we ran a selection of ResNet models from Torchvision on synthetic data at full precision. This allowed us to estimate throughput without having to worry about I/O overhead. The figures below compare clusters with SR-IOV enabled against clusters without it. We used NC24rs_v3 VMs, each equipped with 4 V100 GPUs, so when we report 8 GPUs the run spans 2 nodes, and 16 GPUs spans 4 nodes. Across models and GPU configurations, SR-IOV offers a 2-3x improvement over no SR-IOV.
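The per-GPU measurement behind numbers like these can be approximated with a benchmark like the sketch below: a Torchvision ResNet fed random tensors so that I/O never becomes the bottleneck. Batch size and iteration counts are illustrative assumptions, not the settings used for the figures.

```python
# Sketch: single-GPU throughput of a Torchvision ResNet on synthetic data at full
# precision. Batch size and iteration counts are illustrative assumptions.
import time
import torch
import torchvision.models as models

def measure_throughput(model_name="resnet50", batch_size=64, iters=100):
    model = getattr(models, model_name)().cuda()
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    images = torch.randn(batch_size, 3, 224, 224, device="cuda")
    labels = torch.randint(0, 1000, (batch_size,), device="cuda")

    for _ in range(10):  # warm-up steps before timing
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()
    torch.cuda.synchronize()

    start = time.time()
    for _ in range(iters):  # timed forward/backward/update steps
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()
    torch.cuda.synchronize()
    return batch_size * iters / (time.time() - start)  # images/sec on one GPU
```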

In the figures below, the number reported in the center of each bar is the scaling efficiency on RDMA-enabled VMs. For both Horovod and DistributedDataParallel, both of which use NCCL, the scaling efficiency is over 90% across all three models, with performance almost doubling as the number of GPUs doubles.
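Scaling efficiency here is simply the measured multi-GPU throughput divided by the single-GPU throughput times the number of GPUs. The snippet below shows the calculation; the numbers in it are illustrative, not measured values.

```python
# Sketch: scaling efficiency = measured N-GPU throughput / (N * 1-GPU throughput).
def scaling_efficiency(throughput_n, throughput_1, n_gpus):
    return throughput_n / (n_gpus * throughput_1)

# Illustrative numbers: 90% efficiency on 16 GPUs means 14.4x the 1-GPU throughput.
print(scaling_efficiency(throughput_n=4320.0, throughput_1=300.0, n_gpus=16))  # 0.9
```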

 

Summary

SR-IOV yielded significant throughput improvements for distributed multi-machine training. BERT Large throughput increased by 75% with SR-IOV, and certain ResNet models ran about 2-3x faster. Throughput also scaled linearly on ResNet models as the number of NC24rs_v3 nodes grew from 1 to 2, 4, and 8 instances.

 

Stay tuned for our next blog on scaling distributed deep learning training on Azure NDv2 VMs. These VMs feature 8 NVLink-interconnected NVIDIA Tesla V100 GPUs with 32 GB of HBM2 memory per GPU and a 100 Gbps EDR InfiniBand interconnect.

 

Get started with Distributed Deep Learning training on Azure Machine Learning. Report any implementation issues or observed throughput improvements of SR-IOV on Azure Machine Learning at Stack Overflow.
