IntelMPI 2019 on Azure HPC Clusters


The Intel MPI Library is a high-performance, interconnect-independent, multi-fabric implementation of the industry-standard Message Passing Interface, v3.1 (MPI-3.1).

 

Starting with the 2019 release, Intel MPI uses OFI libfabric as its communication runtime. Libfabric provides two network providers for InfiniBand: the "verbs" provider and the "mlx" provider. The verbs provider is implemented over the InfiniBand verbs (ibverbs) interface, whereas the mlx provider is implemented over OpenUCX. The network provider can be selected at runtime using the environment variable FI_PROVIDER.
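
The providers exposed on a node can be checked with the fi_info utility that ships with libfabric. The commands below are a sketch; the output depends on the libfabric build in use, and note that Intel MPI bundles its own libfabric, so the system fi_info may report a different provider set.

# List the libfabric providers visible on this node
fi_info -l

# Show details for a specific provider
fi_info -p verbs
fi_info -p mlx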

 

To select the verbs provider:

 

FI_PROVIDER=verbs
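
As a minimal sketch, the variable can simply be exported before launching the job. The hostnames below are placeholders, and the example assumes the Intel MPI Benchmarks (IMB-MPI1) are on the PATH.

# Select the verbs provider and run a two-node ping-pong test
export FI_PROVIDER=verbs
mpirun -np 2 -ppn 1 -hosts node0,node1 IMB-MPI1 PingPong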

 

 

To select the mlx provider:

 

FI_PROVIDER=mlx I_MPI_OFI_EXPERIMENTAL=1
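
Similarly for mlx, again as a sketch with placeholder hostnames:

# Select the mlx (UCX-based) provider and run the same test
export FI_PROVIDER=mlx
export I_MPI_OFI_EXPERIMENTAL=1
mpirun -np 2 -ppn 1 -hosts node0,node1 IMB-MPI1 PingPong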

 

 

 

Performance Expectations:

 

The following figures show point-to-point MPI performance with Intel MPI 2019 Update 7, using the verbs and mlx providers. The measurements were taken with the OSU Micro-Benchmarks on two Azure HBv2 VM instances running the CentOS HPC 8.1 VM image; the two host nodes are connected to the same leaf InfiniBand switch.
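
For reference, a sketch of how such measurements can be collected with the OSU Micro-Benchmarks (the benchmark binaries must be built against Intel MPI; the hostnames and paths below are placeholders):

# Point-to-point latency, bandwidth, and bidirectional bandwidth across two nodes
export FI_PROVIDER=mlx          # or verbs, to compare the two providers
export I_MPI_OFI_EXPERIMENTAL=1 # only needed for mlx
mpirun -np 2 -ppn 1 -hosts node0,node1 ./osu_latency
mpirun -np 2 -ppn 1 -hosts node0,node1 ./osu_bw
mpirun -np 2 -ppn 1 -hosts node0,node1 ./osu_bibw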

 

[Figure: osu_latency, small-message latency with the verbs and mlx providers]

[Figure: osu_bw, unidirectional bandwidth with the verbs and mlx providers]

[Figure: osu_bibw, bidirectional bandwidth with the verbs and mlx providers]

Conclusion:

 

This blog describes the configuration options for selecting the InfiniBand-based network providers of Intel MPI 2019 and gives an overview of their performance characteristics. Intel MPI 2019 Update 7 is available in the Azure HPC VM images and can be deployed through a variety of deployment vehicles (CycleCloud, Batch, ARM templates, etc.). The AzureHPC scripts provide an easy way to quickly deploy an HPC cluster using these HPC VM images.
