Intel MPI 2019 on Azure HPC Clusters

This post has been republished via RSS; it originally appeared at: Azure Compute articles.

The Intel MPI Library is a high-performance, interconnect-independent, multi-fabric implementation of the industry-standard Message Passing Interface v3.1 (MPI-3.1).

Starting with the 2019 release, Intel MPI uses OFI libfabric as its communication runtime. Libfabric provides two network providers for InfiniBand: the "verbs" provider and the "mlx" provider. The verbs provider is implemented over the InfiniBand verbs (ibverbs) interfaces, whereas the mlx provider is implemented over OpenUCX. The network provider can be selected at runtime using the environment variable FI_PROVIDER.
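Before selecting a provider, it can be useful to check which libfabric providers are actually available on a node. A quick sketch using the fi_info utility that ships with libfabric (output varies by system and libfabric build):

```shell
# List the libfabric providers available on this node.
fi_info -l

# Show the fabric interfaces exposed by a specific provider,
# e.g. the InfiniBand verbs provider:
fi_info -p verbs
```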

To select the verbs provider:

FI_PROVIDER=verbs

To select the mlx provider:

FI_PROVIDER=mlx I_MPI_OFI_EXPERIMENTAL=1
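These variables can also be passed per job via mpirun's -genv flag. A minimal sketch of launching a two-node run with the mlx provider; the hostnames, the mpivars.sh path, and the application binary are illustrative placeholders, not values from this post:

```shell
# Load the Intel MPI 2019 environment (install path varies by system).
source /opt/intel/impi/2019.7.217/intel64/bin/mpivars.sh

# Run one rank per node on two hosts, selecting the mlx provider.
mpirun -np 2 -ppn 1 -hosts hbv2-node0,hbv2-node1 \
       -genv I_MPI_OFI_EXPERIMENTAL 1 \
       -genv FI_PROVIDER mlx \
       ./my_mpi_app
```

Exporting the variables in the shell before calling mpirun works equally well; -genv simply keeps the provider choice scoped to a single launch.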

Performance Expectations:

The following figures depict point-to-point MPI performance with Intel MPI 2019 Update 7 using the verbs and mlx providers. The measurements were taken with the OSU Micro-Benchmarks on two Azure HBv2 VM instances running the CentOS-HPC 8.1 VM image, with both host nodes connected to the same leaf InfiniBand switch.
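A sketch of how point-to-point numbers like these are typically gathered with the OSU Micro-Benchmarks (hostnames and the benchmark path are assumptions for illustration, not the exact setup used for the figures):

```shell
# Two HBv2 nodes, one rank per node.
HOSTS=hbv2-node0,hbv2-node1
OSU=./osu-micro-benchmarks/mpi/pt2pt

# Latency, unidirectional bandwidth, and bidirectional bandwidth,
# here with the verbs provider; swap in FI_PROVIDER=mlx (plus
# I_MPI_OFI_EXPERIMENTAL=1) to compare the mlx provider.
for bench in osu_latency osu_bw osu_bibw; do
    mpirun -np 2 -ppn 1 -hosts $HOSTS \
           -genv FI_PROVIDER verbs \
           $OSU/$bench
done
```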

[Figure: osu_latency, small messages – latency vs. message size, verbs vs. mlx]

[Figure: osu_bw – unidirectional bandwidth vs. message size, verbs vs. mlx]

[Figure: osu_bibw – bidirectional bandwidth vs. message size, verbs vs. mlx]

Conclusion:

This blog describes the configuration options for selecting the InfiniBand network providers of Intel MPI 2019 and gives an overview of their performance characteristics. Intel MPI 2019 Update 7 is available in the Azure HPC VM images and can be deployed through a variety of vehicles (CycleCloud, Batch, ARM templates, etc.). The AzureHPC scripts provide an easy way to quickly deploy an HPC cluster using these images.
