Azure Machine Learning Integration with NVIDIA AI Enterprise

This post has been republished via RSS; it originally appeared at: Microsoft Tech Community - Latest Blogs.

Microsoft has collaborated with NVIDIA to integrate NVIDIA AI Enterprise — the software layer of the NVIDIA AI platform, which offers over 100 frameworks, pretrained models, and development tools — into Azure Machine Learning. The integration will create the first enterprise-ready, secure, end-to-end cloud platform for developers to build, deploy, and manage AI applications, including custom large language models. It will feature NVIDIA AI toolkits — such as TAO, RAPIDS, MONAI, Triton Inference Server, and DeepStream — in the Azure Machine Learning Community registry, available starting today as a private technical preview. With this integration, users will be able to leverage the power of NVIDIA's enterprise-ready software, complementing Azure Machine Learning's high-performance and secure infrastructure, to build production-ready AI workflows.
Azure Machine Learning provides an experimentation and MLOps platform for training models, backed by the scalability, reliability, and security of Azure, along with on-demand, cloud-scale compute powered by NVIDIA accelerated computing for training and inference. As an open platform, Azure Machine Learning supports all popular machine learning frameworks and toolkits, including those from NVIDIA AI. This collaboration optimizes the experience of running NVIDIA AI software by integrating it with the Azure Machine Learning training and inference platform. You no longer need to spend time setting up training environments, installing packages, writing training code, logging training metrics, and deploying models.


You can create training jobs that use TAO, RAPIDS, MONAI Toolkit, and other software from the NVIDIA AI Enterprise suite with a drag-and-drop experience using Designer in Azure Machine Learning studio. You can compose MLOps pipelines with ready-to-use components that provide a consistent interface to the toolkits. Lastly, you can deploy models from all popular frameworks with NVIDIA Triton Inference Server and optimize inference for multiple query types — including both real-time and batch processing — without having to write any code. Running Triton Inference Server on Azure Machine Learning endpoints gives you safe rollout with robust, high-performance inference and auto-scaling based on workload demand.
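As a rough sketch of what the no-code Triton deployment path can look like with the Azure Machine Learning CLI (v2), a managed online endpoint and deployment are defined in YAML; the endpoint, model, and folder names below are illustrative placeholders, not assets from this post:

```yaml
# endpoint.yml -- managed online endpoint (name is a placeholder)
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: triton-demo-endpoint
auth_mode: aml_token

# deployment.yml -- serves a Triton-format model with no scoring script
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: triton-demo-endpoint
model:
  name: demo-model            # placeholder model name
  version: 1
  path: ./models              # local folder in Triton's model-repository layout
  type: triton_model          # triton_model type enables no-code deployment
instance_type: Standard_NC6s_v3   # GPU SKU; choose per workload
instance_count: 1
```

With files like these, `az ml online-endpoint create -f endpoint.yml` followed by `az ml online-deployment create -f deployment.yml --all-traffic` stands up the endpoint; the `type: triton_model` field is what tells Azure Machine Learning to serve the model with Triton Inference Server without a custom scoring script.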


NVIDIA AI is seamlessly integrated with Azure Machine Learning through the NVIDIA AI Enterprise registry. An Azure Machine Learning registry is a platform for hosting and sharing the building blocks of machine learning experiments, such as containers, pipelines, models, and data. Users can share assets securely within an organization-specific registry, or across multiple organizations using a community registry. Currently available as a limited technical preview, the NVIDIA AI Enterprise registry is a community registry created and maintained by NVIDIA. Sign up to get access by filling out this form. The source code and usage samples for the assets hosted in the registry are published here.
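To illustrate how registry-hosted assets are consumed, an Azure Machine Learning CLI (v2) pipeline job can reference a component from a shared registry by its `azureml://registries/...` URI; the registry, component, and compute names below are hypothetical placeholders, not the actual names of the preview assets:

```yaml
# pipeline.yml -- pipeline job consuming a component from a community registry
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: registry-component-demo    # placeholder name
jobs:
  train:
    type: command
    # URI form for assets shared via an Azure Machine Learning registry;
    # the registry and component names here are illustrative placeholders
    component: azureml://registries/example-community-registry/components/example_train/versions/1
    compute: azureml:gpu-cluster         # placeholder compute target
```

Submitting this with `az ml job create -f pipeline.yml` runs the shared component directly, without copying it into your own workspace.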
Want to see the NVIDIA AI Enterprise registry in action? Check out these sessions at Microsoft Build 2023.


