Continuing to Advance State of the Art Model and Tooling Support in Azure AI Studio

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Community Hub.



In the dynamic world of generative AI, innovation is the driving force propelling us into novel and uncharted territories. New models, new tools and platforms, and new use cases emerge every day, creating a remarkable fusion of creativity and technology and redefining the boundaries of what’s possible. With Azure AI, our goals are to provide the most cutting-edge open and frontier models in the industry, to ensure developers have model choice, to continue to uphold the highest standards in Responsible AI, and to continue to build superior tooling that brings all this together to accelerate the innovation in copilots that we’re seeing.


At Microsoft Ignite, we made over 25 announcements across the Azure AI stack, including the addition of 40 new models to the Azure AI model catalog; new multimodal capabilities in Azure OpenAI Service; the Models as a Service (MaaS) platform in Azure AI Studio and partnerships with Mistral AI, G42, Cohere, and Meta to offer their models in MaaS; and the public preview of Azure AI Studio.


Since Ignite, we’ve continued to add to our Azure AI portfolio. Today, we are excited to announce even more Azure AI capabilities: the availability of Meta’s Llama 2 running in Models as a Service, the preview of GPT-4 Turbo with Vision to accelerate generative AI and multimodal application development, and the addition of even more models in the Azure AI model catalog including our Phi 2 Small Language Model (SLM), among other things.


Now Available: Models as a Service for Llama 2 



In Azure AI, you have long been able to deploy models onto your own infrastructure: go to the model catalog, select the model to deploy and a VM to deploy it on, and you’re off to the races. But not every customer wants to think about operating infrastructure, which is why at Ignite we introduced Models as a Service, which operates models as API endpoints that you simply call, much the way you might call the Azure OpenAI Service.


Today, we’re making Meta’s Llama 2 available in Models as a Service through Azure AI in public preview, enabling Llama-2-7b (Text Generation), Llama-2-7b-Chat (Chat Completion), Llama-2-13b (Text Generation), Llama-2-13b-Chat (Chat Completion), Llama-2-70b (Text Generation), and Llama-2-70b-Chat (Chat Completion).
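To make the endpoint model concrete, here is a minimal sketch of assembling a chat completion call against a Llama-2-70b-Chat serverless deployment. The endpoint URL, API key, route, and payload shape below are illustrative assumptions; check your deployment's details page in Azure AI Studio for the real values and request schema.

```python
# Sketch of calling a Llama 2 chat endpoint in Models as a Service.
# URL, key, and request schema are placeholders, not real values.
import json

def build_chat_request(endpoint, api_key, messages, max_tokens=256):
    """Assemble the URL, headers, and JSON body for a chat completion call."""
    url = f"{endpoint}/v1/chat/completions"  # assumed OpenAI-style route
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = {"messages": messages, "max_tokens": max_tokens}
    return url, headers, json.dumps(body)

url, headers, body = build_chat_request(
    "https://example-llama-2-70b-chat.example.com",  # placeholder endpoint
    "YOUR-API-KEY",                                  # placeholder key
    [{"role": "user", "content": "Summarize Models as a Service in one sentence."}],
)
# To actually send it you would use, e.g.:
#   import requests
#   resp = requests.post(url, headers=headers, data=body)
```

Because the model runs behind a managed endpoint, swapping model sizes (7b, 13b, 70b) is just a change of endpoint, with no VM sizing on your side.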


Watch this video to learn more about Models as a Service.


As we bring more models online in Models as a Service, we’ll keep you updated.  


Now Available: GPT-4 Turbo with Vision

We are delighted to announce that GPT-4 Turbo with Vision is now in public preview in Azure OpenAI Service and in Azure AI Studio. GPT-4 Turbo with Vision is a large multimodal model (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. It incorporates both natural language processing and visual understanding. This integration allows Azure users to benefit from Azure's reliable cloud infrastructure and OpenAI's advanced AI research.


GPT-4 Turbo with Vision in Azure AI offers cutting-edge AI capabilities along with enterprise-grade security and responsible AI governance. When combined with other Azure AI services, it can also add features like video prompting, object grounding, and enhanced optical character recognition (OCR). Customers like WPP and Instacart are using GPT-4 Turbo with Vision and Azure AI Vision today; check out this blog to hear more of their stories.
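As a sketch of how a multimodal request is shaped, the helper below pairs a text prompt with an inline base64-encoded image in a single chat message. The deployment name and client call in the trailing comment are illustrative; consult the Azure OpenAI documentation for your deployment's specifics.

```python
# Build a GPT-4 Turbo with Vision style message: the image travels alongside
# the text prompt as a base64 data URL inside the message content.
import base64

def make_vision_messages(prompt, image_bytes, mime="image/png"):
    """Build a chat message pairing a text prompt with an inline image."""
    data_url = f"data:{mime};base64,{base64.b64encode(image_bytes).decode()}"
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }]

# Tiny placeholder bytes stand in for a real image file here.
messages = make_vision_messages("What is shown in this image?", b"\x89PNG...")
# With an OpenAI-style client you would then call something like:
#   client.chat.completions.create(model="<your-vision-deployment>", messages=messages)
```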


Available Tomorrow: Fine Tuning for GPT 3.5 Turbo and Other Models

In October 2023, we announced the public preview of fine-tuning capabilities for OpenAI models. Starting tomorrow, December 15, 2023, fine-tuning will be generally available for models including Babbage-002, Davinci-002, and GPT-35-Turbo. Developers and data scientists can now customize these Azure OpenAI Service models for specific tasks. We continue to push innovation boundaries with these new capabilities and are excited to see what developers build next with generative AI.
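For chat models, fine-tuning starts with a JSONL training file in which each line is one example conversation. The sketch below prepares such a file; the file name and example content are illustrative, and the exact accepted format is defined by the fine-tuning documentation for your model.

```python
# Prepare chat fine-tuning data: one JSON object per line, each with a
# "messages" conversation (system / user / assistant). Content is illustrative.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You answer in exactly one sentence."},
        {"role": "user", "content": "What is Azure AI Studio?"},
        {"role": "assistant", "content": "Azure AI Studio is a platform for building generative AI applications."},
    ]},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
# The file is then uploaded and a fine-tuning job created against your
# Azure OpenAI resource, e.g. something like:
#   client.fine_tuning.jobs.create(model="gpt-35-turbo", training_file=file_id)
```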


Expansion to the Azure AI Model Catalog

While Azure operates our own models as part of the Azure AI services like our Speech, Vision, and Language models, as well as Azure OpenAI, we also realize that customers often need models that we do not operate. Increasingly, we’re seeing customers look to deploy models that have been fine-tuned to specific tasks. To this end, we’ve operated a full model catalog in Azure AI Studio for a long time, and it is well-stocked with a broad variety of models. Today, we’re announcing the addition of six new models. Phi-2 and Orca 2 are available now and other models below are coming soon. 


Phi-2. Phi-2 is a small language model (SLM) from Microsoft with 2.7 billion parameters. Phi-2 shows the power of SLMs, exhibiting dramatic improvements in reasoning capabilities and safety compared to Phi-1.5 while maintaining a relatively small size compared to other transformers in the industry. With the right fine-tuning and customization, these SLMs are incredibly powerful tools for applications both on the cloud and on the edge. Learn more.


DeciLM. DeciLM-7B is a decoder-only text generation model with 7.04 billion parameters, licensed under Apache 2.0. Deci reports that DeciLM-7B is the most accurate 7B base model to date, surpassing several models in its class.


DeciDiffusion. DeciDiffusion 1.0 is a diffusion-based text-to-image generation model. While it maintains foundational architecture elements from Stable Diffusion, such as the Variational Autoencoder (VAE) and CLIP's pre-trained Text Encoder, DeciDiffusion introduces significant enhancements. The primary innovation is the substitution of U-Net with the more efficient U-Net-NAS, a design pioneered by Deci. This novel component streamlines the model by reducing the number of parameters, leading to superior computational efficiency.


DeciCoder-1B. DeciCoder-1B is a 1-billion-parameter decoder-only code completion model trained on the Python, Java, and JavaScript subsets of the Starcoder Training Dataset. The model uses Grouped Query Attention and has a context window of 2048 tokens. It was trained using a Fill-in-the-Middle training objective. The model's architecture was generated by Deci's proprietary Neural Architecture Search-based technology, AutoNAC.


Orca 2. Like Phi-2, Orca 2 from Microsoft explores the capabilities of smaller LMs (on the order of 10 billion parameters or less). Orca 2 shows that improved training signals and methods can empower smaller language models to achieve enhanced reasoning abilities, which are typically found only in much larger language models. Orca 2 significantly surpasses models of similar size (including the original Orca model) and attains performance levels similar to or better than models 5-10 times larger, as assessed on complex tasks that test advanced reasoning abilities in zero-shot settings. Learn more.


Mixtral 8x7b. Mixtral shares a similar architecture with Mistral 7B but combines 8 expert models in one network using a technique called Mixture of Experts (MoE). Mixtral decodes at the speed of a 12B-parameter dense model even though it contains 4x the number of effective parameters.
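The speed/size trade-off of Mixture of Experts can be illustrated with a toy routing layer (this is a didactic sketch, not Mixtral's actual implementation or dimensions): a router scores the experts per token and only the top-k experts' weights are used, so per-token compute tracks the active parameter count rather than the total.

```python
# Toy Mixture-of-Experts routing: each token activates only top_k of the
# experts, so most of the model's parameters sit idle for any given token.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 8, 2
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # expert weights
router = rng.standard_normal((d, n_experts))                       # routing weights

def moe_layer(x):
    """Route a token through its top-k experts, weighted by router scores."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]        # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                    # softmax over the chosen experts
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))
    return out, chosen

y, used = moe_layer(rng.standard_normal(d))
# Only `top_k` of the 8 expert matrices were touched for this token.
```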

For more information on other models launched at Ignite in our model catalog, visit here.


Azure AI Provides Powerful Tools for Model Evaluation and Benchmarking

It’s not enough to have a lot of models; customers need to be able to choose the model that meets their needs. To that end, Azure AI Studio provides a model benchmarking and evaluation subsystem, an invaluable tool for reviewing and comparing the performance of various AI models. The platform provides quality metrics for Azure OpenAI Service models such as gpt-4, gpt-4-32k, and gpt-35-turbo, as well as Llama 2 models such as Llama-2-7b. The metrics published in the model benchmarks help simplify the model selection process and enable users to make more confident choices when selecting a model for their task.


Previously, evaluating model quality could require significant time and resources. With the prebuilt metrics in model benchmarks, users can quickly identify the most suitable model for their project, reducing development time and minimizing infrastructure costs. In Azure AI Studio, users can access benchmark comparisons within the same environment where they build, train, and deploy their AI solutions. This enhances workflow efficiency and collaboration among team members. 


Learn more about Model benchmarks here.


Empowering Customers Around the Globe


These groundbreaking advancements not only amplify our capacity to generate diverse and imaginative content but also signal a shift in how we conceptualize AI’s potential. In fact, leading global law firm Dentons is working with Azure AI to implement Azure OpenAI Service models and Meta’s Llama 2 into its generative AI application called “fleetAI.” Dentons has over 750 lawyers and business services professionals utilizing Azure AI models internally to summarize legal contracts and extract key parts from documents, resulting in significant time savings.


“Through the incorporation of a lease report generator into our fleetAI system, developed with Microsoft Azure's Open AI service, we have revolutionized a time-consuming task that previously took 4 hours, reducing it to just 5 minutes,” said Sam Chen, Legal AI Adoption Manager for Dentons (UKIME). “This significant time saving enables our legal professionals to concentrate on more strategic tasks, thereby enhancing client service and underscoring our dedication to innovation.”


Our Commitment to Inclusive and Responsible AI Development for All


Responsible AI is a key pillar of AI innovation at Microsoft. In October 2023, we announced general availability of Azure AI Content Safety and at Microsoft Ignite 2023, enabled new capabilities to address harms and security risks that are introduced by large language models. The new features help identify and prevent attempted unauthorized modifications and identify when large language models generate material that leverages third-party intellectual property and content. With these capabilities, developers now have tools they can integrate as part of their generative AI applications to monitor content, minimize harm, and lower security risks.
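In practice, a content safety service returns a severity score per harm category, and the application decides how to act on those scores. The sketch below shows one such application-side policy; the category names and threshold are illustrative assumptions, not service defaults.

```python
# Act on content-safety analysis results: the service scores each harm
# category with a severity, and the app applies its own blocking policy.
# Categories, severities, and the threshold here are illustrative.

def is_allowed(categories_analysis, max_severity=2):
    """Allow content only if every category is at or below the threshold."""
    return all(c["severity"] <= max_severity for c in categories_analysis)

# Shape of a per-category analysis result (values illustrative):
sample = [
    {"category": "Hate", "severity": 0},
    {"category": "SelfHarm", "severity": 0},
    {"category": "Sexual", "severity": 0},
    {"category": "Violence", "severity": 4},
]
decision = is_allowed(sample)  # blocked: "Violence" exceeds the threshold
```

Keeping the threshold in application code lets each product tune how strict its moderation is per category without changing the analysis call itself.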


The IDC MarketScape recently looked at AI governance platforms that ensure AI/ML lifecycle governance, collaborative risk management, and regulatory excellence for AI across five key principles: fairness, explainability, adversarial robustness, lineage, and transparency. We are excited to share that Microsoft has been recognized as a leader in the inaugural IDC MarketScape Worldwide AI Governance Platforms 2023 Vendor Assessment. Read our blog to learn more about our placement and how customers are leveraging Azure AI to build and scale generative AI solutions responsibly.


One More Thing: Dark Mode in AI Studio


The user experience in Azure AI Studio matters a lot, and we are creating a more accessible AI ecosystem by collaborating with AI developers with disabilities. Today, we’re pleased to announce “dark mode,” a beloved feature of developers everywhere. Azure AI Studio’s dark mode is not only visually appealing but also plays a crucial role in enhancing accessibility, making Azure AI Studio more inclusive and comfortable to use for everyone. We hope you give your eyes some rest and enjoy this new feature as much as we do. To turn on dark mode, go to “Settings” in the app header and switch between light and dark themes.




Let’s shape the future of AI together


It has been an exciting year in the world of AI. There is a profound shift underway in the way we interact with applications, search for information, and get help with routine tasks. Copilots and assistants are transforming the way we learn, work, and communicate. We are excited to be at the forefront of this AI evolution, empowering developers and data scientists to build confidently with AI now and in the future.

































