More Performance + AI Integration | Azure Database for MySQL - Flexible Server

This post has been republished via RSS; it originally appeared at Microsoft Tech Community - Latest Blogs.

Bring your MySQL workloads to Azure. Azure Database for MySQL — Flexible Server offers a powerful, fully managed solution for MySQL workloads, providing unique platform-level optimizations that significantly enhance connection scaling and cost performance. It delivers high efficiency and reduced latency for mission-critical applications, with up to twice the performance of other offerings. Integration with Azure OpenAI Service and Azure AI Search further extends its capabilities, enabling intelligent vector-based search and generative AI responses for more accurate and relevant answers to user queries.

 

Join Parikshit Savjani, Azure Database for MySQL Principal Group PM, as he shares how Azure Database for MySQL — Flexible Server transforms traditional MySQL applications into high-performance, intelligent systems by combining the scalability and security of Azure with advanced AI-driven insights.

 


 

Run MySQL on Azure.

 


A fully managed service with unique optimizations — doubling performance and enabling AI-driven search and responses. See it here.

 

 

Redirect transaction logging to high-speed SSDs.

 


Higher throughput, reduced latency, and higher cost efficiency during traffic spikes. Check it out.

 

 

Enable semantic search in Azure Database for MySQL.

 


Relevant and intelligent search results with Azure AI Search and OpenAI services. Take a look.

 

 

Watch our video here:

 

 


QUICK LINKS:

00:00 — Run MySQL on Azure
00:56 — Run your LAMP stack on Azure
01:24 — Accelerated Logs capability
03:02 — Vector-based search and generative AI
04:02 — Leverage Azure AI Search & Azure OpenAI services
04:29 — Establish database connection
05:52 — Build vector index
07:04 — Create an Indexer
07:32 — Semantic Search
07:54 — Test it in the Azure AI playground
08:49 — Wrap up

 

 

Link References

Check out https://aka.ms/mysql-resources

Join the community for Azure Database for MySQL at https://aka.ms/mysql-contributors

 

 

Unfamiliar with Microsoft Mechanics?

Microsoft Mechanics is Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

 

 

To keep getting this insider knowledge, join us on social:


Video Transcript:

-If MySQL is your app data tier of choice, you will want to run it on Azure. In the next few minutes, I’ll show you how Azure Database for MySQL Flexible Server, as a fully managed service, builds on community-driven MySQL to provide unique platform-level optimizations where, for example, using the new Accelerated Logs capability, you can significantly improve connection scaling and cost performance for your web apps, with up to twice the performance compared to other offerings.

 

-Or you can combine Azure Database for MySQL with Azure AI Search and Azure OpenAI Service to light up intelligent vector search for natural language querying and generative AI responses, with your existing web and e-commerce apps, all while operating on an enterprise-grade foundation of advanced platform-level security and resiliency across Azure’s global network backbone, and taking advantage of built-in AI-powered technical guidance with Copilot in Azure. Microsoft, in fact, is a major open-source contributor, and as an open platform, you can run your entire LAMP stack on Azure.

 

-This is literally pure MySQL on Azure, with community updates implemented within months of being available for a high level of compatibility and extension support, with a few added advantages, where we give you platform-level optimizations for your workloads running on Azure. 

 

-For example, here I’ve deployed the LAMP-based Magento framework for my e-commerce app, and with our new Accelerated Logs capability in the Business Critical service tier of Azure Database for MySQL Flexible Server, I’m going to show you how we built on top of MySQL’s robust scale-out architecture to improve cost performance with reduced latency for disk writes.

 

-This e-commerce app is prone to bursts of traffic whenever there is a specific promotion or event, and so I’m going to use the sysbench tool, running for around 300 seconds with 128 threads against 20 tables of 20 million records each, to simulate high-throughput traffic to the database and measure the number of transactions and queries per second.

 

-I’ll first run this test without Accelerated Logs enabled, and you’ll see that measured throughput is around 4,500 transactions per second, 90,000 queries per second, and latency is around 51 milliseconds at the 95th percentile.

 

-Let’s now go back to the Azure portal to enable Accelerated Logs. And in sysbench, we’ll run the exact same test again and let the transactions process. 

 

-Once it’s completed, you’ll see that we have doubled the throughput with 9,500 transactions per second versus 4,500 before, and 190,000 queries per second versus 90,000 before. And the response time is reduced by more than half with latency measured at 22 milliseconds at the 95th percentile versus 51 milliseconds before. 
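The before-and-after figures quoted above imply roughly a 2.1x throughput gain and a better-than-half reduction in p95 latency; a quick check of the arithmetic:

```python
# Benchmark figures quoted in the demo (sysbench, 128 threads)
before = {"tps": 4_500, "qps": 90_000, "p95_latency_ms": 51}
after = {"tps": 9_500, "qps": 190_000, "p95_latency_ms": 22}

tps_speedup = after["tps"] / before["tps"]                                  # ~2.11x throughput
latency_reduction = 1 - after["p95_latency_ms"] / before["p95_latency_ms"]  # ~57% lower p95 latency
```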

 

-We are able to achieve this through Azure’s platform-level optimizations, where under the covers, we keep regular data file IO operations where they are, but redirect IO-intensive transaction logging to higher speed SSDs for lower latency and higher performance. 

 

-This ensures better responsiveness, higher throughput, and cost efficiency for your mission-critical workloads. Next, let’s make the same workload more intelligent with vector-based search and generative AI responses. This is the front end of my e-commerce app. Its search is keyword-based, and that has limitations. You can see here I’m searching for a rain jacket for women, and because it’s searching on jacket as a keyword, it returns not only women’s jackets but also men’s, and not all of the jackets are necessarily rain jackets.

 

-Now, let me show you the same thing using semantic search with the responses returned using a custom generative AI experience. I’ll use natural language to type, “We live in Seattle and I’m looking for a rain jacket for my wife which is highly rated and recommended by other customers.” 

 

-As you can see, the Magento Product Recommender Chat was able to interpret my requirements in natural language, perform semantic search from the product and reviews data stored in our backend MySQL database, and summarize the review results back to me to recommend the Inez Full-Zip Jacket. 

 

-Let’s see what is happening on the backend. To support semantic search for our e-commerce app, we leverage Azure AI Search and Azure OpenAI services. Azure AI Search pulls the product and reviews data from the backend MySQL database for Magento, using an indexer that runs periodically. The reviews data is further chunked and vectorized using Azure OpenAI’s text embedding model. In Azure AI Search, the vectorized data is then persisted in a vector search index. In fact, let me show you how we implement this architecture in code. 

 

-Here we use Python code in a Jupyter Notebook to define, build, and refresh a vector search index in Azure AI Search. And the first thing we need to do is establish our database connection. To do that, I’ve defined the connection string for the Azure Database for MySQL Flexible Server hosting Luma’s Magento e-commerce web database.
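The transcript doesn’t show the connection code itself; a minimal sketch of what such a connection setup might look like, assuming the common mysql-connector-python driver and hypothetical server, user, and database names (Azure Database for MySQL requires TLS by default):

```python
def mysql_connection_config(host: str, user: str, password: str, database: str) -> dict:
    """Build a mysql.connector-style config dict for Azure Database for MySQL
    Flexible Server. All values here are placeholders for illustration."""
    return {
        "host": host,          # e.g. <server-name>.mysql.database.azure.com
        "user": user,
        "password": password,
        "database": database,
        "port": 3306,          # default MySQL port
        "ssl_disabled": False, # the service enforces TLS by default
    }

cfg = mysql_connection_config(
    "magento-server.mysql.database.azure.com",  # hypothetical server name
    "admin_user", "<password>", "magento",
)
# With real credentials you would then connect, e.g.:
# import mysql.connector
# conn = mysql.connector.connect(**cfg)
```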

 

-Next, we need to connect to the Azure AI services that we’ll use to generate embeddings. Here I’ve entered the Azure OpenAI deployment details, including the API base, API key, and API type, which in this case is Azure. I’m using OpenAI’s text-embedding-ada-002 as the embedding model, with an embedding size of 1,536 dimensions, and the GPT-4 large language model. Now I have the connections in place to start building a vector index.
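A config fragment capturing those deployment details might look like the following; the resource name, API key, and API version are placeholders (only the model names and embedding size come from the demo):

```python
# Azure OpenAI deployment details from the demo; endpoint/key/version are placeholders.
EMBEDDING_MODEL = "text-embedding-ada-002"  # embedding model deployment
EMBEDDING_DIMENSIONS = 1536                 # vector size produced by ada-002
CHAT_MODEL = "gpt-4"                        # large language model for responses

azure_openai_config = {
    "api_type": "azure",
    "api_base": "https://<resource-name>.openai.azure.com",  # from the Azure portal
    "api_key": "<api-key>",
    "api_version": "2023-05-15",  # assumed version; check your deployment
}
```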

 

-I’ll use the service endpoint for Azure AI Search. Here I’ve defined the search key, the service endpoint details, and the index name that I’ll use later. Now, I need to define the query that AI Search will use to populate the vector index. In this case, because the products and customer reviews data are stored in separate tables in the Magento MySQL database, I’ve defined a single view abstraction named product_review_data_all in the database. This joins the product and reviews tables to pull all the products information with their respective reviews.
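Only the view name product_review_data_all is given in the demo; a sketch of what such a view definition could look like, with hypothetical table and column names standing in for the real Magento schema:

```python
# Hypothetical table/column names; only the view name comes from the demo.
CREATE_VIEW_SQL = """
CREATE OR REPLACE VIEW product_review_data_all AS
SELECT p.product_id,
       p.name        AS product_name,
       p.description AS product_description,
       r.review_id,
       r.title       AS review_title,
       r.detail      AS review_text,
       r.rating
FROM products AS p
JOIN reviews  AS r ON r.product_id = p.product_id;
"""
# Executed once against the Magento database, e.g. cursor.execute(CREATE_VIEW_SQL)
```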

 

-Now, I can complete the configuration of the vector index that I named earlier. Because customer review text strings can be quite long and span multiple themes, we need to break those into smaller pieces or chunks so that each main theme in any given review gets its own unique vector embedding. I’ve done that here, and each chunk will get a new row. This is represented by a unique review ID field. Below that, I’ve added seven additional parent fields to each new row to provide more context. We’ll define the process that splits the review text into chunks in a moment. 
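The chunking step described above can be illustrated with a simplified word-window splitter; this is a stand-in for the AI Search Split Skill used in the demo, with assumed window and overlap sizes:

```python
def chunk_review(text: str, max_words: int = 60, overlap: int = 10) -> list[str]:
    """Split a long review into overlapping word-window chunks so each main
    theme can get its own vector embedding. A simplified stand-in for the
    Azure AI Search Split Skill; window/overlap sizes are illustrative."""
    words = text.split()
    if len(words) <= max_words:
        return [text]
    chunks, start = [], 0
    step = max_words - overlap  # slide forward, keeping `overlap` words of context
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        start += step
    return chunks
```

Each returned chunk would become a new row in the index with its own review ID, plus the parent fields for context.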

 

-For the vector index itself, I’ve used Hierarchical Navigable Small World, or HNSW graphs, which is one of the most popular algorithms for similarity search. I’ve defined the HNSW parameters here, and I’ve defined the connection settings for the VectorSearchProfile and vectorizers that I’ll use in a moment. 
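HNSW approximates nearest-neighbour search over the embeddings; the underlying idea can be shown with an exact brute-force version using cosine similarity (what HNSW trades for speed at scale), on toy vectors:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length, non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec: list[float], index: dict[str, list[float]]):
    """Exact nearest neighbour by cosine similarity. HNSW builds a layered
    graph to approximate this answer without scanning every vector."""
    return max(index.items(), key=lambda kv: cosine_similarity(query_vec, kv[1]))
```

In production the vectors are 1,536-dimensional ada-002 embeddings rather than these toy 2-D examples.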

 

-Now, for the vector index, I’ve defined the search types. Here I’m using vector search as well as semantic search, which together produce what’s called hybrid search. It means you can search using keywords or natural language descriptions. 
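Hybrid search fuses the keyword and vector result lists into one ranking; Azure AI Search documents Reciprocal Rank Fusion (RRF) for this step, which can be sketched as:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked result lists (e.g. keyword + vector) into one.
    Each document scores 1/(k + rank) per list it appears in; k=60 is the
    commonly cited default damping constant."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A document that appears high in both lists outranks one that appears in only one, which is why hybrid search handles both keywords and natural language descriptions.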

 

-So I’ll go ahead and run it. Once it completes, you’ll see our Magento review index is now created. Next, we create an indexer that will pull the product reviews data from our MySQL database, split those into chunks, generate the vectors for each chunk, and write those to the search index. 

 

-Here we are using two pre-built skills: the Split Skill generates chunks from each review, and Azure OpenAI’s Embedding Skill then uses those chunks to generate the vector embedding values for the index. Now we can run everything to insert the vector data into our vector index. Our Azure AI Search service is now ready for semantic search queries. In fact, I can try this out from the notebook.
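The indexer pipeline — pull reviews, chunk, embed, write to the index — can be sketched end to end; here the embedding call and the AI Search client are injected as stand-ins (the real Embedding Skill and index writer run inside Azure AI Search), and the naive sentence split stands in for the Split Skill:

```python
def index_reviews(reviews: list[dict], embed, search_upsert) -> list[dict]:
    """Minimal pipeline mirroring the indexer: chunk each review, embed each
    chunk, and write the resulting documents to the vector index.
    `embed` and `search_upsert` are injected stand-ins for Azure OpenAI's
    Embedding Skill and the AI Search index writer."""
    docs = []
    for review in reviews:
        # Naive sentence split; the real Split Skill handles this step.
        for i, chunk in enumerate(review["text"].split(". ")):
            docs.append({
                "review_chunk_id": f"{review['id']}-{i}",  # unique ID per chunk
                "parent_review_id": review["id"],          # parent field for context
                "content": chunk,
                "content_vector": embed(chunk),            # 1,536-dim in production
            })
    search_upsert(docs)  # persist the documents into the vector index
    return docs
```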

 

-I’ll execute a semantic search query, “suggest me some rain jackets for women,” against our vector search index, and you’ll see the same results that we saw before in our web app. The Inez Full-Zip Jacket comes with the highest semantic search score and is our top result. And with our index running, we can also try this out in the Azure AI Studio playground. To do that, I’ll just need to connect it to my search index. From setup, in the Add Your Data tab, I’ll add my data source. In this case, it’s the Azure AI Search type we just created. Then I need to select my index. I’ll enable the vector search option and then use the ada-002 embedding model.

 

-I’ll keep the search type as hybrid plus semantic, and that’s it. From here, we are ready to start configuring the GenAI experience in the chat playground. Using the system message, I can instruct the OpenAI GPT-4 model to behave as a product recommender assistant. Now, to test it out, I’ll paste in the same query from before and run it. Then as it completes, you’ll see the Inez Full-Zip Jacket shows up as the top recommended rain jacket, along with a brief summary of the reviews. Now with everything tested, we are ready to run it in production, like you saw on our website earlier. 

 

-So that was an overview of the unique advantages of bringing your MySQL workloads to Azure, from its performance optimizations to how you can extend it with Azure AI services. To learn more, please check out aka.ms/mysql-resources. And you can join the community for Azure Database for MySQL at aka.ms/mysql-contributors. And keep watching Microsoft Mechanics for the latest tech updates. Thanks for watching.

 
