LLM-based development tools: PromptFlow vs LangChain vs Semantic Kernel


Introduction

Data is the power behind human civilization and knowledge, and over the past decades we have seen an enormous increase in data shared across the internet. With this massive availability of data, as well as advanced compute, models were trained to process and understand natural language, and then to produce new and original content, bringing about Generative AI.
 
Globally, developers, data scientists, and engineers have created new applications, or advanced their existing ones, to take advantage of LLMs. While building a simple question-and-answer chatbot may not need advanced tools, in more complex scenarios an AI orchestrator makes the process much easier. At the center of LLM applications is the AI orchestration layer, which allows developers to build their own Copilot experiences, and this is where developer tools come into play to simplify development.
 


What are they?

These are the tools, libraries, and/or frameworks that ease the process of creating LLM applications by streamlining repetitive processes through automation. Some of the AI orchestrators include:
  • Semantic Kernel: an open-source SDK that allows you to orchestrate your existing code and services with AI.
  • LangChain: a framework for building LLM applications easily, giving you insight into how the application works.
  • PromptFlow: a set of developer tools that helps you build end-to-end LLM applications, taking them from idea to production.

Semantic Kernel

Semantic Kernel is an SDK that enables you to describe your LLM application's capabilities as plugins that the kernel can run.
 
Through plugins, Semantic Kernel allows developers to use semantic and native functions in their applications, and it employs a kernel to manage sequences of function calls. Below are some components of Semantic Kernel:
  • Kernel: the kernel is at the center stage of your development process as it contains the plugins and services necessary for you to develop your AI application.
  • Planners: special prompts that allow an agent to generate a plan for completing a task, for example through function calling.
  • Plugins: these allow you to give your copilot skills, using both code and prompts.
  • Memories: beyond connecting your application to LLMs, Semantic Kernel has a memory feature to store context and embeddings, giving additional information to your prompts.

Next, how do we put this into action? Let's delve into a practical scenario. Imagine you want to develop a Language Tutor: an application focused on Swahili, where users can quickly learn basic greetings and perform essential tasks. How do you create this chatbot using Semantic Kernel?

  1. Install the necessary libraries using: pip install semantic-kernel==0.9.8b1 openai
  2. Add your keys and endpoint from .env to your notebook, along the lines of the sketch below.
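A minimal sketch of this step, assuming python-dotenv is installed and that your .env file defines AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, and AZURE_OPENAI_DEPLOYMENT (the names are illustrative; match them to your own file):

import os
from dotenv import load_dotenv

# Load the variables defined in .env into the process environment
load_dotenv()

# Illustrative variable names; these are reused in step 4 below
api_key = os.getenv("AZURE_OPENAI_API_KEY")
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
model = os.getenv("AZURE_OPENAI_DEPLOYMENT")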
3. Create a services.py file to bring your LLM into your application:
""" This module defines an enumeration representing different services. """ from enum import Enum class Service(Enum): """ Attributes: OpenAI (str): Represents the OpenAI service. AzureOpenAI (str): Represents the Azure OpenAI service. HuggingFace (str): Represents the HuggingFace service. """ OpenAI = "openai" AzureOpenAI = "azureopenai" HuggingFace = "huggingface"

4. Create a new Kernel to host your application, then import Service, which allows you to add your LLM to the kernel.

# Import the Kernel class from the semantic_kernel module
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from services import Service

# Create an instance of the Kernel class
kernel = Kernel()

# Select the service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)
selectedService = Service.AzureOpenAI

# Set the deployment name, API key, and endpoint (loaded from .env in step 2)
deployment = model
endpoint = azure_endpoint

# Register an AzureChatCompletion service with the kernel under the id "default"
service_id = "default"
kernel.add_service(
    AzureChatCompletion(
        service_id=service_id,
        deployment_name=deployment,
        endpoint=endpoint,
        api_key=api_key,
    ),
)

5. Next, we create and add our plugin. The plugin folder TranslatePlugin contains our Swahili plugin, whose config.json and skprompt.txt files guide the model on how to perform its task. Once the plugin is imported, we invoke the Swahili function in our application.

# Set the directory path where the plugins are located
plugins_directory = "./prompt_templates_samples"

# Add the TranslatePlugin to the kernel and store the returned plugin functions
translateFunctions = kernel.add_plugin(parent_directory=plugins_directory, plugin_name="TranslatePlugin")

# Retrieve the Swahili translation function from the plugin
swahiliFunction = translateFunctions["Swahili"]

# Invoke the Swahili function with the specified parameters and print the result
result = await kernel.invoke(swahiliFunction, question="what is the WiFi password", time_of_day="afternoon", style="professional")
print(result)
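For reference, the skprompt.txt inside the Swahili plugin might look roughly like the sketch below; the exact wording is an assumption, mirroring the guidelines used in the LangChain prompt later in this post (Semantic Kernel templates reference inputs with the {{$variable}} syntax):

Translate the following task into Kiswahili.
Use the {{$time_of_day}} to determine the appropriate greeting to use during translation.
Incorporate the {{$style}} suggestion, if provided, to determine the tone for the translation.
After translating, add an English translation of the task.

Task: {{$question}}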

6. The output will be the requested translation, for example: 'Habari ya mchana! Tafadhali nipe nenosiri la WiFi.' (Translation: Good afternoon! Please provide me with the WiFi password.)


 

LangChain

LangChain is a framework that simplifies the process of building LLM applications. It supports Python and JavaScript, and its learning curve is gentle enough for beginners. It boasts a robust community, ensuring consistent updates and comprehensive features. LangChain contains components that allow you to extend interfaces and add external integrations to your applications. The main components include:
  • Model I/O: where you bring in your LLM and format its inputs and outputs.
  • Retrieval: in RAG applications, this component helps you load your data, connect to vector databases, and transform your documents to meet the needs of your application.
  • Other higher-level components:
    • Tools: allow you to create integrations with external services and applications.
    • Agents: decide which step to take next, acting as a guide through the flow.
    • Chains: sequences of calls linking various components together to create LLM apps.

In LangChain, we will use a chain to bind our prompt template and model together. Here is how we implement the Swahili tutor:
  1. Install the necessary libraries: pip install langchain openai
  2. Login to Azure CLI using az login --use-device-code and authenticate your connection.
  3. Add your keys and endpoint from .env to your notebook, then set the environment variables for your API key and type for authentication.
import os
from azure.identity import DefaultAzureCredential

# Get the Azure credential
credential = DefaultAzureCredential()

# Set the API type to `azure_ad`
os.environ["OPENAI_API_TYPE"] = "azure_ad"

# Set the API_KEY to the token from the Azure credential
os.environ["OPENAI_API_KEY"] = credential.get_token("https://cognitiveservices.azure.com/.default").token

4. Create your model and configure it to interact with Azure OpenAI:

# Import the necessary modules
import os
from langchain_core.messages import HumanMessage
from langchain_openai import AzureChatOpenAI

# These values are assumed to be set in your environment (e.g. via .env)
AZURE_OPENAI_API_VERSION = os.environ["AZURE_OPENAI_API_VERSION"]
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"]

# AzureChatOpenAI also reads AZURE_OPENAI_ENDPOINT from the environment
model = AzureChatOpenAI(
    openai_api_version=AZURE_OPENAI_API_VERSION,
    azure_deployment=AZURE_OPENAI_CHAT_DEPLOYMENT_NAME,
)

5. Use ChatPromptTemplate to curate your prompt

# Import the necessary modules
from langchain_core.prompts import ChatPromptTemplate

# Create a ChatPromptTemplate object with system and human messages
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates tasks into Kiswahili. Follow these guidelines:\n"
            "The translation must be accurate and culturally appropriate.\n"
            "Use the {time_of_day} to determine the appropriate greeting to use during translation.\n"
            "Be creative and accurate to communicate effectively.\n"
            "Incorporate the {style} suggestion, if provided, to determine the tone for the translation.\n"
            "After translating, add an English translation of the task in the specified language.\n"
            "For example, if the question is 'what is the WiFi password', your response should be:\n"
            "'Habari ya mchana! Tafadhali nipe nenosiri la WiFi.' (Translation: Good afternoon! Please provide me with the WiFi password.)",
        ),
        ("human", "{question}"),
    ]
)

6. Chain your model and prompt together to get a response

# Chain the prompt and the model together
chain = prompt | model

# Invoke the chain with the input parameters
response = chain.invoke(
    {
        "question": "what is the WiFi password",
        "time_of_day": "afternoon",
        "style": "professional",
    }
)

# Display the response
response

7. The output will be the requested translation.


 

PromptFlow

PromptFlow is a collection of tools you can get started with directly in Visual Studio Code, using the Prompt flow extension, or from Azure AI Studio. Building with PromptFlow streamlines the development cycle of LLM applications. As the name suggests, a PromptFlow application is a graphical flow connecting the different components of your application, all glued together.

PromptFlow additionally provides a customizable visual representation of your application that links together your LLM, prompts, and Python code. Using PromptFlow, you can quickly and easily iterate through your flows, add connections, debug, test, and deploy to your platform of choice. For more dynamic adaptation scenarios, you can use Semantic Kernel or LangChain inside your PromptFlow workflow. In addition to your own custom code, PromptFlow lets you create connections to external services such as LLMs and vector databases, as well as custom connections, to add to your flow and authenticate your services.

Let's see how to implement the Swahili AI tutor in PromptFlow:

  1. First, install the Prompt flow extension in Visual Studio Code.

2. Next, ensure you install the necessary dependencies and libraries you will need for the project, as sketched below.

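A minimal sketch of this step, assuming only the core packages are needed:

pip install promptflow promptflow-tools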

3. In our case, we will build a chat flow from the built-in template. From the Prompt flow extension, create a new chat flow for the application.


4. Once the flow is ready, we can open flow.dag.yaml and click on the visual editor to see how our application is structured.

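The generated flow.dag.yaml for a chat flow looks roughly like the sketch below; the deployment name (gpt-35-turbo) and connection name (open_ai_connection) are assumptions you would replace with your own:

inputs:
  chat_history:
    type: list
    is_chat_history: true
    default: []
  question:
    type: string
    is_chat_input: true
outputs:
  answer:
    type: string
    reference: ${chat.output}
    is_chat_output: true
nodes:
- name: chat
  type: llm
  source:
    type: code
    path: chat.jinja2
  inputs:
    deployment_name: gpt-35-turbo
    temperature: 0.7
    chat_history: ${inputs.chat_history}
    question: ${inputs.question}
  connection: open_ai_connection
  api: chat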

5. We will need to connect to our LLM; you can do this by creating a new connection. Update your Azure OpenAI endpoint and your connection name in the generated yaml, then click "Create connection" and your connection will be ready.

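The connection yaml you fill in looks roughly like this sketch (the connection name and endpoint are placeholders):

$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: open_ai_connection
type: azure_open_ai
api_key: "<user-input>"
api_base: "https://<your-resource>.openai.azure.com/"
api_type: "azure"

Alternatively, the same connection can be created from the terminal with the pf CLI, for example: pf connection create -f azure_openai.yaml --set api_key=<your-key>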

6. Update the chat node to use the new connection, then run the flow to test your application.

7. Update the chat.jinja2 file to customize the prompt template, as sketched below.

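A sketch of a customized chat.jinja2 for the tutor, assuming the time_of_day and style inputs added in the next step (the surrounding structure is what the chat template generates):

system:
You are a helpful assistant that translates tasks into Kiswahili.
Use the {{time_of_day}} to determine the appropriate greeting, and the {{style}} suggestion, if provided, to set the tone.
After translating, add an English translation of the task.

{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}

user:
{{question}}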

8. Edit the flow.dag.yaml file to add more functionality to your flow; in our case, for the tutor, we will add more inputs, as sketched below.

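For the tutor, the inputs section of flow.dag.yaml might be extended roughly like this (the defaults are illustrative); matching references (time_of_day: ${inputs.time_of_day} and style: ${inputs.style}) also go under the chat node's inputs so the template in chat.jinja2 can use them:

inputs:
  chat_history:
    type: list
    is_chat_history: true
    default: []
  question:
    type: string
    is_chat_input: true
  time_of_day:
    type: string
    default: afternoon
  style:
    type: string
    default: professional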
9. Run the flow in interactive mode and see your AI tutor come to life.
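From the terminal, the same can be done with the pf CLI, assuming the flow lives in the current directory:

pf flow test --flow . --interactive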

In Summary:

If you are getting started and building LLM applications at scale, you might need an AI orchestrator to ease your development process. Choosing between LangChain, Semantic Kernel, and PromptFlow depends on your project's scope, scale, required flexibility, and your preferred programming language.
