
name: Contoso Chat Retail with Azure AI Studio and Promptflow
description: A retail copilot that answers customer queries with responses grounded in the retailer's product and customer data.
languages: python, bicep, azdeveloper, prompty
products: azure-openai, azure-cognitive-search, azure, azure-cosmos-db
page_type: sample
urlFragment: contoso-chat

Contoso Chat Retail with Azure AI Studio and Promptflow

This sample creates a customer support chat agent for an online retailer called Contoso Outdoors. The solution uses a retrieval-augmented generation pattern to ground responses in the company's product and customer data. Customers can ask questions about the retailer's product catalog, and also get recommendations based on their prior purchases.

Open in GitHub Codespaces Open in Dev Containers


About This Sample

In this sample we build, evaluate, and deploy a customer support chat AI for Contoso Outdoors, a fictitious retailer that sells hiking and camping equipment. The implementation uses a Retrieval Augmented Generation (RAG) architecture to build a retail copilot that responds to customer queries with answers grounded in the company's product catalog and customer purchase history.

The sample uses Azure AI Search to create and manage search indexes for product catalog data, Azure Cosmos DB to store and manage customer purchase history data, and Azure OpenAI to deploy and manage the core models required for our RAG-based architecture.
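To make the pattern concrete, here is a minimal sketch of the retrieval step. The endpoint URLs, index, database, container, and field names are placeholders, not this repo's actual schema, and the code here is illustrative rather than taken from the sample:

    from azure.identity import DefaultAzureCredential, get_bearer_token_provider
    from azure.search.documents import SearchClient
    from azure.search.documents.models import VectorizedQuery
    from azure.cosmos import CosmosClient
    from openai import AzureOpenAI

    question, customer_id = "Tell me about hiking shoes", "2"
    credential = DefaultAzureCredential()

    # 1. Vectorize the question with the embedding model (deployment name assumed).
    aoai = AzureOpenAI(
        azure_endpoint="https://<your-openai>.openai.azure.com",  # placeholder
        api_version="2024-02-01",
        azure_ad_token_provider=get_bearer_token_provider(
            credential, "https://cognitiveservices.azure.com/.default"),
    )
    vector = aoai.embeddings.create(
        model="text-embedding-ada-002", input=question).data[0].embedding

    # 2. Retrieve matching products from the Azure AI Search index
    #    (index and vector field names are assumptions).
    search = SearchClient(
        "https://<your-search>.search.windows.net", "contoso-products", credential)
    products = list(search.search(
        search_text=question,
        vector_queries=[VectorizedQuery(
            vector=vector, k_nearest_neighbors=3, fields="contentVector")],
        top=3,
    ))

    # 3. Retrieve the customer's purchase history from Cosmos DB (names assumed).
    cosmos = CosmosClient("https://<your-cosmos>.documents.azure.com", credential)
    customer = (cosmos.get_database_client("contoso-outdoor")
                      .get_container_client("customers")
                      .read_item(item=customer_id, partition_key=customer_id))

    # Both results are then injected into the chat prompt to ground the model's answer.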

By exploring and deploying this sample, you will learn to build, evaluate, and deploy a RAG-based copilot application code-first on the Azure AI platform.

Table of Contents

  1. Features
  2. Getting Started
  3. Azure Deployment
  4. Local Development
  5. Guidance
  6. Troubleshooting
  7. Resources
  8. Contributing
  9. Trademarks

Features

The project comes with:

  • Sample model configurations, chat and evaluation prompts for a RAG-based copilot app.
  • Prompty assets to simplify prompt creation & iteration for this copilot scenario.
  • Sample product and customer data for the retail copilot scenario.
  • Sample application code for copilot chat and evaluation workflows.
  • Sample azd-template configuration for managing the application on Azure.
  • Managed Identity configuration as a best practice for managing sensitive credentials.

This is also a signature sample for demonstrating new capabilities in the Azure AI platform. Expect regular updates to showcase cutting-edge features and best practices for generative AI development.

Architecture Diagram

The Contoso Chat application implements a retrieval augmented generation pattern to ground the model responses in your data. The architecture diagram below illustrates the key components and services used for implementation and highlights the use of Azure Managed Identity to reduce developer complexity in managing sensitive credentials.

[Architecture diagram image]

Demo Video

🌟 | Watch for a video update showing how easy it is to go from code to cloud using this template and the Azure Developer CLI for deploying your copilot application.

Versions

This has been the signature sample used to showcase end-to-end development of a copilot application code-first on the Azure AI platform. It has been actively used for training developer audiences and industry partners at key events including Microsoft AI Tour and Microsoft Build. Use the links below to reference specific versions of the sample corresponding to a related workshop or event session.

Version              Description
v0 : #cc2e808        Microsoft AI Tour 2023-24 (dag-flow, jinja template) - Skillable Lab
v1 : msbuild-lab322  Microsoft Build 2024 (dag-flow, jinja template) - Skillable Lab
v2 : main            Latest version (flex-flow, prompty asset) - Azure AI Template

Getting Started

Pre-Requisites

To deploy and explore this sample, you will need:

  • An active Azure subscription with access to Azure OpenAI and available quota for the gpt-35-turbo, gpt-4, and text-embedding-ada-002 models used by this template (see Region Availability below)
  • A GitHub account (for GitHub Codespaces), or Docker Desktop and Python 3 for the local options described below

Setup Environment

You have three options for getting started with this template:

  • GitHub Codespaces - Cloud-hosted dev container (pre-built environment)
  • VS Code Dev Containers - Locally-hosted dev container (pre-built environment)
  • Manual Setup - Local environment setup (for advanced users)

We recommend GitHub Codespaces for the fastest start with the least effort. Instructions for all three options are provided below.

1. GitHub Codespaces

  1. Click the button to launch this repository in GitHub Codespaces.

    Open in GitHub Codespaces

  2. This opens a new browser tab; setup takes a few minutes to complete. Once ready, you should see a Visual Studio Code editor in the tab, with a terminal open.

  3. Sign in to your Azure account from the VS Code terminal

    azd auth login --use-device-code

2. VS Code Dev Containers

This option opens the project in your local VS Code using the Dev Containers extension. It is a useful alternative if your GitHub Codespaces quota is low or you need to work offline.

  1. Start Docker Desktop (install it if not already installed)

  2. Open the project by clicking the button below:

    Open in Dev Containers

  3. Once ready, the tab will refresh to show a Visual Studio Code editor with a terminal open.

  4. Sign in to your Azure account from the VS Code terminal

    azd auth login

3. Manual Setup (Local)

  • Verify that Python 3 is installed on your machine.

  • Install dependencies with pip install -r requirements.txt

  • Install Azure Developer CLI

    • Windows: winget install microsoft.azd
    • Linux: curl -fsSL https://aka.ms/install-azd.sh | bash
    • macOS: brew tap azure/azd && brew install azd
  • Sign in to your Azure account from your terminal

    azd auth login

Azure Deployment

  1. Use the same terminal where you previously authenticated with Azure.

  2. Provision and deploy your application to Azure. You will need to specify a valid subscription, deployment location, and environment name.

    azd up
  3. This step will take some time to complete.

    • Visit the Azure Portal to monitor progress.
    • Look for a new resource group matching the environment name
    • Click Deployments to track the status of the provisioning process
  4. Once provisioning completes, monitor the progress of the app deployment.

    • Visit the Azure AI Studio
    • Look for an AI Project associated with the above resource group
    • Click Deployments to track the status of the application deployment
  5. Once deployment completes, test the deployed endpoint from Azure AI Studio

    • Click the newly-created chat-deployment-xx endpoint listed
    • In the details page, click the Test tab for a built-in testing sandbox
    • In the Input box, enter a new query in this format and submit it:
      {"question": "Tell me about hiking shoes", "customerId": "2", "chat_history": []}
      
    • If successful, the response will be printed in the area below this prompt.

You can find your deployed retail copilot's Endpoint and Primary Key on the deployment details page from the last step. Use them to connect your preferred front-end application (e.g., a web app) to the deployed copilot for a real-time customer support chat experience.
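For example, a minimal front-end call might look like the following sketch, with the endpoint URL and key replaced by the values from your deployment details page (the URL shape shown here is only a placeholder):

    import requests

    endpoint_url = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"  # placeholder
    primary_key = "<primary-key>"  # placeholder, from the deployment details page

    # Same request shape as the Test tab example above.
    payload = {"question": "Tell me about hiking shoes", "customerId": "2", "chat_history": []}
    response = requests.post(
        endpoint_url,
        json=payload,
        headers={"Authorization": f"Bearer {primary_key}"},
    )
    response.raise_for_status()
    print(response.json())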

Local Development

Exploring the Prompty Asset

This sample contains an example chat.prompty asset that you can explore to understand this new capability. The file has the following components:

  1. A frontmatter section that defines the following attributes:
    • name of the application
    • description of the application functionality
    • authors of the application (one per line)
    • model description (with these parameters)
      • api type of endpoint (can be chat or completion)
      • configuration parameters including
        • type of connection (azure_openai or openai)
        • environment variables (e.g., azure_deployment for chat model)
      • parameters (max_tokens, temperature, response_format)
    • inputs - each with type and optional default value
    • outputs - specifying a type (e.g., string)
    • sample - an example of the inputs (e.g., for testing)
  2. A system context (defining the agent persona and behavior)
    • #Safety section enforcing responsible AI requirements
    • #Documentation section with template for filling product documentation
    • #Previous Orders section with template for filling relevant history
    • #Customer Context section with template for filling customer details
    • question section to embed user query
    • Instructions section to reference related product recommendations

This specific prompty takes three inputs: a customer object, a documentation object (which could be chat history), and a question string representing the user query. You can now load, execute, and trace individual prompty assets for a more granular prompt engineering solution.
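For instance, a prompty asset can be loaded and executed directly with promptflow's Prompty class. The sketch below uses an illustrative file path and placeholder input values, so adjust both to match the repo:

    from promptflow.core import Prompty

    chat = Prompty.load(source="contoso_chat/chat.prompty")  # path is illustrative
    result = chat(
        question="Tell me about hiking shoes",
        customer={"firstName": "Jane", "orders": []},  # placeholder customer object
        documentation={},                              # placeholder documentation object
    )
    print(result)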

Testing the Application Flow

This sample uses a flex-flow feature that lets you "create LLM apps using a Python class or function as the entry point" - making it easier to test and run them using a code-first experience.

  • This sample implements a function-based flow
  • The entry point is the get_response function in chat_request.py

You can now test the flow in different ways:

  • Run it directly, like any Python script (see the sketch after this list)
  • Convert it to a flow, then use pf flow test --flow ...
  • Start a UI to chat with the flow using pf flow test --flow ... --ui
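For the first option, a direct invocation might look like this sketch. It assumes you run it from the folder containing chat_request.py, and the argument names follow the sample input shown earlier; check the actual signature in chat_request.py:

    from chat_request import get_response

    response = get_response(
        customerId="2",
        question="Tell me about hiking shoes",
        chat_history=[],
    )
    print(response)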

🌟 | Watch this space for more testing guidance.

Guidance

Region Availability

This template uses gpt-35-turbo for chat completion, gpt-4 for chat evaluation and text-embedding-ada-002 for vectorization. These models may not be available in all Azure regions. Check for up-to-date region availability and select a region accordingly.

This template uses the Semantic Ranker feature of Azure AI Search which may be available only in certain regions. Check for up-to-date region availability and select a region accordingly.

  • We recommend using swedencentral for the OpenAI models
  • We recommend using eastus for the Azure AI Search resource

Note

By default, azd provisions all resources for the application into a single location. We set the Azure AI Search location to eastus (in the infra/ configuration), so you can use the default location setting to optimize for model availability and capacity in your region.

Costs

Pricing for services varies by region and usage, so exact costs cannot be estimated in advance. You can estimate the cost of this project's architecture with Azure's pricing calculator for these services:

  • Azure OpenAI - Standard tier; GPT-4, GPT-35-turbo, and Ada models. See Pricing
  • Azure AI Search - Basic tier, Semantic Ranker enabled. See Pricing
  • Azure Cosmos DB for NoSQL - Serverless, Free Tier. See Pricing

Security

This template uses Managed Identity for authentication with key Azure services, including Azure OpenAI, Azure AI Search, and Azure Cosmos DB. Applications can use managed identities to obtain Microsoft Entra tokens without storing or managing any credentials themselves, which removes a source of complexity and risk for developers.
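As a rough illustration of what this looks like in application code, the sketch below authenticates to Azure OpenAI with DefaultAzureCredential instead of an API key. The endpoint and deployment names are placeholders, not values from this repo:

    from azure.identity import DefaultAzureCredential, get_bearer_token_provider
    from openai import AzureOpenAI

    # Uses the app's managed identity when deployed, and your az/azd login locally.
    credential = DefaultAzureCredential()
    token_provider = get_bearer_token_provider(
        credential, "https://cognitiveservices.azure.com/.default")

    client = AzureOpenAI(
        azure_endpoint="https://<your-openai>.openai.azure.com",  # placeholder
        api_version="2024-02-01",
        azure_ad_token_provider=token_provider,  # no API key stored or rotated
    )
    reply = client.chat.completions.create(
        model="gpt-35-turbo",  # deployment name
        messages=[{"role": "user", "content": "Tell me about hiking shoes"}],
    )
    print(reply.choices[0].message.content)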

Additionally, we have added a GitHub Action that scans the infrastructure-as-code files and generates a report of any detected issues. As a best practice, we recommend that anyone creating solutions based on our templates enable the GitHub secret scanning setting in their repo.

Troubleshooting

Have questions or issues to report? Please open a new issue, after first verifying that the same question or issue has not already been reported. If it has, add any additional comments you may have to the existing issue.

Resources

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
