What The Hack - Azure OpenAI Fundamentals

Introduction

The AI Fluency Event introduces the conceptual foundations of Azure OpenAI models. Materials from this hack can serve as a foundation for building your own solution with Azure OpenAI. This initiative is based on a GPS FY24 OKR to increase Partner Solution Architects' capability, capacity, and confidence to lead AI-related partner engagements.

This hack consists of four challenges. It will be hosted as a 2-day event and is a team-based activity where students work in groups of 3-5 people to solve the challenges. Whether you have little to no experience with machine learning, or have experimented with OpenAI before but want a deeper understanding of how to implement an AI solution, this hack is for you.

Learning Objectives

This hack is for anyone who wants to gain hands-on experience experimenting with prompt engineering and machine learning best practices, and apply them to generate effective responses from Azure OpenAI models.

Participants will learn how to:

  • Compare Azure OpenAI models and choose the best one for a scenario
  • Use prompt engineering techniques on complex tasks
  • Manage large amounts of data within token limits, including the use of chunking and chaining techniques (see the sketch after this list)
  • Ground models to avoid hallucinations or false information
  • Implement embeddings using search retrieval techniques
  • Evaluate models for truthfulness and monitor model interactions for personally identifiable information (PII)
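
As a quick illustration of the chunking and chaining objective above, the following sketch (not part of the official hack materials) splits a long document into overlapping pieces that each stay under an approximate token budget. The word-count-based token estimate, chunk size, and overlap values are simplifying assumptions; a real solution would count tokens with the model's tokenizer.

```python
# Minimal sketch: split a long document into overlapping chunks that each stay
# under an approximate token budget, so the pieces can be sent to an Azure
# OpenAI model one at a time (chunking) and partial results combined (chaining).
# Tokens are approximated by whitespace-separated words here; a real
# implementation would use the model's tokenizer instead.

def chunk_text(text: str, max_tokens: int = 500, overlap: int = 50) -> list[str]:
    """Split `text` into overlapping chunks of at most `max_tokens` words."""
    words = text.split()
    chunks = []
    start = 0
    while start < len(words):
        end = min(start + max_tokens, len(words))
        chunks.append(" ".join(words[start:end]))
        if end == len(words):
            break
        start = end - overlap  # overlap preserves context across chunk boundaries
    return chunks

# Example usage with a stand-in for a long document.
document = "Azure OpenAI " * 2000
for i, chunk in enumerate(chunk_text(document)):
    print(f"chunk {i}: {len(chunk.split())} words")
```
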

Challenges

  • Challenge 00: Prerequisites - Ready, Set, GO!
    • Prepare your workstation to work with Azure.
  • Challenge 01: Prompt Engineering
    • What's possible through Prompt Engineering
    • Best practices when using OpenAI text and chat models
  • Challenge 03: Grounding, Chunking, and Embedding
    • Why is grounding important and how can you ground a Large Language Model (LLM)?
    • What is a token limit? How can you deal with token limits? What are techniques of chunking?
  • Challenge 04: Retrieval Augmented Generation (RAG)
    • How do we create ChatGPT-like experiences on enterprise data? In other words, how do we "ground" powerful LLMs primarily to our own data? (See the retrieval sketch after this list.)
  • Challenge 05: Responsible AI
    • What are services and tools to identify and evaluate harms and data leakage in LLMs?
    • What are ways to evaluate truthfulness and reduce hallucinations? What are methods to evaluate a model if you don't have a ground truth dataset for comparison?
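
To illustrate the retrieval idea behind Challenges 03 and 04, here is a minimal, self-contained sketch (not taken from the hack materials) of embedding-based retrieval followed by a grounded prompt. The `embed()` function, the Contoso sample chunks, and the prompt wording are all hypothetical stand-ins; a real solution would call an Azure OpenAI embeddings deployment and a vector store such as Azure AI Search.

```python
# Minimal sketch of the retrieval step behind RAG: embed the user question,
# find the most similar document chunk, and build a grounded prompt that tells
# the model to answer only from that chunk.

import math

def embed(text: str) -> list[float]:
    """Hypothetical embedding function; a real solution would call an Azure
    OpenAI embeddings deployment instead of this toy character-based vector."""
    vec = [0.0] * 16
    for i, ch in enumerate(text.lower()):
        vec[i % 16] += ord(ch) / 1000.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (guarding against zero norms)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / (norm or 1.0)

# Hypothetical enterprise chunks, already split and embedded.
chunks = [
    "Contoso's return policy allows returns within 30 days of purchase.",
    "Contoso support is available Monday through Friday, 9am to 5pm.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# Retrieve the most relevant chunk for the question.
question = "How long do customers have to return an item?"
q_vec = embed(question)
best_chunk = max(index, key=lambda item: cosine(q_vec, item[1]))[0]

# Ground the model by restricting it to the retrieved context.
grounded_prompt = (
    "Answer using only the context below. If the answer is not in the context, say you don't know.\n"
    f"Context: {best_chunk}\n"
    f"Question: {question}"
)
print(grounded_prompt)
```
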

Prerequisites

Contributors