# Cerbos RAG Demo

## Requirements

- Docker (with the Compose plugin)
- Ollama

## Setup

- Grab the embedding model: `ollama pull mxbai-embed-large`
- Grab the LLM: `ollama pull llama3.1`
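
If you want to sanity-check that both models are available before starting the stack, a minimal sketch along these lines can be run with `ts-node`. It talks to Ollama's default HTTP API on port 11434; the endpoint paths and test prompts here are assumptions for illustration, not code from this repo.

```ts
// Smoke test for the two pulled models via Ollama's HTTP API
// (assumed to be listening on the default http://localhost:11434).

async function main() {
  // Embed a test string with mxbai-embed-large and report the vector size.
  const embedRes = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mxbai-embed-large", prompt: "hello world" }),
  });
  const { embedding } = (await embedRes.json()) as { embedding: number[] };
  console.log(`embedding dimensions: ${embedding.length}`);

  // Ask llama3.1 for a short, non-streamed completion.
  const genRes = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3.1", prompt: "Reply with one word.", stream: false }),
  });
  const { response } = (await genRes.json()) as { response: string };
  console.log(response);
}

main().catch(console.error);
```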

## Startup

```
docker compose -f docker-compose.chroma.yaml up
```

The app logs in the console will tell you which port the application is running on, typically http://localhost:3000/

The application is split into sections that demonstrate the data store, the embedding process, the vector store, and finally a full RAG chatbot.
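
At query time these pieces combine into the usual RAG loop: embed the user's question, fetch the nearest chunks from the vector store, and hand them to the LLM as context. The sketch below illustrates that loop in TypeScript against Chroma and Ollama; the collection name, ports, prompt wording, and client options are assumptions for illustration, not this repo's actual code.

```ts
import { ChromaClient } from "chromadb";

const OLLAMA = "http://localhost:11434"; // assumed default Ollama port
// Assumed default Chroma port; newer chromadb clients take { host, port } instead of { path }.
const chroma = new ChromaClient({ path: "http://localhost:8000" });

// Embed a piece of text with the same model used at indexing time.
async function embed(text: string): Promise<number[]> {
  const res = await fetch(`${OLLAMA}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mxbai-embed-large", prompt: text }),
  });
  return ((await res.json()) as { embedding: number[] }).embedding;
}

async function answer(question: string): Promise<string> {
  // 1. Retrieve the chunks closest to the question from a hypothetical "documents" collection.
  //    Depending on your chromadb client version you may also need to pass an embeddingFunction here.
  const collection = await chroma.getCollection({ name: "documents" });
  const results = await collection.query({
    queryEmbeddings: [await embed(question)],
    nResults: 4,
  });
  const context = (results.documents[0] ?? []).filter(Boolean).join("\n---\n");

  // 2. Ask llama3.1 to answer strictly from the retrieved context.
  const res = await fetch(`${OLLAMA}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      prompt: `Answer using only this context:\n${context}\n\nQuestion: ${question}`,
      stream: false,
    }),
  });
  return ((await res.json()) as { response: string }).response;
}

answer("What does this demo show?").then(console.log).catch(console.error);
```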

## Supported Vector Stores

- Chroma
- (WIP) Mongo Atlas
- (WIP) Pinecone