docs: improves docs
micheleriva committed Oct 15, 2024
1 parent cab2e62 commit 834725b
Showing 2 changed files with 132 additions and 0 deletions.
10 changes: 10 additions & 0 deletions packages/docs/config/menu/open-source.ts
@@ -92,6 +92,16 @@ const openSourceMenu = [
      }
    ]
  },
  {
    label: 'Answer Engine',
    collapsed: false,
    items: [
      {
        label: 'Orama Answer Engine',
        link: '/open-source/usage/answer-engine/introduction'
      }
    ]
  },
  {
    label: 'Supported languages',
    collapsed: true,
@@ -0,0 +1,122 @@
---
title: Orama as an Answer Engine
description: Learn how to use Orama as an answer engine to provide ChatGPT-like experiences on your website.
---

With Orama 3.0, we introduced a new feature called **AnswerSession** that lets you build ChatGPT-like experiences on your website.

It uses a free feature of Orama Cloud called [**Secure Proxy**](https://orama.com/blog/announcing-the-orama-secure-ai-proxy) to proxy your queries to the OpenAI API, so you never have to expose your API key on the client side.

The APIs are designed to be as close as possible to the **Orama Cloud** APIs, so you can easily migrate your projects from **Orama Cloud** to **Orama Open Source** and vice-versa.

## Getting Started

Orama implements a full-text, vector, and hybrid search engine, as well as a complete **RAG** (Retrieval-Augmented Generation) pipeline to generate answers from your documents. All with minimal dependencies and configuration, so you can focus on building your project.
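To make the retrieve-then-generate flow concrete, here is a toy, dependency-free sketch (an illustration only, not Orama's implementation; `retrieve` and `buildPrompt` are hypothetical helpers):

```js
// Toy RAG sketch: naive keyword retrieval over an in-memory corpus,
// then a prompt assembled from the retrieved context. In a real setup,
// Orama performs the retrieval (full-text, vector, or hybrid) and the
// secure proxy forwards the prompt to the LLM.
const docs = [
  { id: "1", name: "John Doe" },
  { id: "2", name: "Michele Riva" },
];

function retrieve(query) {
  const q = query.toLowerCase();
  return docs.filter((doc) => doc.name.toLowerCase().includes(q));
}

function buildPrompt(query, hits) {
  const context = hits.map((hit) => hit.name).join("\n");
  return `Context:\n${context}\n\nQuestion: ${query}`;
}

console.log(buildPrompt("john", retrieve("john")));
// Context:
// John Doe
//
// Question: john
```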

To get started, you will need to create an account on [Orama Cloud](https://cloud.orama.com) and generate an API key from the **"Secure Proxy"** section. Then, you can use the API key to create an **AnswerSession** and start generating answers.

Follow [this guide](/cloud/orama-ai/orama-secure-proxy) to get your API key for free and start using the **AnswerSession** APIs!

## AnswerSession

Creating an answer session is as simple as:

```js copy
import { create, insert, AnswerSession } from "@orama/orama";
import { pluginSecureProxy } from "@orama/plugin-secure-proxy";

const secureProxy = await pluginSecureProxy({
  apiKey: "my-api-key",
  defaultProperty: "embeddings",
  models: {
    embeddings: "openai/text-embedding-3-small",
    chat: "openai/gpt-4o-mini"
  }
})

const db = await create({
  schema: {
    name: 'string'
  },
  plugins: [secureProxy]
})

await insert(db, { name: "John Doe" })
await insert(db, { name: "Michele Riva" })

const session = new AnswerSession(db, {
  // Customize the system prompt
  systemPrompt: 'You will get a name as context, please provide a greeting message',
  events: {
    onStateChange: console.log
  }
})

const response = await session.ask({
  term: 'john',
})

console.log(response) // Hello, John Doe! How are you doing?
```

The `onStateChange` handler (here just `console.log`) receives the session state on every update, letting you reactively update your UI based on the current state of the session.

In the example above, `onStateChange` is triggered for every new object pushed to the following array (the `state`):

```js
[
  // As soon as you call the `.ask` method, the state is populated as follows:
  {
    interactionId: "cm2anntif000008l84lvqfrvc", // Unique interaction ID for the session
    aborted: false, // Whether the session was aborted
    loading: true, // Whether the session is loading
    query: "john", // The query that was sent to the API
    response: "", // The response from the API, empty until the API responds
    sources: null, // The sources used to generate the response
    error: false, // Whether there was an error
    errorMessage: null, // The error message, if any
  },
  // Then, Orama performs the search and pushes the sources to the state:
  {
    interactionId: "cm2anntif000008l84lvqfrvc",
    aborted: false,
    loading: false,
    query: "john",
    response: "",
    sources: { // The sources used to generate the response, in the same format as an Orama search result
      count: 1,
      elapsed: { raw: 0.123, formatted: "100μs" },
      hits: [
        {
          id: "1-19238",
          score: 0.8,
          document: { name: "John Doe" }
        }
      ]
    },
    error: false,
    errorMessage: null,
  },
  // Finally, Orama updates this entry with incoming chunks from OpenAI (via the secure proxy):
  {
    interactionId: "cm2anntif000008l84lvqfrvc",
    aborted: false,
    loading: false,
    query: "john",
    response: "Hello, John Doe!",
    sources: {
      count: 1,
      elapsed: { raw: 0.123, formatted: "100μs" },
      hits: [
        {
          id: "1-19238",
          score: 0.8,
          document: { name: "John Doe" }
        }
      ]
    },
    error: false,
    errorMessage: null,
  }
]
```
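These state snapshots are what you would typically render in your UI. As a minimal, framework-free sketch (the `describeState` helper below is hypothetical, not part of Orama's API), you can map each snapshot to a display status:

```js
// Hypothetical helper (not part of Orama's API): map one AnswerSession
// state snapshot, of the shape shown above, to a UI status label.
function describeState(state) {
  if (state.error) return `Error: ${state.errorMessage}`;
  if (state.loading) return "Searching…";
  if (state.sources && !state.response) {
    return `Found ${state.sources.count} source(s), generating answer…`;
  }
  return state.response; // the (possibly still streaming) answer text
}

// Feeding it simplified versions of the three snapshots above, in order:
const snapshots = [
  { aborted: false, loading: true, query: "john", response: "", sources: null, error: false, errorMessage: null },
  { aborted: false, loading: false, query: "john", response: "", sources: { count: 1 }, error: false, errorMessage: null },
  { aborted: false, loading: false, query: "john", response: "Hello, John Doe!", sources: { count: 1 }, error: false, errorMessage: null },
];

for (const snapshot of snapshots) {
  console.log(describeState(snapshot));
}
```

In practice, you would call something like `describeState` inside your `onStateChange` handler and write the result into your component's state.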
