Commit c04237f

feat(docs): new docs with mintlify (#153)

* feat(docs): new docs with mintlify
* add docs to .prettierignore

1 parent: 2a20dd9

Showing 29 changed files with 607 additions and 2 deletions.
@@ -0,0 +1,34 @@

# Mintlify Starter Kit

Click on `Use this template` to copy the Mintlify starter kit. The starter kit contains examples including

- Guide pages
- Navigation
- Customizations
- API Reference pages
- Use of popular components

### 👩‍💻 Development

Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview documentation changes locally. To install, use the following command:

```
npm i -g mintlify
```

Run the following command at the root of your documentation (where `mint.json` is located):

```
mintlify dev
```

### 😎 Publishing Changes

Changes will be deployed to production automatically after pushing to the default branch.

You can also preview changes using PRs, which generates a preview link of the docs.

#### Troubleshooting

- Mintlify dev isn't running - Run `mintlify install`; it will re-install dependencies.
- Page loads as a 404 - Make sure you are running in a folder with `mint.json`
@@ -0,0 +1,3 @@

## My Snippet

<Info>This is an example of a reusable snippet</Info>
@@ -0,0 +1,150 @@

---
title: "OpenAI"
description: "Learn how to use OpenAI with Pezzo."
---

## Using OpenAI With Pezzo

Ensure that you have the latest version of the Pezzo Client installed, as well as the OpenAI NPM package.

<CodeGroup>
```bash npm
npm i @pezzo/client openai
```
```bash yarn
yarn add @pezzo/client openai
```
```bash pnpm
pnpm add @pezzo/client openai
```
</CodeGroup>
### Initialize Pezzo and PezzoOpenAIApi

```ts
import { Pezzo, PezzoOpenAIApi } from "@pezzo/client";
import { Configuration } from "openai";

// Initialize the Pezzo client
export const pezzo = new Pezzo({
  apiKey: "<Your Pezzo API key>",
  projectId: "<Your Pezzo project ID>",
  environment: "Production",
});

// Initialize OpenAI
const configuration = new Configuration({
  apiKey: "<Your OpenAI API key>",
});

// Initialize the Pezzo OpenAI API
export const openai = new PezzoOpenAIApi(pezzo, configuration);
```

<Tip>
The `PezzoOpenAIApi` class extends the native `OpenAIApi` class. This ensures a seamless experience and allows you to use all OpenAI features.
</Tip>

You are now ready to interact with the OpenAI API through Pezzo, as you normally would. Let's give it a try!
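The extension pattern described in the tip can be pictured as a subclass that records each request before delegating to the base client. The sketch below is illustrative only (`BaseOpenAIApi` and the reporting logic are hypothetical stand-ins, not Pezzo's actual source):

```typescript
// Hypothetical base class standing in for the native OpenAIApi (illustration only)
class BaseOpenAIApi {
  async createChatCompletion(req: { model: string }): Promise<string> {
    return `completion from ${req.model}`;
  }
}

// A Pezzo-style wrapper: same API surface, plus request reporting (sketch)
class ReportingOpenAIApi extends BaseOpenAIApi {
  public reported: string[] = [];

  async createChatCompletion(req: { model: string }): Promise<string> {
    this.reported.push(req.model); // report the request (e.g. to an observability backend)
    return super.createChatCompletion(req); // delegate to the native client
  }
}

const api = new ReportingOpenAIApi();
```

Because the wrapper extends the base class, code written against the native client keeps working unchanged while every call is observed.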
### Making Requests to OpenAI

#### Option 1: With Prompt Management (Recommended)

We recommend managing your AI prompts through Pezzo. This allows you to easily manage your prompts and keep track of your AI requests. [Click here to learn about Prompt Management in Pezzo](platform/prompt-management).

Below is an example of how you can use Pezzo to retrieve a prompt, and then use it to make a request to OpenAI.

```ts
const prompt = await pezzo.getPrompt("GenerateTasks");

// Provide the prompt as-is to OpenAI
const response = await openai.createChatCompletion(prompt);

// Or override specific properties if you wish
const customResponse = await openai.createChatCompletion({
  ...prompt,
  model: "gpt-4",
});
```

Congratulations! You've benefited from seamless prompt version management and request tracking. Your request will now be visible in the **Requests** page of your Pezzo project.

#### Option 2: Without Prompt Management

If you don't want to manage your prompts through Pezzo, you can still use Pezzo to make requests to OpenAI and benefit from Pezzo's [Observability features](features/observability/overview).

You make requests to the OpenAI API exactly as you normally would. The only difference is that you use the `PezzoOpenAIApi` instance we created above. Here is an example:
```ts
const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  temperature: 0,
  messages: [
    {
      role: "user",
      content: "Hey, how are you doing?",
    },
  ],
});
```

You should now be able to see your request in the **Requests** page of your Pezzo project.
### Additional Capabilities

The Pezzo client enhances your developer experience by providing additional functionality on top of the OpenAI API. This is done through the second argument of the `createChatCompletion` method.

#### Variables

You can specify variables that will be interpolated by the Pezzo client before sending the request to OpenAI. This is useful if you want to use the same prompt for multiple requests, but with different variables.

<Tabs>
<Tab title="With Prompt Management">
```ts
const prompt = await pezzo.getPrompt("CheckAge");

const response = await openai.createChatCompletion(prompt, {
  variables: {
    age: 22,
    country: "France",
  },
});
```
</Tab>
<Tab title="Without Prompt Management">
```ts
const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  temperature: 0,
  messages: [
    {
      role: "user",
      content: "Hey, my age is {age}. Am I allowed to buy alcohol in {country}?",
    },
  ],
}, {
  variables: {
    age: 22,
    country: "France",
  },
});
```
</Tab>
</Tabs>

Notice the variables in the prompt. The Pezzo client will replace them with the values you specified in the `variables` object.
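The interpolation step can be sketched as a simple placeholder substitution over `{name}`-style tokens. This is an illustrative re-implementation of the idea, not the Pezzo client's actual code:

```typescript
// Replace {key} placeholders in a template with values from a variables object (sketch)
function interpolate(
  template: string,
  variables: Record<string, string | number>
): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    // Substitute known variables; leave unknown placeholders intact
    key in variables ? String(variables[key]) : match
  );
}
```

For example, `interpolate("Am I allowed to buy alcohol in {country}?", { country: "France" })` fills in the `country` placeholder before the request is sent.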
#### Custom Properties

You can also specify custom properties that will be sent to Pezzo. This is useful if you want to add additional information to your request, such as the user ID or the request ID. This information will be visible in the **Requests** page of your Pezzo project, and you will be able to filter requests based on these properties.

```ts
const response = await openai.createChatCompletion({
  ...
}, {
  properties: {
    userId: "some-user-id",
    traceId: "some-trace-id",
  },
});
```
@@ -0,0 +1,89 @@

---
title: "Pezzo Client"
---

The Pezzo client is an NPM package that allows you to easily integrate your application with Pezzo.

## Getting Started

### Install the Pezzo Client

Install the [@pezzo/client](https://www.npmjs.com/package/@pezzo/client) NPM package:

<CodeGroup>
```bash npm
npm i @pezzo/client
```
```bash yarn
yarn add @pezzo/client
```
```bash pnpm
pnpm add @pezzo/client
```
</CodeGroup>
### Initialize the Pezzo Client

You only need to initialize the Pezzo client once, and then you can use it throughout your application.

<CodeGroup>
```ts libs/pezzo.ts
import { Pezzo } from "@pezzo/client";
import { Configuration } from "openai";

// Initialize the Pezzo client and export it
export const pezzo = new Pezzo({
  apiKey: "<Your Pezzo API key>",
  projectId: "<Your Pezzo project ID>",
  environment: "<Your desired environment>",
});

// Initialize OpenAI
const configuration = new Configuration({
  apiKey: "<Your OpenAI API key>",
});
```
</CodeGroup>

In the above example, we created a `libs/pezzo.ts` file in which we instantiate the Pezzo client and export it. We can then import it in other areas of our application.
<CardGroup cols={2}>
  <Card
    title="Use Pezzo with OpenAI"
    icon="bolt-lightning"
    href="/client/integrations/openai"
  >
    Learn how to use Pezzo to observe and manage your OpenAI API calls.
  </Card>
</CardGroup>

## API Reference

<ResponseField name="Pezzo.constructor(options: PezzoOptions)" type="Function">
  <div style={{ marginLeft: 20 }}>
    <ParamField path="options" type="PezzoOptions">
      <div style={{ marginLeft: 20 }}>
        <ParamField path="apiKey" type="string" required="true">
          Pezzo API key
        </ParamField>
        <ParamField path="projectId" type="string" required="true">
          Pezzo project ID
        </ParamField>
        <ParamField path="environment" type="string" required="true">
          Pezzo environment name
        </ParamField>
        <ParamField path="serverUrl" type="string" required="false" default="https://api.pezzo.ai">
          Pezzo server URL
        </ParamField>
      </div>
    </ParamField>
  </div>
</ResponseField>

<ResponseField name="Pezzo.getPrompt(promptName: string)" type="Function">
  <div style={{ marginLeft: 20 }}>
    <ParamField path="promptName" type="string">
      The name of the prompt to retrieve. The prompt must be deployed to the current environment specified when initializing the Pezzo client.
    </ParamField>
  </div>
</ResponseField>
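As the `getPrompt` reference notes, a prompt resolves only if it is deployed to the environment the client was initialized with. That behavior can be sketched with a toy in-memory model (`PromptStore` and its methods are hypothetical, not part of `@pezzo/client`):

```typescript
// Toy model of environment-scoped prompt resolution (hypothetical, for illustration)
interface PromptVersion {
  content: string;
}

class PromptStore {
  // promptName -> environment -> deployed version
  private deployments = new Map<string, Map<string, PromptVersion>>();

  constructor(private environment: string) {}

  deploy(name: string, environment: string, version: PromptVersion): void {
    const envs = this.deployments.get(name) ?? new Map<string, PromptVersion>();
    envs.set(environment, version);
    this.deployments.set(name, envs);
  }

  getPrompt(name: string): PromptVersion {
    const version = this.deployments.get(name)?.get(this.environment);
    if (!version) {
      // Mirrors the documented requirement: the prompt must be deployed
      // to the environment configured on the client
      throw new Error(`Prompt "${name}" is not deployed to "${this.environment}"`);
    }
    return version;
  }
}
```

A client scoped to `"Production"` would resolve only prompts deployed to that environment, and fail for prompts that are undeployed or deployed elsewhere.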
@@ -0,0 +1,45 @@

---
title: 'What is Pezzo?'
---

Pezzo is a powerful open-source toolkit designed to streamline the process of AI development. It empowers developers and teams to leverage the full potential of AI models in their applications with ease.

<Frame>
  <img src="https://cdn.pezzo.ai/banner.png" />
</Frame>

<br />

# Key Features

- 🗂️ **Centralized Prompt Management:** Manage all AI prompts in one place for maximum visibility and efficiency.
- 📝 **Streamlined Prompt Design & Versioning:** Create, edit, test and version prompts with ease.
- 🚀 **Instant Deployments:** Pezzo allows you to publish your prompts instantly, without requiring a full release cycle.
- 🔍 **Observability:** Access detailed prompt execution history, stats and metrics (duration, prompt cost, completion cost, etc.) for better insights.
- 🛠️ **Troubleshooting:** Effortlessly resolve issues with your prompts. Time travel to retroactively fine-tune failed prompts and commit the fix instantly.
- 💰 **Cost Transparency:** Gain comprehensive cost transparency across all prompts and AI models.
- 💪 **Prompt Consumption:** Reduce code overhead by 90% by consuming your AI prompts using the Pezzo Client, regardless of the model provider.
# Next Steps

<CardGroup cols={2}>
  <Card
    title="Observability"
    icon="eye"
    href="/platform/observability/overview"
  >
    Learn about Pezzo's robust observability features.
  </Card>
  <Card
    title="Prompt Management"
    icon="wrench"
    href="/platform/prompt-management/overview"
  >
    Learn how you can streamline your AI delivery with Pezzo.
  </Card>
  <Card
    title="Recipe: OpenAI With Pezzo"
    icon="code"
    href="/client/integrations/openai"
  >
    Get started with Pezzo and OpenAI in 5 minutes.
  </Card>
</CardGroup>