update docs to reference litellm
snopoke committed May 20, 2024
1 parent 34bf25f commit 3009a9e
Showing 1 changed file with 10 additions and 11 deletions: ai.md
@@ -12,15 +12,15 @@ This section covers how it works and the various supported options.

You can choose between two options for your LLM chat: OpenAI and LLM (generic).
The OpenAI option limits you to OpenAI models, but supports streaming and asynchronous API access.
The generic "LLM" option uses the [llm library](https://github.com/simonw/llm) and can be used with many different
models---including local ones. However, it does not yet support streaming responses.
The generic "LLM" option uses the [litellm library](https://docs.litellm.ai/docs/) and can be used with many different
models---including local ones.

We recommend choosing "OpenAI" unless you know you want to use a different model.

### Configuring OpenAI

If you're using OpenAI, you need to set `OPENAI_API_KEY` in your environment or settings file (`.env` in development).
-You can also change the model used by setting `OPENAI_MODEL`, which defualts to `"gpt-3.5-turbo"`.
+You can also change the model used by setting `OPENAI_MODEL`, which defaults to `"gpt-3.5-turbo"`.

See [this page](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key) for help
finding your OpenAI API key.
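
For illustration, the two settings above could be wired up in `settings.py` roughly as follows. This is a sketch assuming the project reads configuration with django-environ's `env()` helper, as the `LLM_MODELS` example further down does; it is not necessarily the template's exact code.

```python
# Sketch only: assumes django-environ, mirroring the env() helper used
# in the LLM_MODELS example below.
import environ

env = environ.Env()
environ.Env.read_env()  # picks up .env in development

OPENAI_API_KEY = env("OPENAI_API_KEY", default="")
OPENAI_MODEL = env("OPENAI_MODEL", default="gpt-3.5-turbo")  # documented default
```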
@@ -32,20 +32,19 @@ values in your `settings.py`. For example:

```python
LLM_MODELS = {
"gpt4": {"key": env("OPENAI_API_KEY", default="")},
"claude-3-opus": {"key": env("ANTHROPIC_API_KEY", default="")}, # requires llm-claude-3
"Meta-Llama-3-8B-Instruct": {}, # requires llm-gpt4all
"gpt-3.5-turbo": {"api_key": env("OPENAI_API_KEY", default="")},
"gpt4": {"api_key": env("OPENAI_API_KEY", default="")},
"claude-3-opus-20240229": {"api_key": env("ANTHROPIC_API_KEY", default="")},
"ollama_chat/llama3": {"api_base": env("OLLAMA_API_BASE", default="http://localhost:11434")}, # requires a running ollama instance
}
DEFAULT_LLM_MODEL = "gpt4"
DEFAULT_LLM_MODEL = env("DEFAULT_LLM_MODEL", default="gpt4")
```

The chat UI will use whatever is set in `DEFAULT_LLM_MODEL` out-of-the-box, but you can quickly change it
to another model to try different options.
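
To make the mapping concrete: an entry in `LLM_MODELS` translates directly into a litellm `completion()` call, with the dict values passed as keyword arguments. The settings lookup below is illustrative; how the template itself resolves `DEFAULT_LLM_MODEL` is an assumption here.

```python
import os
from litellm import completion

# Mirrors the settings above; this lookup is illustrative, not the
# template's actual plumbing.
LLM_MODELS = {"gpt-3.5-turbo": {"api_key": os.environ.get("OPENAI_API_KEY", "")}}
DEFAULT_LLM_MODEL = "gpt-3.5-turbo"

response = completion(
    model=DEFAULT_LLM_MODEL,
    messages=[{"role": "user", "content": "Say hello."}],
    **LLM_MODELS[DEFAULT_LLM_MODEL],  # api_key, api_base, etc.
)
print(response.choices[0].message.content)
```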

-Any models that you add will need to be installed as [llm plugins](https://llm.datasette.io/en/stable/plugins/index.html).
-You can do this by putting them in your requirements files, [as outlined here](./python.md#adding-or-removing-a-package).
-For example, to use Claude 3 you need to add the [`llm-claude-3` plugin](https://github.com/simonw/llm-claude-3),
-and to use local models like Llama 3, you need [`llm-gpt4all`](https://github.com/simonw/llm-gpt4all).
+For further reading, see the documentation of the [litellm Python API](https://docs.litellm.ai/docs/completion),
+and [litellm providers](https://docs.litellm.ai/docs/providers).

-For further reading, see the documentation of the [llm Python API](https://llm.datasette.io/en/stable/python-api.html),
-and [llm generally](https://llm.datasette.io/en/stable/index.html).
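
On local models: litellm routes requests based on the provider prefix in the model name, so the `ollama_chat/llama3` entry above ends up talking to a local Ollama server rather than a hosted API. A minimal sketch, assuming Ollama is running and the `llama3` model has been pulled:

```python
from litellm import completion

# "ollama_chat/" routes the request to a local Ollama server.
# Assumes `ollama serve` is running and `ollama pull llama3` has been done.
response = completion(
    model="ollama_chat/llama3",
    messages=[{"role": "user", "content": "Say hello."}],
    api_base="http://localhost:11434",
)
print(response.choices[0].message.content)
```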