Improve README and basic notebook example
NivekT committed Jul 24, 2023
1 parent e0286c7 commit 20f58de
Showing 2 changed files with 33 additions and 20 deletions.
44 changes: 28 additions & 16 deletions README.md
@@ -2,7 +2,7 @@
<a href="https://hegel-ai.com"><img src="https://upload.wikimedia.org/wikipedia/commons/5/51/Owl_of_Minerva.svg" width="75" height="75"></a>
</p>
<h1 align="center">
PromptTools
</h1>
<p align="center">
:wrench: Test and experiment with prompts, LLMs, and vector databases. :hammer:
@@ -32,22 +32,13 @@ To install `prompttools`, you can use `pip`:
pip install prompttools
```

-You can run a simple example of a `prompttools` with the following
+You can run a simple example of `prompttools` locally with the following:

```
-DEBUG=1 python examples/prompttests/test_openai_chat.py
+git clone https://github.com/hegelai/prompttools.git
+cd prompttools && jupyter notebook examples/notebooks/OpenAIChatExperiment.ipynb
```

To run the example outside of `DEBUG` mode, you'll need to bring your own OpenAI API key.
This is because `prompttools` makes a call to OpenAI from your machine. For example:

```
OPENAI_API_KEY=sk-... python examples/prompttests/test_openai_chat.py
```

You can see the full example [here](/examples/prompttests/test_openai_chat.py).


## Using `prompttools`

There are primarily two ways you can use `prompttools` in your LLM workflow:
@@ -99,9 +90,9 @@ You can also manually enter feedback to evaluate prompts, see [HumanFeedback.ipy

![image](img/feedback.png)

-> Note: Above we used an `ExperimentationHarness`. Under the hood, that harness uses an `Experiment` to construct and make API calls to LLMs. The harness is responsible for managing higher level abstractions, like prompt templates or system prompts. To see how experiments work at a low level, [see this example](/examples/notebooks/BasicExperiment.ipynb).
+> Note: Above we used an `ExperimentationHarness`. Under the hood, that harness uses an `Experiment` to construct and make API calls to LLMs. The harness is responsible for managing higher level abstractions, like prompt templates or system prompts. To see how experiments work at a low level, [see this example](/examples/notebooks/OpenAIChatExperiment.ipynb).
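
As a rough sketch of that low-level usage (mirroring the `OpenAIChatExperiment` setup from the notebook in this commit; the model names and system message are placeholders, and the `run()`/`visualize()` calls are assumed from the example notebooks rather than confirmed here):

```python
from prompttools.experiment import OpenAIChatExperiment

# Each argument is a list of values to test; the experiment runs every combination.
# Model names and the system message below are placeholders.
models = ["gpt-3.5-turbo", "gpt-4"]
messages = [[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who was the first president?"},
]]
temperatures = [0.0, 1.0]

experiment = OpenAIChatExperiment(models, messages, temperature=temperatures)
experiment.run()        # make the API calls (one per combination)
experiment.visualize()  # display the results table
```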
-### Unit Tests
+### Using `prompttools` for Continuous Testing

Unit tests in `prompttools` are called `prompttests`. They use the `@prompttest` annotation to transform an evaluation function into an efficient unit test. The `prompttest` framework executes and evaluates experiments so you can test prompts over time. You can see an example test [here](/examples/prompttests/test_openai_chat.py) and an example of that test being used as a GitHub Action [here](/.github/workflows/post-commit.yaml).

@@ -111,6 +102,24 @@ To persist the results of your tests and experiments, you can export your `Exper
`to_json`, or `to_lora_json`. We are happy to further discuss your use cases, pain points, and what export
options may be useful for you.
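
For example, a minimal sketch of exporting results after an experiment has run (the output-path arguments are assumptions based on the method names; check the API reference for exact signatures):

```python
# Sketch: persist experiment results in different formats.
experiment.to_csv("results.csv")
experiment.to_json("results.json")
experiment.to_lora_json("lora_dataset.json")
```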

### Setting API keys

If you would like to use a remote API (e.g. OpenAI, Anthropic), you will need to bring your own API key for that provider.
This is because `prompttools` makes calls to those APIs directly from your machine.

In Python, you can set:
```python
import os
os.environ['OPENAI_API_KEY'] = ""
```

From the command line:
```
OPENAI_API_KEY=sk-... python examples/prompttests/test_openai_chat.py
```

You will find [more examples of these in our notebooks](/examples/notebooks/).

### Documentation

Our [documentation website](https://prompttools.readthedocs.io/en/latest/index.html) contains the full API reference
@@ -134,9 +143,12 @@ You can then proceed to run [our examples](/examples/notebooks/).

### Frequently Asked Questions (FAQs)

-1. Will this library forward my LLM calls to a server before sending it to OpenAI/Anthropic/etc?
+1. Will this library forward my LLM calls to a server before sending them to OpenAI, Anthropic, etc.?
- No, the source code will be executed on your machine. Any call to LLM APIs will be directly executed from your machine without any forwarding.

2. Does `prompttools` store my API keys or LLM inputs and outputs to a server?
- No, all data stays on your local machine.

## Contributing

We welcome PRs and suggestions! Don't hesitate to open a PR/issue or to reach out to us [via email](mailto:[email protected]).
9 changes: 5 additions & 4 deletions examples/notebooks/OpenAIChatExperiment.ipynb
@@ -50,8 +50,8 @@
"outputs": [],
"source": [
"import os\n",
"os.environ['DEBUG']=\"1\"\n",
"os.environ['OPENAI_API_KEY'] = \"\""
"os.environ['DEBUG']=\"1\" # Set this to \"\" to call OpenAI's API\n",
"os.environ['OPENAI_API_KEY'] = \"\" # Insert your key here"
]
},
{
@@ -69,7 +69,7 @@
"metadata": {},
"outputs": [],
"source": [
"from typing import Dict, List, Tuple\n",
"from typing import Dict, List\n",
"from prompttools.experiment import OpenAIChatExperiment"
]
},
@@ -102,6 +102,7 @@
" {\"role\": \"user\", \"content\": \"Who was the first president?\"},\n",
"]]\n",
"temperatures = [0.0, 1.0]\n",
"# You can add more parameters that you'd like to test here.\n",
"\n",
"experiment = OpenAIChatExperiment(models, messages, temperature=temperatures)"
]
@@ -157,7 +158,7 @@
"\n",
"\n",
"def measure_similarity(\n",
" messages: List[Dict[str, str]], results: Dict, metadata: Dict\n",
" messages: List[Dict[str, str]], results: Dict, _metadata: Dict\n",
") -> float:\n",
" \"\"\"\n",
" A simple test that checks semantic similarity between the user input\n",
Expand Down
