Update docs
steventkrawczyk committed Jul 17, 2023
1 parent 6fd8617 commit 3e2bfa0
Showing 2 changed files with 43 additions and 10 deletions.
README.md: 5 additions & 5 deletions
@@ -15,25 +15,25 @@ pip install prompttools
You can run a simple example of `prompttools` with the following command:

```
-DEBUG=1 python examples/prompttests/example.py
+DEBUG=1 python examples/prompttests/test_openai_chat.py
```

To run the example outside of `DEBUG` mode, you'll need to bring your own OpenAI API key.
This is because `prompttools` makes a call to OpenAI from your machine. For example:

```
-OPENAI_API_KEY=sk-... python examples/prompttests/example.py
+OPENAI_API_KEY=sk-... python examples/prompttests/test_openai_chat.py
```
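The `DEBUG=1` switch suggests the usual environment-variable pattern: read the flag once and short-circuit the network call when it is set, which is why no API key is needed in that mode. A generic sketch of that pattern, not necessarily how `prompttools` actually implements it:

```
import os

# Generic sketch of a DEBUG env-var switch; prompttools's real
# implementation may differ.
DEBUG = os.environ.get("DEBUG") == "1"

def call_model(prompt: str) -> str:
    if DEBUG:
        return "mocked response"  # canned output, no network call
    # Outside DEBUG mode a real request is made, so OPENAI_API_KEY
    # must be set in the environment.
    import openai
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion["choices"][0]["message"]["content"]
```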

-You can see the full example [here](/examples/prompttests/example.py).
+You can see the full example [here](/examples/prompttests/test_openai_chat.py).


## Using `prompttools`

There are primarily two ways you can use `prompttools` in your LLM workflow:

1. Run experiments in [notebooks](/examples/notebooks/).
-1. Write [unit tests](/examples/prompttests/example.py) and integrate them into your CI/CD workflow [via GitHub Actions](/.github/workflows/post-commit.yaml).
+1. Write [unit tests](/examples/prompttests/test_openai_chat.py) and integrate them into your CI/CD workflow [via GitHub Actions](/.github/workflows/post-commit.yaml).

### Notebooks

@@ -83,7 +83,7 @@ You can also manually enter feedback to evaluate prompts, see [HumanFeedback.ipy
### Unit Tests

-Unit tests in `prompttools` are called `prompttests`. They use the `@prompttest` annotation to transform an evaluation function into an efficient unit test. The `prompttest` framework executes and evaluates experiments so you can test prompts over time. You can see an example test [here](/examples/prompttests/example.py) and an example of that test being used as a GitHub Action [here](/.github/workflows/post-commit.yaml).
+Unit tests in `prompttools` are called `prompttests`. They use the `@prompttest` annotation to transform an evaluation function into an efficient unit test. The `prompttest` framework executes and evaluates experiments so you can test prompts over time. You can see an example test [here](/examples/prompttests/test_openai_chat.py) and an example of that test being used as a GitHub Action [here](/.github/workflows/post-commit.yaml).
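
As a rough illustration of the shape such a test takes, here is a hypothetical sketch; the import path, decorator parameters, and evaluation function below are assumptions for illustration, not the confirmed `prompttools` API. Refer to the linked example file for real usage:

```
# Hypothetical sketch of a prompttest-style unit test. The import path,
# decorator signature, and parameter names here are assumptions for
# illustration; see examples/prompttests/test_openai_chat.py for the
# actual API.
from prompttools.prompttest import prompttest

def contains_greeting(prompt: str, response: str) -> float:
    # Toy evaluation function: full score if the response greets the user.
    return 1.0 if "hello" in response.lower() else 0.0

@prompttest(
    metric_name="contains_greeting",
    eval_fn=contains_greeting,
    prompts=["Say hello to the user.", "Greet the user warmly."],
)
def extract_response(output: dict) -> str:
    # Pull the completion text out of the raw OpenAI chat response.
    return output["choices"][0]["message"]["content"]
```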

### Persisting Results

docs/source/index.rst: 38 additions & 5 deletions
@@ -3,18 +3,51 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-Welcome to prompttools's documentation!
+Welcome to PromptTools!
=======================================

.. toctree::
:maxdepth: 2
:caption: Contents:

+Welcome to `prompttools`, created by [Hegel AI](https://hegel-ai.com/)!
+This repo offers a set of free, open-source tools for testing and experimenting with prompts.
+The core idea is to enable developers to evaluate prompts using familiar interfaces
+like _code_ and _notebooks_.


-Indices and tables
+To stay in touch with us about issues and future updates,
+join the [Discord](https://discord.gg/7KeRPNHGdJ).


+Quickstart
==================

+To install `prompttools`, you can use `pip`:

+```
+pip install prompttools
+```

+You can run a simple example of `prompttools` with the following command:

+```
+DEBUG=1 python examples/prompttests/example.py
+```

+To run the example outside of `DEBUG` mode, you'll need to bring your own OpenAI API key.
+This is because `prompttools` makes a call to OpenAI from your machine. For example:

+```
+OPENAI_API_KEY=sk-... python examples/prompttests/example.py
+```

+You can see the full example [here](https://github.com/hegelai/prompttools/tree/main/examples/prompttests/test_openai_chat.py).

+Using `prompttools`
+===================

-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
+There are primarily two ways you can use `prompttools` in your LLM workflow:

+1. Run experiments in [notebooks](https://github.com/hegelai/prompttools/tree/main/examples/notebooks/).
+1. Write [unit tests](https://github.com/hegelai/prompttools/tree/main/examples/prompttests/test_openai_chat.py) and integrate them into your CI/CD workflow [via GitHub Actions](/.github/workflows/post-commit.yaml).
