diff --git a/README.md b/README.md
index 3f89b69c..387c8859 100644
--- a/README.md
+++ b/README.md
@@ -15,17 +15,17 @@ pip install prompttools
-You can run a simple example of a `prompttools` with the following
+You can run a simple example of `prompttools` with the following command:
 
 ```
-DEBUG=1 python examples/prompttests/example.py
+DEBUG=1 python examples/prompttests/test_openai_chat.py
 ```
 
 To run the example outside of `DEBUG` mode, you'll need to bring your own OpenAI API key.
 This is because `prompttools` makes a call to OpenAI from your machine. For example:
 
 ```
-OPENAI_API_KEY=sk-... python examples/prompttests/example.py
+OPENAI_API_KEY=sk-... python examples/prompttests/test_openai_chat.py
 ```
 
-You can see the full example [here](/examples/prompttests/example.py).
+You can see the full example [here](/examples/prompttests/test_openai_chat.py).
 
 ## Using `prompttools`
@@ -33,7 +33,7 @@ You can see the full example [here](/examples/prompttests/example.py).
 There are primarily two ways you can use `prompttools` in your LLM workflow:
 
 1. Run experiments in [notebooks](/examples/notebooks/).
-1. Write [unit tests](/examples/prompttests/example.py) and integrate them into your CI/CD workflow [via Github Actions](/.github/workflows/post-commit.yaml).
+1. Write [unit tests](/examples/prompttests/test_openai_chat.py) and integrate them into your CI/CD workflow [via Github Actions](/.github/workflows/post-commit.yaml).
 
 ### Notebooks
@@ -83,7 +83,7 @@ You can also manually enter feedback to evaluate prompts, see [HumanFeedback.ipy
 
 ### Unit Tests
 
-Unit tests in `prompttools` are called `prompttests`. They use the `@prompttest` annotation to transform an evaluation function into an efficient unit test. The `prompttest` framework executes and evaluates experiments so you can test prompts over time. You can see an example test [here](/examples/prompttests/example.py) and an example of that test being used as a Github Action [here](/.github/workflows/post-commit.yaml).
+Unit tests in `prompttools` are called `prompttests`. They use the `@prompttest` annotation to transform an evaluation function into an efficient unit test. The `prompttest` framework executes and evaluates experiments so you can test prompts over time. You can see an example test [here](/examples/prompttests/test_openai_chat.py) and an example of that test being used as a Github Action [here](/.github/workflows/post-commit.yaml).
 
 ### Persisting Results
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 7e1f6f5b..4d207695 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -3,18 +3,51 @@
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.
 
-Welcome to prompttools's documentation!
+Welcome to PromptTools!
 =======================================
 
 .. toctree::
    :maxdepth: 2
    :caption: Contents:
 
+Welcome to ``prompttools``, created by `Hegel AI <https://hegel-ai.com/>`_!
+This repo offers a set of free, open-source tools for testing and experimenting with prompts.
+The core idea is to enable developers to evaluate prompts using familiar interfaces
+like *code* and *notebooks*.
 
-Indices and tables
+To stay in touch with us about issues and future updates,
+join the `Discord <https://discord.gg/7KeRPNHGdJ>`_.
+
+
+Quickstart
+==================
+
+To install ``prompttools``, you can use ``pip``:
+
+.. code-block:: console
+
+   pip install prompttools
+
+You can run a simple example of ``prompttools`` with the following command:
+
+.. code-block:: console
+
+   DEBUG=1 python examples/prompttests/test_openai_chat.py
+
+To run the example outside of ``DEBUG`` mode, you'll need to bring your own OpenAI API key.
+This is because ``prompttools`` makes a call to OpenAI from your machine. For example:
+
+.. code-block:: console
+
+   OPENAI_API_KEY=sk-... python examples/prompttests/test_openai_chat.py
+
+You can see the full example `here <https://github.com/hegelai/prompttools/tree/main/examples/prompttests/test_openai_chat.py>`_.
+
-==================
+Using ``prompttools``
+=====================
 
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
+There are primarily two ways you can use ``prompttools`` in your LLM workflow:
+
+1. Run experiments in `notebooks <https://github.com/hegelai/prompttools/tree/main/examples/notebooks/>`_ (a minimal sketch follows this list).
+2. Write `unit tests <https://github.com/hegelai/prompttools/tree/main/examples/prompttests/test_openai_chat.py>`_ and integrate them into your CI/CD workflow `via Github Actions <https://github.com/hegelai/prompttools/blob/main/.github/workflows/post-commit.yaml>`_.
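+
+As a quick illustration of the first workflow, here is a minimal sketch of an
+experiment using the ``OpenAIChatExperiment`` class from the example notebooks
+(the prompt and parameter values below are placeholders; see the linked
+notebooks for canonical usage):
+
+.. code-block:: python
+
+   from prompttools.experiment import OpenAIChatExperiment
+
+   # Each argument is a list of values to test; the experiment runs
+   # every combination of model, message, and temperature.
+   messages = [[{"role": "user", "content": "Who was the first US president?"}]]
+   experiment = OpenAIChatExperiment(
+       ["gpt-3.5-turbo"], messages, temperature=[0.0, 1.0]
+   )
+
+   experiment.run()        # calls the model once per configuration
+   experiment.visualize()  # renders the results as a table
+
+As with the quickstart above, set ``OPENAI_API_KEY`` (or ``DEBUG=1`` to mock
+the API call) before running the experiment.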