diff --git a/README.md b/README.md
index 4238983..65af80b 100644
--- a/README.md
+++ b/README.md
@@ -71,6 +71,7 @@ This is an early work-in-progress. Follow [me on twitter](https://x.com/klntsky)
 - [ ] Runtime system
 - [x] Support variable definition at runtime
 - [x] dynamic model switching (via `MODEL` variable - [example](./python/examples/model-change.metaprompt))
+  - [ ] Multiple chat instances and the ability to switch between them, to distribute data between chat contexts. E.g. `[$chat1: the object is the moon][$chat2: the object is the sun][$chat1: what is the object?]`
 - [ ] exceptions
 - [ ] throwing exceptions
 - [ ] recovering from exceptions
diff --git a/docs/index.md b/docs/index.md
index 89c475d..756b639 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -30,9 +30,9 @@ MetaPrompt's basic use case is substituting parameter values instead of variable
 Write me a poem about [:subject] in the style of [:style]
 ```
 
-## Meta-prompting
+## Prompt rewriting
 
-Meta-prompting is a technique of asking an LLM to create/modify/expand an LLM prompt.
+Prompt rewriting is a technique of asking an LLM to create/modify/expand an LLM prompt.
 
 - Dynamically crafting task-specific prompts based on a set of high level principles
 - Modifying prompts to increase accuracy
diff --git a/python/examples/choose-model.metaprompt b/python/examples/choose-model.metaprompt
new file mode 100644
index 0000000..fe87c62
--- /dev/null
+++ b/python/examples/choose-model.metaprompt
@@ -0,0 +1,10 @@
+[$ You must choose the best LLM for a given prompt, considering all the options listed below:
+
+- `gpt-3.5-turbo` - Best for tasks that don't require reasoning, like data processing or generation.
+- `gpt-4o` - Offers a high level of intelligence and strong performance. Suitable for programming tasks that require reasoning about software architecture.
+- `gpt-4o-mini` - Suitable for programming tasks that involve straightforward implementations, like utility functions.
+
+Output format: output ONLY the model ID, without quotes.
+
+The prompt: [:prompt]
+]
diff --git a/python/examples/model-selection-demo.metaprompt b/python/examples/model-selection-demo.metaprompt
new file mode 100644
index 0000000..d3c2281
--- /dev/null
+++ b/python/examples/model-selection-demo.metaprompt
@@ -0,0 +1,3 @@
+[:MODEL=[:use ./choose-model :prompt=[:prompt]]]
+Selected model: [:MODEL]
+[$[:prompt]]
\ No newline at end of file
diff --git a/python/src/runtime.py b/python/src/runtime.py
index abe9792..25eeefd 100644
--- a/python/src/runtime.py
+++ b/python/src/runtime.py
@@ -12,7 +12,7 @@ def __init__(self, config, env):
         self.cwd = os.getcwd()
 
     def get_current_model(self):
-        return self.env.get("MODEL")
+        return (self.env.get("MODEL") or "").strip()
 
     def set_variable(self, var_name, value):
         self.env.set(var_name, value)
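The `runtime.py` hunk calls `.strip()` on the value returned by `self.env.get("MODEL")`, which would raise `AttributeError` whenever `MODEL` is unset and `get` returns `None`. A minimal sketch of a None-safe variant, assuming `Env` exposes `get`/`set` methods as the surrounding hunk suggests (the `Env` class below is a hypothetical stand-in, not the project's actual implementation):

```python
class Env:
    """Hypothetical stand-in for the runtime's variable environment (assumed get/set interface)."""

    def __init__(self):
        self._vars = {}

    def get(self, name):
        # Returns None when the variable is unset, mirroring dict.get.
        return self._vars.get(name)

    def set(self, name, value):
        self._vars[name] = value


def get_current_model(env):
    # Guard before stripping: calling .strip() on None raises AttributeError.
    model = env.get("MODEL")
    return model.strip() if model is not None else None
```

With this guard, an unset `MODEL` yields `None` rather than a crash, while raw LLM output such as `" gpt-4o-mini\n"` (e.g. from `choose-model.metaprompt`) is normalized before use.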