From 6645b6afe8c5f65f5c1466ece2f2dbeb4baaa017 Mon Sep 17 00:00:00 2001 From: klntsky Date: Sat, 23 Nov 2024 23:28:21 +0000 Subject: [PATCH] deploy: 836defb10d92868ea670de955920ebfc72cc2292 --- index.html | 2 +- search/search_index.json | 2 +- syntax/index.html | 7 +------ 3 files changed, 3 insertions(+), 8 deletions(-) diff --git a/index.html b/index.html index aea5855..b0b7e05 100644 --- a/index.html +++ b/index.html @@ -170,5 +170,5 @@

Links& diff --git a/search/search_index.json b/search/search_index.json index 256f4f6..34fe87b 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Overview \u00b6 Metaprompt is a template language for LLM prompt automation, reuse and structuring, with support for writing prompts with prompts. It adds a number of syntactic constructs to plaintext prompts, that get expanded at run time, producing textual output: variables conditionals LLM calls function calls etc. Project status \u00b6 !!! This is an early work-in-progress !!! Not all of the described features have been implemented. The repository README will give you more details. Use cases \u00b6 Templating \u00b6 MetaPrompt's basic use case is substituting parameter values instead of variable names embedded in a prompt: Write me a poem about [:subject] in the style of [:style] Prompt rewriting \u00b6 Prompt rewriting is a technique of asking an LLM to create/modify/expand an LLM prompt. Dynamically crafting task-specific prompts based on a set of high level principles Modifying prompts to increase accuracy Securing inputs from prompt injection attacks and for content moderation Selecting the most suitable model for a task Quick example: [$ You are an LLM prompt engineer. Improve this prompt by adding specific details: [:prompt] ] Prompt structuring \u00b6 A module system and a package system enable parameterized prompt reuse and publishing. Knowledge base maintenance \u00b6 Organize your knowledge base in the form of multiple documents loaded conditionally on demand. Links \u00b6 GitHub repo Documentation Author's twitter","title":"Home"},{"location":"#overview","text":"Metaprompt is a template language for LLM prompt automation, reuse and structuring, with support for writing prompts with prompts. 
It adds a number of syntactic constructs to plaintext prompts, that get expanded at run time, producing textual output: variables conditionals LLM calls function calls etc.","title":"Overview"},{"location":"#project-status","text":"!!! This is an early work-in-progress !!! Not all of the described features have been implemented. The repository README will give you more details.","title":"Project status"},{"location":"#use-cases","text":"","title":"Use cases"},{"location":"#templating","text":"MetaPrompt's basic use case is substituting parameter values instead of variable names embedded in a prompt: Write me a poem about [:subject] in the style of [:style]","title":"Templating"},{"location":"#prompt-rewriting","text":"Prompt rewriting is a technique of asking an LLM to create/modify/expand an LLM prompt. Dynamically crafting task-specific prompts based on a set of high level principles Modifying prompts to increase accuracy Securing inputs from prompt injection attacks and for content moderation Selecting the most suitable model for a task Quick example: [$ You are an LLM prompt engineer. Improve this prompt by adding specific details: [:prompt] ]","title":"Prompt rewriting"},{"location":"#prompt-structuring","text":"A module system and a package system enable parameterized prompt reuse and publishing.","title":"Prompt structuring"},{"location":"#knowledge-base-maintenance","text":"Organize your knowledge base in the form of multiple documents loaded conditionally on demand.","title":"Knowledge base maintenance"},{"location":"#links","text":"GitHub repo Documentation Author's twitter","title":"Links"},{"location":"syntax/","text":"Text \u00b6 A textual prompt is usually a valid metaprompt: Hi, LLM! How are you feeling today? will be expanded to the same string, because it does not contain any MetaPrompt constructs. Variables \u00b6 Variables should be referenced using this sintax: [:variable_name] . Variable names should match [a-zA-Z_][a-zA-Z0-9_]* . 
The syntax for assignments is [:variable_name=any expression] . Optional assignments use ?= instead of = - they update the value only if a variable is unset. Comments \u00b6 [# Text for the human reader can be written like this. Comments must be well-formed metaprompt expressions too - in the future comment parse trees will be used to convey additional semantic info (e.g. documentation). Comments are ignored during evaluation. ] Conditionals \u00b6 [:if the sky is sometimes blue :then this... :else that... ] :if expressions will be expanded at runtime. First, the following text will be fed to an LLM: Please determine if the following statement is true. Do not write any other output, answer just \"true\" or \"false\". The statement: the sky is sometimes blue The answer will determine the execution branch. If the answer is not literally \"true\" or \"false\", an exception will be thrown after a few retries Meta-prompting \u00b6 LLM says: [$ Hi, LLM! How are you today?] The [$ prompt will be executed and its output will be inserted at its position during expansion. This enables powerful techniques of prompt rewriting: [$ [$ Improve this LLM prompt: [:prompt]]] Notice the double nesting of [$ - the improved prompt will be fed back into an LLM. Chat history \u00b6 Chat history can be preserved by assigning an ID to a prompt: [some_chat_id$ ... ] . Subsequent invocations with the same chat ID within the same module will have a memory of the preceding conversation. Chat IDs are actually variables that contain an entire chat history (example usage) . Escaping \u00b6 Normally, you would not need escaping, e.g. [:foo will evaluate to [:foo as text. But if you want to specify a MetaPrompt expression literally, use \\ before the [ character: \\[:foo] will evaluate to [:foo] as text, without special meaning. 
You can escape \\ with another \\ , but only if it is positioned before a [ : \\foo \u2192 (text \\foo ) \\\\ \u2192 (text \\\\ ) \\[:foo] \u2192 (text [:foo] ) \\\\[:foo] \u2192 (text \\\\ ) (variable foo ) \\[some text -> (text [some text ) - note that in this case the \\ character disappears, although escaping does not happen because [some text is not a special MetaPrompt construct. Modules \u00b6 Every .metaprompt file is a module. Conceptually, a module is a function that accepts a number of arguments, runs the executable parts, and returns text. File imports \u00b6 The following expression will include ./relative-import.metaprompt file (relative to the directory of the file, NOT to the current working dir): [:use ./relative-import] Package imports \u00b6 NOT IMPLEMENTED Passing parameters \u00b6 Unbound variables used in a module are its parameters, that must be provided when calling the module. Example Consider a file named hello.metaprompt : Hello, [:what]! [# `what` is a required parameter ] [:who?=you] [# ^ `who` is NOT a required parameter, because it is assigned before first use. However, optional assignment is used, so the default value can be overridden from the caller module ] How are [:who] feeling today? The hello module can be used from another module : [:use ./hello :what=world :who=we] Special variables \u00b6 MODEL switching \u00b6 MODEL variable is used to switch LLM models on the fly (example) . MODEL switching only works before an [:if ... or a [$ ... ] block: [:MODEL=gpt-4o] [$ will be run in 4o, [:MODEL=gpt-3.5-turbo] [$ but this prompt will run in 3.5-turbo ] ] Dynamic model selection based on a given task description allows to save costs by avoiding calls to costly models when possible (example) . ROLE switching \u00b6 ROLE is a special variable used to control LLM input \"role\". ROLE can be assigned to one of three values: system : defines the behavior, scope, and context of the LLM. 
user : represents the individual interacting with the LLM. assistant : represents the LLM itself, responding to the user within the context defined by the system. See OpenAI docs for more info on roles. (example) Live STATUS update \u00b6 STATUS variable provides a way to set a status line that is visible in the terminal. Useful to make the user aware of what is happening when no output is being generated: [:STATUS=running a marathon] [$ ... long running task ] [:STATUS=launching the rockets] [$ ... another long running task ] (example)","title":"Syntax"},{"location":"syntax/#text","text":"A textual prompt is usually a valid metaprompt: Hi, LLM! How are you feeling today? will be expanded to the same string, because it does not contain any MetaPrompt constructs.","title":"Text"},{"location":"syntax/#variables","text":"Variables should be referenced using this sintax: [:variable_name] . Variable names should match [a-zA-Z_][a-zA-Z0-9_]* . The syntax for assignments is [:variable_name=any expression] . Optional assignments use ?= instead of = - they update the value only if a variable is unset.","title":"Variables"},{"location":"syntax/#comments","text":"[# Text for the human reader can be written like this. Comments must be well-formed metaprompt expressions too - in the future comment parse trees will be used to convey additional semantic info (e.g. documentation). Comments are ignored during evaluation. ]","title":"Comments"},{"location":"syntax/#conditionals","text":"[:if the sky is sometimes blue :then this... :else that... ] :if expressions will be expanded at runtime. First, the following text will be fed to an LLM: Please determine if the following statement is true. Do not write any other output, answer just \"true\" or \"false\". The statement: the sky is sometimes blue The answer will determine the execution branch. 
If the answer is not literally \"true\" or \"false\", an exception will be thrown after a few retries","title":"Conditionals"},{"location":"syntax/#meta-prompting","text":"LLM says: [$ Hi, LLM! How are you today?] The [$ prompt will be executed and its output will be inserted at its position during expansion. This enables powerful techniques of prompt rewriting: [$ [$ Improve this LLM prompt: [:prompt]]] Notice the double nesting of [$ - the improved prompt will be fed back into an LLM.","title":"Meta-prompting"},{"location":"syntax/#chat-history","text":"Chat history can be preserved by assigning an ID to a prompt: [some_chat_id$ ... ] . Subsequent invocations with the same chat ID within the same module will have a memory of the preceding conversation. Chat IDs are actually variables that contain an entire chat history (example usage) .","title":"Chat history"},{"location":"syntax/#escaping","text":"Normally, you would not need escaping, e.g. [:foo will evaluate to [:foo as text. But if you want to specify a MetaPrompt expression literally, use \\ before the [ character: \\[:foo] will evaluate to [:foo] as text, without special meaning. You can escape \\ with another \\ , but only if it is positioned before a [ : \\foo \u2192 (text \\foo ) \\\\ \u2192 (text \\\\ ) \\[:foo] \u2192 (text [:foo] ) \\\\[:foo] \u2192 (text \\\\ ) (variable foo ) \\[some text -> (text [some text ) - note that in this case the \\ character disappears, although escaping does not happen because [some text is not a special MetaPrompt construct.","title":"Escaping"},{"location":"syntax/#modules","text":"Every .metaprompt file is a module. 
Conceptually, a module is a function that accepts a number of arguments, runs the executable parts, and returns text.","title":"Modules"},{"location":"syntax/#file-imports","text":"The following expression will include ./relative-import.metaprompt file (relative to the directory of the file, NOT to the current working dir): [:use ./relative-import]","title":"File imports"},{"location":"syntax/#package-imports","text":"NOT IMPLEMENTED","title":"Package imports"},{"location":"syntax/#passing-parameters","text":"Unbound variables used in a module are its parameters, that must be provided when calling the module. Example Consider a file named hello.metaprompt : Hello, [:what]! [# `what` is a required parameter ] [:who?=you] [# ^ `who` is NOT a required parameter, because it is assigned before first use. However, optional assignment is used, so the default value can be overridden from the caller module ] How are [:who] feeling today? The hello module can be used from another module : [:use ./hello :what=world :who=we]","title":"Passing parameters"},{"location":"syntax/#special-variables","text":"","title":"Special variables"},{"location":"syntax/#model-switching","text":"MODEL variable is used to switch LLM models on the fly (example) . MODEL switching only works before an [:if ... or a [$ ... ] block: [:MODEL=gpt-4o] [$ will be run in 4o, [:MODEL=gpt-3.5-turbo] [$ but this prompt will run in 3.5-turbo ] ] Dynamic model selection based on a given task description allows to save costs by avoiding calls to costly models when possible (example) .","title":"MODEL switching"},{"location":"syntax/#role-switching","text":"ROLE is a special variable used to control LLM input \"role\". ROLE can be assigned to one of three values: system : defines the behavior, scope, and context of the LLM. user : represents the individual interacting with the LLM. assistant : represents the LLM itself, responding to the user within the context defined by the system. 
See OpenAI docs for more info on roles. (example)","title":"ROLE switching"},{"location":"syntax/#live-status-update","text":"STATUS variable provides a way to set a status line that is visible in the terminal. Useful to make the user aware of what is happening when no output is being generated: [:STATUS=running a marathon] [$ ... long running task ] [:STATUS=launching the rockets] [$ ... another long running task ] (example)","title":"Live STATUS update"},{"location":"tutorial/","text":"","title":"Tutorial"}]} \ No newline at end of file +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Overview \u00b6 Metaprompt is a template language for LLM prompt automation, reuse and structuring, with support for writing prompts with prompts. It adds a number of syntactic constructs to plaintext prompts, that get expanded at run time, producing textual output: variables conditionals LLM calls function calls etc. Project status \u00b6 !!! This is an early work-in-progress !!! Not all of the described features have been implemented. The repository README will give you more details. Use cases \u00b6 Templating \u00b6 MetaPrompt's basic use case is substituting parameter values instead of variable names embedded in a prompt: Write me a poem about [:subject] in the style of [:style] Prompt rewriting \u00b6 Prompt rewriting is a technique of asking an LLM to create/modify/expand an LLM prompt. Dynamically crafting task-specific prompts based on a set of high level principles Modifying prompts to increase accuracy Securing inputs from prompt injection attacks and for content moderation Selecting the most suitable model for a task Quick example: [$ You are an LLM prompt engineer. Improve this prompt by adding specific details: [:prompt] ] Prompt structuring \u00b6 A module system and a package system enable parameterized prompt reuse and publishing. 
Knowledge base maintenance \u00b6 Organize your knowledge base in the form of multiple documents loaded conditionally on demand. Links \u00b6 GitHub repo Documentation Author's twitter","title":"Home"},{"location":"#overview","text":"Metaprompt is a template language for LLM prompt automation, reuse and structuring, with support for writing prompts with prompts. It adds a number of syntactic constructs to plaintext prompts, which get expanded at run time, producing textual output: variables conditionals LLM calls function calls etc.","title":"Overview"},{"location":"#project-status","text":"!!! This is an early work-in-progress !!! Not all of the described features have been implemented. The repository README will give you more details.","title":"Project status"},{"location":"#use-cases","text":"","title":"Use cases"},{"location":"#templating","text":"MetaPrompt's basic use case is substituting parameter values instead of variable names embedded in a prompt: Write me a poem about [:subject] in the style of [:style]","title":"Templating"},{"location":"#prompt-rewriting","text":"Prompt rewriting is a technique of asking an LLM to create/modify/expand an LLM prompt. Dynamically crafting task-specific prompts based on a set of high-level principles Modifying prompts to increase accuracy Securing inputs from prompt injection attacks and for content moderation Selecting the most suitable model for a task Quick example: [$ You are an LLM prompt engineer. 
Improve this prompt by adding specific details: [:prompt] ]","title":"Prompt rewriting"},{"location":"#prompt-structuring","text":"A module system and a package system enable parameterized prompt reuse and publishing.","title":"Prompt structuring"},{"location":"#knowledge-base-maintenance","text":"Organize your knowledge base in the form of multiple documents loaded conditionally on demand.","title":"Knowledge base maintenance"},{"location":"#links","text":"GitHub repo Documentation Author's twitter","title":"Links"},{"location":"syntax/","text":"Text \u00b6 A textual prompt is usually a valid metaprompt: Hi, LLM! How are you feeling today? will be expanded to the same string, because it does not contain any MetaPrompt constructs. Variables \u00b6 Variables should be referenced using this syntax: [:variable_name] . Variable names should match [a-zA-Z_][a-zA-Z0-9_]* . The syntax for assignments is [:variable_name=any expression] . Optional assignments use ?= instead of = - they update the value only if a variable is unset. Comments \u00b6 [# Text for the human reader can be written like this. Comments must be well-formed metaprompt expressions too - in the future comment parse trees will be used to convey additional semantic info (e.g. documentation). Comments are ignored during evaluation. ] Conditionals \u00b6 [:if the sky is sometimes blue :then this... :else that... ] :if expressions will be expanded at runtime. First, the following text will be fed to an LLM: Please determine if the following statement is true. Do not write any other output, answer just \"true\" or \"false\". The statement: the sky is sometimes blue The answer will determine the execution branch. If the answer is not literally \"true\" or \"false\", an exception will be thrown after a few retries Meta-prompting \u00b6 LLM says: [$ Hi, LLM! How are you today?] The [$ prompt will be executed and its output will be inserted at its position during expansion. 
This enables powerful techniques of prompt rewriting: [$ [$ Improve this LLM prompt: [:prompt]]] Notice the double nesting of [$ - the improved prompt will be fed back into an LLM. Chat history \u00b6 Chat history can be preserved by assigning an ID to a prompt: [some_chat_id$ ... ] . Subsequent invocations with the same chat ID within the same module will have a memory of the preceding conversation. Chat IDs are actually variables that contain an entire chat history (example usage) . Escaping \u00b6 [ and ] must be escaped using \\\\ . Modules \u00b6 Every .metaprompt file is a module. Conceptually, a module is a function that accepts a number of arguments, runs the executable parts, and returns text. File imports \u00b6 The following expression will include ./relative-import.metaprompt file (relative to the directory of the file, NOT to the current working dir): [:use ./relative-import] Package imports \u00b6 NOT IMPLEMENTED Passing parameters \u00b6 Unbound variables used in a module are its parameters, which must be provided when calling the module. Example Consider a file named hello.metaprompt : Hello, [:what]! [# `what` is a required parameter ] [:who?=you] [# ^ `who` is NOT a required parameter, because it is assigned before first use. However, optional assignment is used, so the default value can be overridden from the caller module ] How are [:who] feeling today? The hello module can be used from another module : [:use ./hello :what=world :who=we] Special variables \u00b6 MODEL switching \u00b6 MODEL variable is used to switch LLM models on the fly (example) . MODEL switching only works before an [:if ... or a [$ ... ] block: [:MODEL=gpt-4o] [$ will be run in 4o, [:MODEL=gpt-3.5-turbo] [$ but this prompt will run in 3.5-turbo ] ] Dynamic model selection based on a given task description allows saving costs by avoiding calls to costly models when possible (example) . ROLE switching \u00b6 ROLE is a special variable used to control LLM input \"role\". 
ROLE can be assigned to one of three values: system : defines the behavior, scope, and context of the LLM. user : represents the individual interacting with the LLM. assistant : represents the LLM itself, responding to the user within the context defined by the system. See OpenAI docs for more info on roles. (example) Live STATUS update \u00b6 STATUS variable provides a way to set a status line that is visible in the terminal. Useful to make the user aware of what is happening when no output is being generated: [:STATUS=running a marathon] [$ ... long running task ] [:STATUS=launching the rockets] [$ ... another long running task ] (example)","title":"Syntax"},{"location":"syntax/#text","text":"A textual prompt is usually a valid metaprompt: Hi, LLM! How are you feeling today? will be expanded to the same string, because it does not contain any MetaPrompt constructs.","title":"Text"},{"location":"syntax/#variables","text":"Variables should be referenced using this syntax: [:variable_name] . Variable names should match [a-zA-Z_][a-zA-Z0-9_]* . The syntax for assignments is [:variable_name=any expression] . Optional assignments use ?= instead of = - they update the value only if a variable is unset.","title":"Variables"},{"location":"syntax/#comments","text":"[# Text for the human reader can be written like this. Comments must be well-formed metaprompt expressions too - in the future comment parse trees will be used to convey additional semantic info (e.g. documentation). Comments are ignored during evaluation. ]","title":"Comments"},{"location":"syntax/#conditionals","text":"[:if the sky is sometimes blue :then this... :else that... ] :if expressions will be expanded at runtime. First, the following text will be fed to an LLM: Please determine if the following statement is true. Do not write any other output, answer just \"true\" or \"false\". The statement: the sky is sometimes blue The answer will determine the execution branch. 
If the answer is not literally \"true\" or \"false\", an exception will be thrown after a few retries","title":"Conditionals"},{"location":"syntax/#meta-prompting","text":"LLM says: [$ Hi, LLM! How are you today?] The [$ prompt will be executed and its output will be inserted at its position during expansion. This enables powerful techniques of prompt rewriting: [$ [$ Improve this LLM prompt: [:prompt]]] Notice the double nesting of [$ - the improved prompt will be fed back into an LLM.","title":"Meta-prompting"},{"location":"syntax/#chat-history","text":"Chat history can be preserved by assigning an ID to a prompt: [some_chat_id$ ... ] . Subsequent invocations with the same chat ID within the same module will have a memory of the preceding conversation. Chat IDs are actually variables that contain an entire chat history (example usage) .","title":"Chat history"},{"location":"syntax/#escaping","text":"[ and ] must be escaped using \\\\ .","title":"Escaping"},{"location":"syntax/#modules","text":"Every .metaprompt file is a module. Conceptually, a module is a function that accepts a number of arguments, runs the executable parts, and returns text.","title":"Modules"},{"location":"syntax/#file-imports","text":"The following expression will include ./relative-import.metaprompt file (relative to the directory of the file, NOT to the current working dir): [:use ./relative-import]","title":"File imports"},{"location":"syntax/#package-imports","text":"NOT IMPLEMENTED","title":"Package imports"},{"location":"syntax/#passing-parameters","text":"Unbound variables used in a module are its parameters, which must be provided when calling the module. Example Consider a file named hello.metaprompt : Hello, [:what]! [# `what` is a required parameter ] [:who?=you] [# ^ `who` is NOT a required parameter, because it is assigned before first use. However, optional assignment is used, so the default value can be overridden from the caller module ] How are [:who] feeling today? 
The hello module can be used from another module : [:use ./hello :what=world :who=we]","title":"Passing parameters"},{"location":"syntax/#special-variables","text":"","title":"Special variables"},{"location":"syntax/#model-switching","text":"MODEL variable is used to switch LLM models on the fly (example) . MODEL switching only works before an [:if ... or a [$ ... ] block: [:MODEL=gpt-4o] [$ will be run in 4o, [:MODEL=gpt-3.5-turbo] [$ but this prompt will run in 3.5-turbo ] ] Dynamic model selection based on a given task description allows saving costs by avoiding calls to costly models when possible (example) .","title":"MODEL switching"},{"location":"syntax/#role-switching","text":"ROLE is a special variable used to control LLM input \"role\". ROLE can be assigned to one of three values: system : defines the behavior, scope, and context of the LLM. user : represents the individual interacting with the LLM. assistant : represents the LLM itself, responding to the user within the context defined by the system. See OpenAI docs for more info on roles. (example)","title":"ROLE switching"},{"location":"syntax/#live-status-update","text":"STATUS variable provides a way to set a status line that is visible in the terminal. Useful to make the user aware of what is happening when no output is being generated: [:STATUS=running a marathon] [$ ... long running task ] [:STATUS=launching the rockets] [$ ... another long running task ] (example)","title":"Live STATUS update"},{"location":"tutorial/","text":"","title":"Tutorial"}]} \ No newline at end of file diff --git a/syntax/index.html b/syntax/index.html index ec18da0..b281e85 100644 --- a/syntax/index.html +++ b/syntax/index.html @@ -127,12 +127,7 @@

Chat historySubsequent invocations with the same chat ID within the same module will have a memory of the preceding conversation.

Chat IDs are actually variables that contain an entire chat history (example usage).

Escaping

-

Normally, you would not need escaping, e.g. [:foo will evaluate to [:foo as text. But if you want to specify a MetaPrompt expression literally, use \ before the [ character: \[:foo] will evaluate to [:foo] as text, without special meaning. You can escape \ with another \, but only if it is positioned before a [:

-

\foo → (text \foo)

-

\\ → (text \\)

-

\[:foo] → (text [:foo])

-

\\[:foo] → (text \\) (variable foo)

-

\[some text -> (text [some text) - note that in this case the \ character disappears, although escaping does not happen because [some text is not a special MetaPrompt construct.

+

[ and ] must be escaped using \\.

Modules

Every .metaprompt file is a module. Conceptually, a module is a function that accepts a number of arguments, runs the executable parts, and returns text.

File imports