From 6b45a71e67574e2ecd33967cdc5ca7faea99ab58 Mon Sep 17 00:00:00 2001
From: Craig Jellick
Date: Thu, 20 Jun 2024 12:09:40 -0700
Subject: [PATCH 1/3] docs: add FAQ about caching

Signed-off-by: Craig Jellick
---
 docs/docs/09-faqs.md | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/docs/docs/09-faqs.md b/docs/docs/09-faqs.md
index a06e5818..c26418e9 100644
--- a/docs/docs/09-faqs.md
+++ b/docs/docs/09-faqs.md
@@ -1,11 +1,31 @@
 # FAQs

-#### I don't have Homebrew, how can I install GPTScript?
+### I don't have Homebrew, how can I install GPTScript?
 On MacOS and Linux, you can alternatively install via: `curl https://get.gptscript.ai/install.sh | sh`

 On all supported systems, you download and install the archive for your platform and architecture from the [releases page](https://github.com/gptscript-ai/gptscript/releases).

-#### Does GPTScript have an SDK or API I can program against?
+### Does GPTScript have an SDK or API I can program against?
 Currently, there are three SDKs being maintained: [Python](https://github.com/gptscript-ai/py-gptscript), [Node](https://github.com/gptscript-ai/node-gptscript), and [Go](https://github.com/gptscript-ai/go-gptscript). They are currently under development and are being iterated on relatively rapidly. The READMEs in each repository contain the most up-to-date documentation for the functionality of each.
+
+### I see there's a --disable-cache flag. How does caching work in GPTScript?
+
+GPTScript leverages caching to speed up execution and reduce LLM costs. There are two areas cached by GPTScript:
+- Git commit hash lookups for tools
+- LLM responses
+
+Caching is enabled for both of these by default. It can be disabled via the `--disable-cache` flag. Below is an explanation of how these areas behave when caching is enabled and disabled.
+
+#### Git commit hash lookups for tools
+
+When a remote tool or context is included in your script (like so: `Tools: github.com/gptscript-ai/browser`) and then invoked during script execution, GPTScript will pull the Git repo for that tool and build it. The tool's repo and build will be stored in your system's cache directory (at [$XDG_CACHE_HOME](https://pkg.go.dev/os#UserCacheDir)/gptscript/repos). Subsequent invocations of the tool leverage that cache. When the cache is enabled, GPTScript will only check for a newer version of the tool once an hour; if an hour hasn't passed since the last check, it will just use the one it has. If this is the first invocation and the tool doesn't yet exist in the cache, it will be pulled and built as normal.
+
+When the cache is disabled, GPTScript will check that it has the latest version of the tool (meaning the latest Git commit for the repo) on every single invocation of the tool. If GPTScript determines it already has the latest version, that build will be used as-is. In other words, disabling the cache DOES NOT force GPTScript to rebuild the tool; it only forces GPTScript to always check whether it has the latest version.
+
+#### LLM responses
+
+With regards to LLM responses, when the cache is enabled GPTScript will cache the LLM's response to a chat completion request. Each response is stored as a gob-encoded file in $XDG_CACHE_HOME/gptscript, where the file name is a hash of the chat completion request.
+
+It is important to note that all [messages in the chat completion request](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) are used to generate the hash that is used as the file name.
+This means that every message between user and LLM affects the cache lookup. So, when using GPTScript in chat mode, it is very unlikely you'll receive a cached LLM response. Conversely, non-chat GPTScript automations are much more likely to be consistent and thus make use of cached LLM responses.
\ No newline at end of file
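The two caching behaviors described in the FAQ above are easier to see with a couple of short sketches. Both are illustrative only: the types, helper functions, and directory names are simplified, hypothetical stand-ins rather than GPTScript's actual implementation.

First, the tool-repo side. With caching enabled, the remote repo is consulted at most once an hour; with `--disable-cache` it is consulted on every invocation, but an up-to-date build is still reused rather than rebuilt.

```go
// Hypothetical sketch of the tool-repo freshness check; not GPTScript's real code.
package main

import (
	"fmt"
	"time"
)

type cachedTool struct {
	commit      string    // commit the cached build was produced from
	lastChecked time.Time // last time the remote repo was consulted
}

// latestRemoteCommit stands in for asking the Git remote for its newest commit.
func latestRemoteCommit(repo string) string { return "abc123" }

// resolve decides whether the cached build can be reused or the tool must be rebuilt.
func resolve(repo string, tool *cachedTool, disableCache bool) string {
	// With the cache enabled, skip the upstream check if one happened within the last hour.
	if !disableCache && time.Since(tool.lastChecked) < time.Hour {
		return "reuse cached build (checked less than an hour ago)"
	}
	// Otherwise consult the remote; rebuild only if the commit actually changed.
	tool.lastChecked = time.Now()
	if latestRemoteCommit(repo) == tool.commit {
		return "reuse cached build (already at the latest commit)"
	}
	return "pull and rebuild at the new commit"
}

func main() {
	tool := &cachedTool{commit: "abc123", lastChecked: time.Now().Add(-2 * time.Hour)}
	fmt.Println(resolve("github.com/gptscript-ai/browser", tool, false)) // stale timestamp: remote is checked
	fmt.Println(resolve("github.com/gptscript-ai/browser", tool, true))  // cache disabled: remote is always checked
}
```

Second, the LLM-response side. The cache key is a hash of the entire chat completion request, so every message exchanged so far feeds into the lookup, and the response is written as a gob-encoded file named after that hash.

```go
// Hypothetical sketch of response caching keyed by a hash of the whole request;
// not GPTScript's real code, and the hash function shown is only an example.
package main

import (
	"crypto/sha256"
	"encoding/gob"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// completionRequest and completionResponse are simplified stand-ins for the real types.
type completionRequest struct {
	Model    string
	Messages []string // every message contributes to the cache key
}

type completionResponse struct {
	Content string
}

// cacheKey hashes the whole request, so changing any message changes the key.
func cacheKey(req completionRequest) (string, error) {
	raw, err := json.Marshal(req)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:]), nil
}

// lookup returns the cached response for req, if one exists on disk.
func lookup(dir string, req completionRequest) (*completionResponse, bool) {
	key, err := cacheKey(req)
	if err != nil {
		return nil, false
	}
	f, err := os.Open(filepath.Join(dir, key))
	if err != nil {
		return nil, false // cache miss
	}
	defer f.Close()
	var resp completionResponse
	if err := gob.NewDecoder(f).Decode(&resp); err != nil {
		return nil, false
	}
	return &resp, true
}

// store writes resp as a gob-encoded file named after the request hash.
func store(dir string, req completionRequest, resp completionResponse) error {
	key, err := cacheKey(req)
	if err != nil {
		return err
	}
	f, err := os.Create(filepath.Join(dir, key))
	if err != nil {
		return err
	}
	defer f.Close()
	return gob.NewEncoder(f).Encode(resp)
}

func main() {
	base, _ := os.UserCacheDir() // honors $XDG_CACHE_HOME on Linux
	dir := filepath.Join(base, "gptscript-cache-example")
	_ = os.MkdirAll(dir, 0o755)

	req := completionRequest{Model: "example-model", Messages: []string{"system prompt", "user question"}}
	if _, ok := lookup(dir, req); !ok {
		_ = store(dir, req, completionResponse{Content: "example answer"})
	}
	if resp, ok := lookup(dir, req); ok {
		fmt.Println("cache hit:", resp.Content)
	}
}
```

Because the key covers every message, an interactive chat session almost never repeats a request exactly, while a deterministic non-chat automation often does, which is why the FAQ notes that cached responses mostly benefit the latter.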
From f5a4069fd92eb8e3ba82849865e36089c6fe74b6 Mon Sep 17 00:00:00 2001
From: Craig Jellick
Date: Thu, 20 Jun 2024 12:55:06 -0700
Subject: [PATCH 2/3] docs: update generated cli docs

Signed-off-by: Craig Jellick
---
 docs/docs/04-command-line-reference/gptscript.md       | 2 +-
 docs/docs/04-command-line-reference/gptscript_eval.md  | 2 +-
 docs/docs/04-command-line-reference/gptscript_fmt.md   | 2 +-
 docs/docs/04-command-line-reference/gptscript_parse.md | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/docs/04-command-line-reference/gptscript.md b/docs/docs/04-command-line-reference/gptscript.md
index 8c5d1459..4da342fb 100644
--- a/docs/docs/04-command-line-reference/gptscript.md
+++ b/docs/docs/04-command-line-reference/gptscript.md
@@ -30,7 +30,7 @@ gptscript [flags] PROGRAM_FILE [INPUT...]
       --force-chat               Force an interactive chat session if even the top level tool is not a chat tool ($GPTSCRIPT_FORCE_CHAT)
       --force-sequential         Force parallel calls to run sequentially ($GPTSCRIPT_FORCE_SEQUENTIAL)
   -h, --help                     help for gptscript
-  -f, --input string             Read input from a file ("-" for stdin) ($GPTSCRIPT_INPUT)
+  -f, --input string             Read input from a file ("-" for stdin) ($GPTSCRIPT_INPUT_FILE)
       --list-models              List the models available and exit ($GPTSCRIPT_LIST_MODELS)
       --list-tools               List built-in tools and exit ($GPTSCRIPT_LIST_TOOLS)
       --no-trunc                 Do not truncate long log messages ($GPTSCRIPT_NO_TRUNC)
diff --git a/docs/docs/04-command-line-reference/gptscript_eval.md b/docs/docs/04-command-line-reference/gptscript_eval.md
index 94710662..4c485c7c 100644
--- a/docs/docs/04-command-line-reference/gptscript_eval.md
+++ b/docs/docs/04-command-line-reference/gptscript_eval.md
@@ -38,7 +38,7 @@ gptscript eval [flags]
       --disable-cache            Disable caching of LLM API responses ($GPTSCRIPT_DISABLE_CACHE)
       --dump-state string        Dump the internal execution state to a file ($GPTSCRIPT_DUMP_STATE)
       --events-stream-to string  Stream events to this location, could be a file descriptor/handle (e.g. fd://2), filename, or named pipe (e.g. \\.\pipe\my-pipe) ($GPTSCRIPT_EVENTS_STREAM_TO)
-  -f, --input string             Read input from a file ("-" for stdin) ($GPTSCRIPT_INPUT)
+  -f, --input string             Read input from a file ("-" for stdin) ($GPTSCRIPT_INPUT_FILE)
       --no-trunc                 Do not truncate long log messages ($GPTSCRIPT_NO_TRUNC)
       --openai-api-key string    OpenAI API KEY ($OPENAI_API_KEY)
       --openai-base-url string   OpenAI base URL ($OPENAI_BASE_URL)
diff --git a/docs/docs/04-command-line-reference/gptscript_fmt.md b/docs/docs/04-command-line-reference/gptscript_fmt.md
index c4e37856..511132d9 100644
--- a/docs/docs/04-command-line-reference/gptscript_fmt.md
+++ b/docs/docs/04-command-line-reference/gptscript_fmt.md
@@ -32,7 +32,7 @@ gptscript fmt [flags]
       --disable-cache            Disable caching of LLM API responses ($GPTSCRIPT_DISABLE_CACHE)
       --dump-state string        Dump the internal execution state to a file ($GPTSCRIPT_DUMP_STATE)
       --events-stream-to string  Stream events to this location, could be a file descriptor/handle (e.g. fd://2), filename, or named pipe (e.g. \\.\pipe\my-pipe) ($GPTSCRIPT_EVENTS_STREAM_TO)
-  -f, --input string             Read input from a file ("-" for stdin) ($GPTSCRIPT_INPUT)
+  -f, --input string             Read input from a file ("-" for stdin) ($GPTSCRIPT_INPUT_FILE)
       --no-trunc                 Do not truncate long log messages ($GPTSCRIPT_NO_TRUNC)
       --openai-api-key string    OpenAI API KEY ($OPENAI_API_KEY)
       --openai-base-url string   OpenAI base URL ($OPENAI_BASE_URL)
diff --git a/docs/docs/04-command-line-reference/gptscript_parse.md b/docs/docs/04-command-line-reference/gptscript_parse.md
index d2322a48..3dde0073 100644
--- a/docs/docs/04-command-line-reference/gptscript_parse.md
+++ b/docs/docs/04-command-line-reference/gptscript_parse.md
@@ -32,7 +32,7 @@ gptscript parse [flags]
       --disable-cache            Disable caching of LLM API responses ($GPTSCRIPT_DISABLE_CACHE)
       --dump-state string        Dump the internal execution state to a file ($GPTSCRIPT_DUMP_STATE)
       --events-stream-to string  Stream events to this location, could be a file descriptor/handle (e.g. fd://2), filename, or named pipe (e.g. \\.\pipe\my-pipe) ($GPTSCRIPT_EVENTS_STREAM_TO)
-  -f, --input string             Read input from a file ("-" for stdin) ($GPTSCRIPT_INPUT)
+  -f, --input string             Read input from a file ("-" for stdin) ($GPTSCRIPT_INPUT_FILE)
       --no-trunc                 Do not truncate long log messages ($GPTSCRIPT_NO_TRUNC)
       --openai-api-key string    OpenAI API KEY ($OPENAI_API_KEY)
       --openai-base-url string   OpenAI base URL ($OPENAI_BASE_URL)

From 62e6c9ba6a0f23fa3d10a743a79804b8c68342a4 Mon Sep 17 00:00:00 2001
From: Craig Jellick
Date: Thu, 20 Jun 2024 12:56:13 -0700
Subject: [PATCH 3/3] chore: run docs validation on all prs

The docs validation was restricted to PRs where the docs directory was
changed, but this misses the primary use case where the CLI help was
updated without the corresponding docs page being updated.

Signed-off-by: Craig Jellick

Update docs/docs/09-faqs.md

Co-authored-by: Nick Hale <4175918+njhale@users.noreply.github.com>
Signed-off-by: Craig Jellick
---
 .github/workflows/validate-docs.yaml | 6 +-----
 docs/docs/09-faqs.md                 | 1 +
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/.github/workflows/validate-docs.yaml b/.github/workflows/validate-docs.yaml
index 82ca4bf4..b017af94 100644
--- a/.github/workflows/validate-docs.yaml
+++ b/.github/workflows/validate-docs.yaml
@@ -1,13 +1,9 @@
 name: Validate docs build
 on:
   push:
-    paths:
-      - docs/**
     branches:
       - main
   pull_request:
-    paths:
-      - docs/**
     branches:
       - main

@@ -23,4 +19,4 @@ jobs:
           cache: false
           go-version: "1.22"
       - run: make init-docs
-      - run: make validate-docs
\ No newline at end of file
+      - run: make validate-docs
diff --git a/docs/docs/09-faqs.md b/docs/docs/09-faqs.md
index c26418e9..8e3f91d8 100644
--- a/docs/docs/09-faqs.md
+++ b/docs/docs/09-faqs.md
@@ -1,6 +1,7 @@
 # FAQs

 ### I don't have Homebrew, how can I install GPTScript?
+
 On MacOS and Linux, you can alternatively install via: `curl https://get.gptscript.ai/install.sh | sh`

 On all supported systems, you download and install the archive for your platform and architecture from the [releases page](https://github.com/gptscript-ai/gptscript/releases).