From cca08cde904f8a5527e12462f0778d91ac27224a Mon Sep 17 00:00:00 2001
From: Robin Xiang
Date: Thu, 11 Jul 2024 08:43:54 +0800
Subject: [PATCH] fix description about the execution order of the
 ai-request-transformer plugin (#7628)

Fix: fix description about the execution order of the ai-request-transformer.
---
 app/_hub/kong-inc/ai-request-transformer/overview/_index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/_hub/kong-inc/ai-request-transformer/overview/_index.md b/app/_hub/kong-inc/ai-request-transformer/overview/_index.md
index 92b4481d63ba..201efe8e0369 100644
--- a/app/_hub/kong-inc/ai-request-transformer/overview/_index.md
+++ b/app/_hub/kong-inc/ai-request-transformer/overview/_index.md
@@ -9,7 +9,7 @@ This plugin supports `llm/v1/chat` style requests for all of the same providers
 
 It also uses all of the same configuration and tuning parameters as the AI Proxy plugin, under the [`config.llm`](/hub/kong-inc/ai-request-transformer/configuration/#config-llm) block.
 
-The AI Request Transformer plugin runs **after** all of the [AI Prompt](/hub/?search=ai%2520prompt) plugins, but **before** the
+The AI Request Transformer plugin runs **before** all of the [AI Prompt](/hub/?search=ai%2520prompt) plugins and the
 AI Proxy plugin, allowing it to also introspect LLM requests against the same, or a different, LLM.
 
 ## How it works