[backport -> release/3.6.x] feat(plugins): ai-prompt-guard-plugin #12337 #12427
Automated backport to release/3.6.x, triggered by a label in #12341.

Original description
Summary
This commit offers another plugin that extends the functionality of "AI Proxy" in #12207.
It compares the user's llm/v1/chat or llm/v1/completions request against a series of regular expressions, held in two config arrays:
- If the request matches any regex pattern in deny, the caller is 400'd.
- If any allow patterns are specified, but the request matches none of them, the caller is also 400'd.
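The allow/deny rules above can be sketched as follows. This is a minimal illustration of the matching order (deny first, then allow), not the plugin's actual Lua implementation; the function and parameter names are hypothetical.

```python
import re


def check_prompt(prompt: str, allow_patterns: list[str], deny_patterns: list[str]) -> bool:
    """Return True if the prompt passes; False means the caller would be 400'd.

    Hypothetical sketch of the allow/deny semantics described above; the real
    plugin's config fields and internals may differ.
    """
    # Any deny match rejects the request outright.
    for pattern in deny_patterns:
        if re.search(pattern, prompt):
            return False
    # If an allow list is configured, the prompt must match at least one entry.
    if allow_patterns:
        return any(re.search(p, prompt) for p in allow_patterns)
    # No allow list configured and nothing denied: the request passes.
    return True
```

For example, with deny=["credit card"] a prompt containing "credit card" is rejected, while with allow=["weather"] any prompt not mentioning weather is rejected.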
An engineering design document is available for this feature, but it is quite simple. Comprehensive tests are supplied.
The reason for its development is that many of our users would like to block specific prompts, words, or phrases, or otherwise more tightly control how an AI / LLM model is used when called via Kong; this applies especially to the AI Proxy plugin, and this plugin simplifies that process.

Checklist
- Changelog file created under changelog/unreleased/kong, or skip-changelog label added on PR if a changelog is unnecessary.
- README.md