
Don't wait for LLM evaluation #333

Closed
gsproston-scottlogic opened this issue Sep 29, 2023 · 2 comments · Fixed by #375
Labels: backend (Requires work on the backend), enhancement (New feature or request)

Comments

@gsproston-scottlogic (Contributor) commented Sep 29, 2023

When the user sends a message to the backend, the evaluator LLM is used to check whether the user's message is malicious in any way.
Like other LLM calls, this takes a while (roughly 5 seconds), and the backend currently waits for it to complete before proceeding.
We could instead start this evaluator call in the background while the backend continues as if the message were safe.
When the evaluator call eventually resolves, we can decide whether or not to block the message.

Some considerations:

  • Do not send the reply until the evaluator call has resolved.
  • The user's session is modified during message processing. We shouldn't do this until the evaluator has confirmed the message is OK.
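A minimal sketch of the proposed pattern, assuming a TypeScript backend: start the evaluator call without awaiting it, proceed with the main chat call concurrently, and only await the evaluator's verdict just before sending the reply. The function names (`evaluateMessage`, `askChatModel`, `handleMessage`) and the boolean "safe" verdict are hypothetical stand-ins, not the project's actual API.

```typescript
// Hypothetical evaluator LLM call: here it simply flags messages
// containing "hack", with a delay to simulate LLM latency.
async function evaluateMessage(message: string): Promise<boolean> {
  await new Promise((resolve) => setTimeout(resolve, 50));
  return !message.toLowerCase().includes("hack");
}

// Hypothetical main chat-model call, also simulated with a delay.
async function askChatModel(message: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 50));
  return `reply to: ${message}`;
}

async function handleMessage(message: string): Promise<string> {
  // Kick off the evaluator in the background -- note: no await yet.
  const evaluation = evaluateMessage(message);

  // Proceed optimistically with the main model call in parallel.
  const reply = await askChatModel(message);

  // Before replying (or committing any session changes), wait for
  // the evaluator's verdict and block the message if it is unsafe.
  const isSafe = await evaluation;
  return isSafe ? reply : "Message blocked by evaluator.";
}
```

Because both promises run concurrently, the total latency is roughly the slower of the two calls rather than their sum, while the reply is still held until the evaluator resolves.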

Implementation details are in the linked PR.
Acceptance criteria: smoke test that everything works correctly with defences turned on, messages blocked where appropriate, etc.

@gsproston-scottlogic gsproston-scottlogic added enhancement New feature or request backend Requires work on the backend labels Sep 29, 2023
@gsproston-scottlogic gsproston-scottlogic added this to the Nice to have milestone Sep 29, 2023
@gsproston-scottlogic (Contributor, Author)

Reopening, as this skipped testing.

@asaha-scottlogic

[Two screenshots attached]

Tested a few examples, attaching the simplest one. Happy with the fact that the backend does not wait or stop completely to check the evaluator LLM result, but proceeds to speak to the actual model (and even the QA LLM if required). It does not post a reply, however, until it has received a 'NO' from the evaluator LLM.
