
Dummy endpoint for load testing #830

Closed · chriswilty opened this issue Feb 8, 2024 · 0 comments · Fixed by #832

Assignees: kclark-scottlogic
Labels: backend (Requires work on the backend), testing (Related to testing)

@chriswilty (Member) commented:

Load testing with k6 has the potential to cost us a lot in OpenAI usage while we work out how best to configure and run the tests.

It would be worth adding a (temporary) test endpoint to the backend that we can hit instead of posting real chat requests. Ideally this endpoint should do something involving computation, perhaps using some memory, before returning a response, so that we simulate a request that does not simply respond immediately (unlike the healthcheck endpoint).

I suggest we use POST /test/load as our endpoint, to make its purpose explicit.
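
To make the idea concrete, here is a minimal sketch of what such a route could look like, assuming an Express backend in TypeScript; the loop size and response shape are placeholders for illustration, not a prescribed design:

```typescript
// Sketch only: an Express-style handler that burns a little CPU and memory
// before responding, so the request doesn't return instantly like the
// healthcheck does. The amount of work here is an arbitrary placeholder.
import { Request, Response, Router } from 'express';

const router = Router();

router.post('/test/load', (_req: Request, res: Response) => {
	// Allocate some memory and do some throwaway computation.
	const data: number[] = [];
	for (let i = 0; i < 100_000; i++) {
		data.push(Math.sqrt(i) * Math.random());
	}
	const sum = data.reduce((acc, n) => acc + n, 0);
	res.status(200).json({ sum });
});

export default router;
```

Doing a modest amount of real work per request should give a more realistic latency and resource profile than an immediate empty 200 response.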

Note that there is likely to be value in hitting OpenAI with real chat requests once or twice, after we have configured and tuned our tests using the dummy endpoint. In fact, there could also be value in leaving this test endpoint in place.
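
For reference, a k6 script targeting the dummy endpoint could be as simple as the following; the host, port and load profile are assumptions for illustration (recent k6 releases can run TypeScript directly, otherwise transpile to JS first):

```typescript
// Sketch: a k6 script hitting the dummy endpoint instead of the real chat
// route. Host, port and load profile are illustrative assumptions.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
	vus: 10,          // 10 virtual users
	duration: '30s',  // for 30 seconds
};

export default function () {
	const res = http.post('http://localhost:3001/test/load');
	check(res, { 'status is 200': (r) => r.status === 200 });
	sleep(1);
}
```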

FYI @kclark-scottlogic

@chriswilty added the testing (Related to testing) and backend (Requires work on the backend) labels on Feb 8, 2024
@kclark-scottlogic self-assigned this on Feb 12, 2024
@kclark-scottlogic linked pull request #832 on Feb 12, 2024 that will close this issue