For load testing with k6, we could end up spending a lot of money on OpenAI access while we work out how best to configure and run the tests.
It would be worth adding a (temporary) test endpoint to the backend that we can hit instead of posting real chat requests. Ideally this endpoint should do some computation, perhaps allocating some memory, before returning its response, so that it simulates a request that takes real work to serve (unlike the healthcheck endpoint, which returns immediately).
I suggest we use `POST /test/load` as the endpoint, to make its purpose explicit. A rough sketch of what the handler could look like is below.
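Something along these lines, assuming our backend is Express with TypeScript (the route path matches the suggestion above; the hashing loop and the iteration/allocation sizes are arbitrary placeholders to tune until response times look realistic):

```typescript
import { createHash } from 'node:crypto';
import express, { Request, Response } from 'express';

const router = express.Router();

// Temporary load-testing endpoint: burns some CPU and holds some memory
// before responding, so it behaves more like a real chat request than the
// healthcheck does. Never touches OpenAI, so it costs nothing to hit.
router.post('/test/load', (req: Request, res: Response) => {
  const iterations = 5000; // arbitrary; tune until latency feels realistic
  const buffers: Buffer[] = [];
  let digest = JSON.stringify(req.body ?? {});
  for (let i = 0; i < iterations; i += 1) {
    digest = createHash('sha256').update(digest).digest('hex');
    buffers.push(Buffer.alloc(1024, i % 256)); // ~5 MB held until we respond
  }
  res.json({ reply: `dummy response (${digest.slice(0, 8)})` });
});

export default router;
```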
Note that there is likely to be value in hitting OpenAI with real chat requests once or twice, after we have configured and tweaked our tests against the dummy endpoint. In fact, there could also be value in leaving this test endpoint in place.
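For the k6 side, a script along these lines could drive the dummy endpoint (the base URL, request body, and stage shape are placeholders; k6 scripts are plain JavaScript):

```javascript
// load-test.js — run with: k6 run load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '30s', target: 20 }, // ramp up to 20 virtual users
    { duration: '1m', target: 20 },  // hold steady
    { duration: '30s', target: 0 },  // ramp down
  ],
};

export default function () {
  const res = http.post(
    'http://localhost:3001/test/load', // adjust to wherever the backend runs
    JSON.stringify({ message: 'hello' }),
    { headers: { 'Content-Type': 'application/json' } }
  );
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

Once the stages and thresholds are dialled in here, swapping the URL over to the real chat endpoint for the occasional paid run should be a one-line change.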
FYI @kclark-scottlogic