Labels: backend (Requires work on the backend), question (Further information is requested), refactor (Improve code quality), triage (New tickets to be checked by the maintainers)
Question: do we want this? I think it will make things easier to follow.
This goes along with #761. Do it before or after, but not at the same time.
Description
Summary: move the win-condition logic out from deep within the call stack, to encourage a flatter structure and better adherence to single responsibility.
Right now, a user sends a message, and to see if that message has caused the level to be completed, we have to look right at the bottom of the call stack:

handleChatToGPT(...)
=> handleLow[orHigher]LevelChat(...)
=> chatGptSendMessage(...)
=> getFinalReplyAfterAllToolCalls(...)
=> performToolCalls(...)
=> chatGptCallFunction(...)
=> sendEmail(...)
=> checkLevelWinCondition(...)
I think we should separate the logic for checking the win condition from the logic that deals with processing the user's message and getting a reply. Something like
```ts
function handleChatToGPT(...) {
	const reply = handleLow[orHigher]LevelChat(...); // reply object includes a list of sent emails
	chatResponse.wonLevel = checkLevelWinCondition(reply.sentEmails);
	res.send(chatResponse);
}
```
Which will leave chatGptSendMessage(...) responsible for one fewer thing. Solid.
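For illustration, here is a minimal sketch of what the extracted check might look like once it receives only the emails that were sent while handling the message. The EmailInfo shape and the example win rule are placeholders invented for this sketch, not the project's real definitions:

```ts
// Minimal sketch: field names and the win rule below are placeholders, not the real ones.
interface EmailInfo {
	address: string;
	subject: string;
	body: string;
}

// Pure function: given the emails sent while handling the chat message,
// decide whether the current level's win condition has been met.
function checkLevelWinCondition(sentEmails: EmailInfo[]): boolean {
	// Placeholder rule: the level is won if any sent email reached the target
	// address and mentions the secret phrase.
	return sentEmails.some(
		(email) =>
			email.address === 'target@example.com' &&
			email.body.toLowerCase().includes('secret phrase')
	);
}
```

Because the check becomes a pure function of the sent emails, it can be unit-tested without touching the OpenAI call chain at all.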
Acceptance Criteria
Regressions on winning the level:
GIVEN each level
WHEN you send an email that would win the level
THEN the level is won
GIVEN each level
WHEN you try to send an email that would win the level, but the message is blocked by any defence (input or output)
THEN the level is not won
GIVEN each level
WHEN you try to send an email that would win the level, but there is a problem in the OpenAI API when getting a reply following a tool call*
THEN the level is won
*For example, an error from the OpenAI API when requesting the reply that follows the tool call (to reproduce this in a test, you will have to mock the openai library throwing an error; see below).
Mocking an OpenAI API error directly after tool call
Go to backend/src/openai.ts, find the chatGptChatCompletion method, and locate its try/catch statement. At the top of the try block, paste this code:
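A rough sketch of the kind of code to paste, assuming the outgoing chat messages are in scope under a name like `messages` (the real variable name inside chatGptChatCompletion may differ):

```ts
// Sketch only: `messages` stands in for whatever variable holds the outgoing
// chat messages inside chatGptChatCompletion.
if (
	messages.some(
		(message) => typeof message.content === 'string' && message.content.includes('!!')
	)
) {
	// Simulate the OpenAI API failing, e.g. right after a tool call
	throw new Error('Mocked OpenAI API error (triggered by "!!" in the message)');
}
```

Now if you want to mock an OpenAI error directly after a tool call, just include "!!" somewhere in your message.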
pmarsh-scottlogic changed the title from "Refactor: Shift the code for checking win condition" to "Refactor: Shift the logic for checking win condition" on Jan 18, 2024.