Replies: 2 comments
-
This is a common problem and something the devs are still working on. Every command needs code to properly split its output so it doesn't exceed the 4,000-token limit of the GPT-3.5 model; when the output exceeds that limit, you get the error above. One way to avoid this crash is to set very detailed goals and AI roles, so that your Auto-GPT breaks your goals down into smaller tasks. That gives the system a chance to offload the STM (short-term memory) it uses to store everything you tell it. A rough sketch of that kind of output splitting is below.
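As a minimal sketch of the output-splitting idea (not Auto-GPT's actual implementation): cut a long command result into chunks that each stay under a token budget, counting tokens with `tiktoken`. The 4,000-token budget and the tokenizer choice are assumptions here.

```python
# Sketch: split long text into token-bounded chunks. The budget of 4000
# tokens and the gpt-3.5-turbo tokenizer are assumptions for illustration.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def split_by_tokens(text, max_tokens=4000):
    """Split text into pieces of at most max_tokens tokens each."""
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]
```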
-
Instead, to tackle that, what you can do is vectorize all the context and run the query against it. That way it fetches only the required information, or the most similar info, rather than sending everything to the model.
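Here's a minimal sketch of that approach, using the pre-1.0 `openai` Python SDK (matching the `openai.error` traceback below) and cosine similarity over embeddings. The embedding model name, chunk contents, and `k` value are assumptions, and `openai.api_key` is assumed to be configured.

```python
# Sketch: embed context chunks, then retrieve only the chunks most similar
# to the query, so the prompt stays under the model's token limit.
import openai
import numpy as np

EMBED_MODEL = "text-embedding-ada-002"  # assumed embedding model

def embed(texts):
    """Embed a list of strings; returns an (n, d) array of vectors."""
    resp = openai.Embedding.create(model=EMBED_MODEL, input=texts)
    return np.array([item["embedding"] for item in resp["data"]])

def top_k_chunks(query, chunks, k=3):
    """Return the k context chunks most similar to the query."""
    vectors = embed(chunks)    # one vector per stored context chunk
    q = embed([query])[0]      # vector for the query itself
    # Cosine similarity between the query and every chunk.
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    best = np.argsort(sims)[::-1][:k]
    return [chunks[i] for i in best]

# Only the top-k chunks go into the prompt, not the whole history.
chunks = ["chunk one of stored context...",
          "chunk two of stored context...",
          "chunk three of stored context..."]
context = top_k_chunks("What did the user ask for?", chunks)
```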
-
AutoGPT bot terminated with the following error:
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 5899 tokens. Please reduce the length of the messages.
This error is thrown by the OpenAI API. It's telling you that the model you're using can handle a maximum of 4097 tokens in a single request, but the messages you're trying to process contain 5899 tokens, which exceeds that limit. Note that the limit covers the prompt plus the requested completion, so the `max_tokens` you reserve for the reply counts against it too. To resolve the error, reduce the length of your messages so the whole request fits within 4097 tokens.
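One way to do that is to trim the oldest messages from the chat history until the request fits. The sketch below uses `tiktoken`; the 4097 budget comes from the error above, while the per-message overhead and the tokens reserved for the reply are rough assumptions.

```python
# Sketch: drop the oldest messages (keeping the system prompt) until the
# history fits under the context limit. Overhead values are approximate.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def num_tokens(messages, per_message_overhead=4):
    """Approximate token count for a list of chat messages."""
    return sum(len(enc.encode(m["content"])) + per_message_overhead
               for m in messages)

def trim_to_fit(messages, limit=4097, reserve_for_reply=512):
    """Trim history so prompt tokens + reserved reply tokens <= limit."""
    budget = limit - reserve_for_reply
    kept = list(messages)
    while len(kept) > 1 and num_tokens(kept) > budget:
        kept.pop(1)  # keep kept[0] (the system prompt), drop the oldest turn
    return kept
```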