Can't run simulation "TOKEN LIMIT EXCEEDED" error #141

Open
justtiberio opened this issue Feb 8, 2024 · 9 comments

justtiberio commented Feb 8, 2024

I installed everything correctly and it all seems to work, but when I try to run it, even with the step count set to 1, it just prints "TOKEN LIMIT EXCEEDED". I'm not sure where the problem is; did OpenAI change something that broke the code? I have a paid API key at usage tier 3, so it should work. I tracked the API key the code is using, and it seems it's not even calling GPT-3; it only makes calls to "text-embedding-ada-002-v2". If the OpenAI API usage tracking is accurate, it made no calls to the GPT API at all.

Has anyone been able to run this recently? Does anyone have an idea what could be the cause of this error?

Edit: Also, it always ends in an error before asking to enter an option again:

Today is February 13, 2023. From 00:00AM ~ 00:00AM, Isabella Rodriguez is planning on TOKEN LIMIT EXCEEDED.
In 5 min increments, list the subtasks Isabella does when Isabella is TOKEN LIMIT EXCEEDED from  (total duration in minutes 1440): 
1) Isabella is
TOKEN LIMIT EXCEEDED
TOODOOOOOO
TOKEN LIMIT EXCEEDED
-==- -==- -==- 
TOODOOOOOO
TOKEN LIMIT EXCEEDED
-==- -==- -==- 
Traceback (most recent call last):
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/reverie.py", line 468, in open_server
    rs.start_server(int_count)
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/reverie.py", line 379, in start_server
    next_tile, pronunciatio, description = persona.move(
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/persona.py", line 222, in move
    plan = self.plan(maze, personas, new_day, retrieved)
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/persona.py", line 148, in plan
    return plan(self, maze, personas, new_day, retrieved)
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 959, in plan
    _determine_action(persona, maze)
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 573, in _determine_action
    generate_task_decomp(persona, act_desp, act_dura))
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 164, in generate_task_decomp
    return run_gpt_prompt_task_decomp(persona, task, duration)[0]
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/prompt_template/run_gpt_prompt.py", line 439, in run_gpt_prompt_task_decomp
    output = safe_generate_response(prompt, gpt_param, 5, get_fail_safe(),
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/prompt_template/gpt_structure.py", line 268, in safe_generate_response
    return func_clean_up(curr_gpt_response, prompt=prompt)
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/prompt_template/run_gpt_prompt.py", line 378, in __func_clean_up
    duration = int(k[1].split(",")[0].strip())
IndexError: list index out of range
Error.
Enter option: 
@alextveit

[quotes justtiberio's original report above]

Mine runs for a little while. I will re-run it in the next few days and can show you how far mine gets. It seems less like a bug and more like something tied to OpenAI limits that I may be able to change. But I'm glad to hear I'm not the only one with this issue, because I thought there might have been something wrong with my code.


EdgarHnd commented Feb 8, 2024

I have the exact same issue

@AurganicSubstance

I solved the problem. I will list the solution here and make a PR later (although the author seems to be inactive nowadays and unlikely to review PRs).

The issue is simple: on Jan 4th, 2024, OpenAI shut down all models with names like "text-davinci-00x" (x = 1, 2, 3). However, since model names are hard-coded all over the Generative Agents codebase instead of in a single setup file, you need to search the entire project for the keyword "davinci" and replace every occurrence with "gpt-3.5-turbo-instruct" (see: https://platform.openai.com/docs/deprecations).
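
For illustration, a minimal sketch of that search-and-replace (the directory path and the exact set of retired model names are assumptions; adjust them to your checkout):

import pathlib

# Assumed set of retired completion models and their replacement.
OLD_NAMES = ["text-davinci-001", "text-davinci-002", "text-davinci-003"]
NEW_NAME = "gpt-3.5-turbo-instruct"

# Walk the backend source tree and swap every hard-coded occurrence.
for path in pathlib.Path("reverie/backend_server").rglob("*.py"):
    text = path.read_text()
    if any(name in text for name in OLD_NAMES):
        for name in OLD_NAMES:
            text = text.replace(name, NEW_NAME)
        path.write_text(text)
        print(f"patched {path}")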

A note on why this error is hard to spot: the original code assumes the only possible failure is a token count out of bounds (the bounds are hard-coded); it did not anticipate models being retired. I found the real error by changing the try/except structure to print the exception message directly instead of the hard-coded "TOKEN LIMIT EXCEEDED" message.
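
As a sketch of that debugging change, assuming the request wrapper in gpt_structure.py looks roughly like the snippet below (an approximation, not the exact repo code):

import openai

def GPT_request(prompt, gpt_parameter):
    try:
        response = openai.Completion.create(
            model=gpt_parameter["engine"],
            prompt=prompt,
            temperature=gpt_parameter["temperature"],
            max_tokens=gpt_parameter["max_tokens"])
        return response.choices[0].text
    except Exception as e:
        # Surface the real failure (e.g. a retired model name) instead of
        # masking every error as a token-limit problem.
        print(f"GPT_request failed: {e!r}")
        return "TOKEN LIMIT EXCEEDED"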


198gslc commented May 13, 2024

I have changed all the "text-davinci-00x" names to "gpt-3.5-turbo-instruct" but still receive the TOKEN LIMIT EXCEEDED message. Why is that?


Gyanano commented Jun 9, 2024

[quotes justtiberio's original report and traceback above]

I also have the same issue, but I am not using OpenAI's models. Instead, I use another LLM that is compatible with the OpenAI interface, so I replaced all the model names and set the corresponding base_url, but the problem still came up easily. I also updated the OpenAI library and replaced the corresponding deprecated interfaces.
At that point most of the functionality worked. The remaining problem was that the model I'm using does not support OpenAI's legacy completions endpoint (/v1/completions). So I changed "openai.Completion.create" in "gpt_structure.py" to "openai.chat.completions.create" and changed the parameter "prompt=prompt" to "messages=[{"role": "user", "content": prompt}]". That resolved the compatibility issue, and when I ran it again it worked fine!
I hope my debugging experience helps.
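
For reference, a minimal sketch of that substitution using the >= 1.0 OpenAI client; the base_url, API key, and parameter names below are placeholders for whatever OpenAI-compatible endpoint you point it at:

from openai import OpenAI

# Placeholder endpoint and key for an OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def GPT_request(prompt, gpt_parameter):
    try:
        response = client.chat.completions.create(
            model=gpt_parameter["engine"],
            messages=[{"role": "user", "content": prompt}],  # was prompt=prompt
            temperature=gpt_parameter["temperature"],
            max_tokens=gpt_parameter["max_tokens"])
        return response.choices[0].message.content
    except Exception as e:
        print(f"request failed: {e!r}")
        return "TOKEN LIMIT EXCEEDED"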


ovjust commented Jul 4, 2024

[quotes Gyanano's comment above]

When I use another LLM, it throws an error:
[screenshot omitted]


Softspok commented Jul 20, 2024

[quotes justtiberio's original report and Gyanano's comment above]

I tried your method but it didn't work, so I looked at the OpenAI documentation. People using gpt-3.5-turbo only need to change "GPT_request" in "gpt_structure.py", modifying the parameter "prompt=prompt" to "messages=[{"role": "user", "content": prompt}]"; they don't need to change "openai.Completion.create" to "openai.chat.completions.create". But thanks very much anyway!
[screenshot omitted]


callmeautumn commented Jul 22, 2024

Changing all the "text-davinci-00x" names in "run_gpt_prompt.py" to "gpt-3.5-turbo-instruct" solved the problem for me.


Ycx2000 commented Jan 2, 2025

I'm having the same problem... None of these steps worked. Does anyone have more solutions?
