Token usage possibly high (OpenAI 429 error) #53
Replies: 1 comment 1 reply
-
Thanks! This really just depends on how many tokens you've loaded into context and how long your conversation is. It's definitely possible to use quite a lot if you have a large context and a long conversation. What does it look like if you run

One option for using fewer tokens is to use the

I'm thinking it may be useful to write up a guide explaining the different calls Plandex makes to the API so people can better understand and project token usage. I think it's actually pretty economical compared to a lot of other tools, especially agent-based tools, in that it gives you fine-grained control and doesn't try to ingest your whole codebase. But it also gives you the freedom to load a lot into context if you want to, so it all depends on how you're using it. I hope that helps!
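To make the "it depends on context size and conversation length" point concrete, here is a rough back-of-envelope sketch of how billed tokens grow when the loaded context plus the accumulating conversation history is re-sent on every request, which is the typical chat-API pattern. The ~4-characters-per-token ratio, the per-turn resend model, and the function names are my own assumptions for illustration, not Plandex's actual accounting (for exact counts you would use a real tokenizer such as tiktoken):

```python
def estimate_tokens(n_chars: int, chars_per_token: float = 4.0) -> int:
    """Very rough estimate: ~4 characters per token for English text/code.
    For exact counts, use a real tokenizer (e.g. tiktoken's cl100k_base)."""
    return max(1, round(n_chars / chars_per_token))

def project_conversation_tokens(context_chars: int, turns: int,
                                reply_chars: int = 2000) -> int:
    """Project total billed tokens when the loaded context plus the growing
    conversation history is re-sent on every request (typical chat-API usage)."""
    total, history_chars = 0, 0
    for _ in range(turns):
        total += estimate_tokens(context_chars + history_chars)  # prompt tokens
        total += estimate_tokens(reply_chars)                    # completion tokens
        history_chars += reply_chars  # the reply joins the history for the next turn
    return total
```

Under these assumptions, even a few hundred lines of loaded code re-sent across dozens of requests adds up fast, which is why 57 requests can plausibly reach hundreds of thousands of total tokens.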
-
Hey there, awesome project 👍
I hit a 429 error on the OpenAI API and wasn't sure if this was to be expected based on my usage. I used plandex to load a .cc and a .h file (<500 lines total) into context. My goal was to convert these files to Rust. It seems I hit 57 requests and 629,169 tokens on gpt-4-turbo. Is this to be expected? It seems a bit high, but I'm not too familiar with the API's usage patterns. Plandex didn't fully translate the code and left many functions unimplemented, so I wanted to run it again, but it has already cost me about 10 dollars via OpenAI.
My question might stem from a more general misunderstanding of how all the pieces work, so if everything is as expected, would it be possible to add a bit more documentation on this matter?
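For what it's worth, a 429 from the OpenAI API signals a rate or usage limit, and the standard client-side remedy is to retry with exponential backoff. A generic sketch follows; the `send_request` callable and `RateLimitError` class are stand-ins for whatever client call and exception type you actually use, and this is not Plandex's real retry logic:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 429-style error an API client might raise."""

def call_with_backoff(send_request, max_retries: int = 5,
                      base_delay: float = 1.0, sleep=time.sleep):
    """Retry a request on rate-limit errors with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return send_request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            # wait 1s, 2s, 4s, ... plus jitter so concurrent clients don't sync up
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Backoff only helps with transient rate limits, of course; it doesn't reduce the underlying token spend.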