Improvement: Increasing Token Usage Efficiency (In Progress) #678
Check out our latest X thread outlining some of the upcoming dev priorities related to error fixing and token efficiency, specifically breaking error loops. All landing over this week and next; ETAs for each are included in the thread!
📣 Update: We are hosting live office hours on YouTube tomorrow where we will be announcing new features that greatly enhance token efficiency, Bolt.new's overall intelligence, and your ability to intervene and control how the AI modifies your code. I will be posting updates here as well after the live stream, so stay tuned!
Why is a token reload more expensive than the monthly price for the same amount of tokens?
@yasinkocak the TL;DR is that this allows us to bring down the price of tokens for everyone on subscription plans. If you're interested, our CEO gave a more detailed explanation on the bolt.new weekly office hours two weeks ago.
Thank you, I really appreciate the response. What I'm concerned with now is that I've spent a little over $120 on the same project, due partly to a misunderstanding of how the app works, and partly to there being no clear breakdown of how many tokens are used for what (for example, whether more tokens go to fixes than to completing tasks). Since per-event token usage isn't clearly indicated, I would like to know if I can get a full or partial refund for some of the money I've already spent. I understand that you're in testing, basically beta, right now. But just as you're learning, I'm learning as well, and I think we should both give grace for that.
@chefbc2k all refund-related issues will be handled via [email protected], so please reach out there with a link to your comment. Thanks!
📣 Major Update: in our latest Office Hours session we announced a complete overhaul of how the AI will write code to the filesystem in the future. You can watch the clip from our Office Hours session on the new diff-based approach here. You can try using Bolt with the new Diffs approach at this preview URL: https://1f1a1008.bolt-cnr.pages.dev/ Please note: this is experimental, has bugs, and may completely destroy your project, so DO NOT edit projects that are important to you via this URL. Finally, you can check out last week's full bolt.new Office Hours session here for more details on what is coming next!
Hi, same here! I've used a "repair" prompt many times and lost TONS of tokens. On top of that, the problem is often caused by the code not being entirely finished. What I mean: when you ask Bolt to rewrite a file to fix a targeted error, sometimes it rewrites only parts of it and leaves a "... same code as before ..." placeholder in the middle instead of writing out the full code. So you have to re-trigger an action to ask for the full file instead of a comment in the middle. Again, that wastes a lot of tokens, and since it often rewrites many files at once, you have to check them one by one to see whether it made this mistake. This is something the Claude model does a lot, and I have experienced it even with the latest update. So even though I am very specific about my prompts now, there definitely needs to be a fix to improve token efficiency and avoid unnecessary bugs from these "uncompleted" files.
One of our community members created a helpful guide for prompting best practices that may help with token usage efficiency. You can view the full post in our Discord channel: 🚀 Ultimate Guide to bolt.new Prompting 📚 Resources & Tools 🏗️ Project Structure Best Practices
🛠️ Specific Implementation Tips UI Development
Authentication
Database Integration
Challenging/WIP:
⚡ Pro Tips
🎯 Example Full Project Prompt
🔄 If Things Go Wrong
📝 Note: bolt.new is actively developing (< 4 weeks in production). These practices are community-sourced and evolving. For latest updates, check the GitHub repository. Guide created from community discussions in the bolt.new Discord. Last updated: Nov 5, 2024. |
Background
Large language models (LLMs) process text as tokens: frequent character sequences within text and code. Under the hood, Bolt.new is powered mostly by Anthropic's Sonnet 3.5 AI model, so using Bolt consumes tokens that we must purchase from Anthropic.
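To build intuition for what a "token" costs, here is a minimal sketch of a token estimator. This is an illustrative heuristic only (a common rule of thumb of roughly four characters per token for English text), not Anthropic's actual tokenizer; real tokenizers split text differently, and code often tokenizes less efficiently than prose.

```python
# Rough token estimator. Assumption: ~4 characters per token, a common
# rule-of-thumb for English text. This is NOT the real tokenizer used
# by any specific model; it is only for building intuition about cost.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Return a rough token-count estimate for `text`."""
    return max(1, round(len(text) / chars_per_token))

prompt = "Add a login form to the homepage."
print(estimate_tokens(prompt))  # → 8 (rough estimate only)
```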
Our goal is for Bolt to use as few tokens as possible to accomplish each task, and here's why: 1) AI model tokens are one of our largest expenses, and if fewer are used, we save money; 2) users can get more done with Bolt and become fans/advocates; and 3) ultimately we can attract more users and continue investing in improving the platform!
When users interact with Bolt, tokens are consumed in 3 primary ways: chat messages between the user and the LLM, the LLM writing code, and the LLM reading the existing code to capture any changes made by the user.
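The three channels above can be sketched as a simple cost model. This is an assumption for illustration, not Bolt's real accounting, but it shows why re-reading existing code often dominates the cost of a small edit in a large project.

```python
# Illustrative cost model (an assumption, not Bolt's real accounting)
# of the three ways an interaction consumes tokens: the chat message,
# the code the LLM writes, and the existing code the LLM re-reads.

def interaction_cost(chat_tokens: int,
                     code_written_tokens: int,
                     code_read_tokens: int) -> int:
    """Total tokens consumed by one interaction, in this simplified model."""
    return chat_tokens + code_written_tokens + code_read_tokens

# Even a small edit can be costly if the model re-reads a large codebase:
print(interaction_cost(chat_tokens=200,
                       code_written_tokens=500,
                       code_read_tokens=12_000))  # → 12700
```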
There are numerous product changes that we are working on to increase token usage efficiency, and in the meantime there are many tips and tricks you can implement in your workflow to be more token efficient:
Upcoming Improvements
Optimizing token usage is a high priority for our team, and we are actively exploring several R&D initiatives aimed at improving token usage efficiency automatically behind the scenes. In the meantime, we will be shipping multiple features in the next 2 weeks that give the user more control over the AI including controlling when it can write code to the filesystem, and which files it is able to modify. These additional controls, paired with the tips below should help you manage your tokens more efficiently. Subscribe to this issue to be notified when those new features land.
While we work on these improvements, here are some strategies you can use to maximize token usage efficiency today:
Avoid Repeated Automated Error "Fix" Attempts
Continuously clicking the automatic "fix" button can lead to unnecessary token consumption. After each attempt, review the result and refine your next request if needed. Some programming challenges cannot be solved automatically by the AI, so if the automatic fix fails, it is a good idea to do some research and intervene manually.
Leverage the Rollback Functionality
Use the rollback feature to revert your project to a previous state without consuming tokens. This is essentially an undo button that can take you back to any prior state of your project, which can save time and tokens if something goes wrong. Keep in mind that there is no "redo" function, so be sure you want to revert before using this feature: it is final, and all changes made after the rollback point will be permanently removed.
Crawl, Walk, Run
Make sure the basics of your app are scaffolded before describing the details of more advanced functionality for your site.
Use Specific and Focused Prompts
When prompting the AI, be clear and specific. Direct the model to focus on certain files or functions rather than the entire codebase, which can improve token usage efficiency. This approach is not a magic fix, but anecdotally we've seen evidence that it helps. Some specific prompting strategies that other users have reported as helpful are below, and a ton more can be found in the comment thread below:
Understand Project Size Impact
As your project grows, more tokens are required to keep the AI in sync with your code. Larger projects (and longer chat conversations) demand more resources for the AI to stay aware of the context, so it's important to be mindful of how project size impacts token usage.
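As a rough sketch of why this matters: if the model re-reads the codebase on each request, the per-request cost scales with project size, so the same number of requests costs far more on a large project. The numbers and model below are hypothetical, for illustration only.

```python
# Hypothetical sketch: cumulative token cost when each request must
# re-read the whole project. Per-request cost scales with project size,
# so N requests cost roughly N * (project_size + message_overhead).

def cumulative_tokens(project_size_tokens: int,
                      per_message_tokens: int,
                      num_requests: int) -> int:
    """Rough cumulative cost across `num_requests` interactions."""
    return num_requests * (project_size_tokens + per_message_tokens)

small = cumulative_tokens(2_000, 300, 10)    # small project, 10 requests
large = cumulative_tokens(40_000, 300, 10)   # same 10 requests, 20x the code
print(small, large)  # → 23000 403000
```

This is one reason focused prompts and smaller scopes help: anything that reduces how much code the AI must re-read shrinks the dominant term.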
Advanced Strategy: Reset the AI Context Window
If the AI seems stuck or unresponsive to commands, consider refreshing the Bolt.new chat page in your browser. This resets the LLM’s context window, clears out prior chat messages, and reloads your code in a fresh chat session. This will clear the chat, so you will need to remind the AI of any context not already captured in the code, but it can help the AI regain focus when it is overwhelmed due to the context window being full.
We appreciate your patience during this beta period and look forward to updating this thread as we ship new functionality and improvements to increase token usage efficiency!