Feature Request: Lock Files That Are Completed #377
Comments
Does not fall on deaf ears! We're working on landing "checkpoints / roll backs" into the chat experience asap, but this is a very novel idea for how to inform the AI on what & what not to edit - will take this back to the team and noodle on it!
Thanks Eric
Much appreciated, as I know you have a lot on right now with the platform's popularity.
Regards
Carl
A related idea I've had: adding a "confirm before making changes" mode, or something like that. I imagine this as a checkbox next to the chat input; when it's enabled, the AI will never make changes to the code until you explicitly say "yes". The only thing the AI will do is send back the changes it proposes to make and ask you to either confirm them or suggest modifications; if the latter, it sends back an updated set of changes, and so on. Only once you say "yes, this looks good, do it" will it go and write those files. This way the AI doesn't just start writing files without first getting confirmation from you that those are the correct things for it to be doing. Curious if you have thoughts on this approach!
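The confirm-before-apply loop described above could be sketched roughly as follows. This is a minimal illustration only; every function name here is hypothetical and none of it reflects actual bolt.new internals:

```python
# Hypothetical sketch of a "confirm before making changes" mode:
# the assistant never writes files until the user explicitly approves
# the proposed change set. All callback names are illustrative.

def review_loop(propose, ask_user, apply_changes):
    """Keep proposing change sets until the user says "yes", then apply."""
    feedback = None
    while True:
        changes = propose(feedback)      # AI returns a proposed change set
        answer = ask_user(changes)       # "yes" approves; anything else is feedback
        if answer.strip().lower() == "yes":
            apply_changes(changes)       # only now are files actually written
            return changes
        feedback = answer                # fold the user's notes into the next proposal


# Tiny demonstration with stub callbacks standing in for the AI and the user:
answers = iter(["make the menu centered", "yes"])
applied = []

result = review_loop(
    propose=lambda fb: {"files": ["menu.css"], "note": fb or "initial draft"},
    ask_user=lambda ch: next(answers),
    apply_changes=applied.append,
)
```

The key property is that `apply_changes` is unreachable until the user's explicit approval, so no file is touched during the proposal round-trips.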
I think it's a good idea; however, if I understand you correctly, there's a flaw in it.
If you just had a checkbox to grant permission, that's not really controlling the AI, especially if it's generic permission. It needs to be permission at the level of specific files, or it will just feel like a free-for-all permission-or-not checkbox.
I would say that as long as the AI did a quick check, saw that files X, Y and Z are locked but that it needs access to them to fulfil the prompt, then perhaps it's smart enough to ask you for permission to edit them. Additionally, if we give it permission to do that, we need to be able to restore after the prompt has finished, as quite a number of times we have to re-prompt to get things as we need them. For example: today I was trying to centre a drop-down mega menu that it created for me. Could it do it? Nope! After a few dozen attempts I gave up; it was a vicious circle of prompt, error, test, try again. A restore button and a lock button would have helped me.
Another way I'm finding it would have been useful: I'm creating a cookie widget that runs next to an accessibility widget. The design of the off-canvas panel that comes in on click keeps changing in the cookie widget while I'm working on a completely different file, the accessibility one. So yeah.
Here's how I would do it: prompt; warn about locks (the specific locked files it needs to edit); do its thing; then the ability to restore, plus the manual ability to lock files.
So my answer, now I've thought it through: probably the best approach is a bit of both ideas.
Hope that helps. Let me know if it makes sense 🙂
Carl
Love it. Thanks for your thoughts here! One last question: in chat, did you try to instruct the AI not to edit specific files?
Yes, multiple times. After many attempts, it said, "let's try putting a @locked at the start of each file."
Its suggestion, not mine. So we did. But it went straight over that, despite having had that conversation, haha. Typical AI.
🤣 OK, thank you for confirming!!
FYI Eric, another 10 million tokens used up today. :-(
@creativewp24 token reloads have landed ("My Subscription" is in the lower left of the app), and we are getting very close to shipping the file-lock functionality (I saw an early version today at the engineering sync!)
This feature will be announced this week in #678
I have a problem whereby I create an element, section, widget, page, or even a function...
I build it with Bolt to a point where I am happy with it as a final product. Despite saying "thank you, lock that file, do not edit or make changes without my permission", and despite agreeing with the AI to place // @locked: This component is finalized. Do not modify without explicit permission. at the top of the file's code, for some reason, when I move on to new things, it still tinkers with, plays with, or changes the completed things.
Sadly, this is starting to feel like two steps forward, one step back. While I still make progress, it's slower than it needs to be, as I am re-working the already completed parts. An added frustration is the number of tokens this uses up too.
Don't get me wrong, I still love the platform. But this is a frustration for sure!
My idea would be to have a way to 'lock' a file to read-only for the AI, even if this is done manually in the editor. The tool will still need to read the file, but it will need to ask for permission, for us to manually unlock it, before it changes anything.
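The read-only lock described above could work something like the following sketch, which treats a file whose first line carries an @locked marker as off-limits unless the user grants permission. The marker convention and all function names are assumptions for illustration, not actual bolt.new behaviour:

```python
# Hypothetical sketch of the file-lock idea: a file whose first line
# contains "@locked" is read-only for the AI, which must ask the user
# before overwriting it. Names here are illustrative only.

LOCK_MARKER = "@locked"

def is_locked(source: str) -> bool:
    """A file counts as locked if its first line contains the lock marker."""
    first_line = source.splitlines()[0] if source else ""
    return LOCK_MARKER in first_line

def guarded_edit(path, source, new_source, ask_permission):
    """Refuse to overwrite a locked file unless the user grants permission."""
    if is_locked(source) and not ask_permission(path):
        return source        # edit rejected; file content unchanged
    return new_source        # edit allowed

# A finalized component carrying the marker on its first line:
locked = "// @locked: This component is finalized.\nexport const Menu = () => null;"

kept = guarded_edit("Menu.tsx", locked, "/* rewritten */", lambda p: False)
changed = guarded_edit("Menu.tsx", locked, "/* rewritten */", lambda p: True)
```

Unlike a comment the model is merely asked to respect, this check is enforced outside the model, which is presumably why a prompt-level @locked marker kept getting ignored.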
I hope this doesn't fall on deaf ears; the last support ticket did not. I really think this will make development of larger projects a lot easier, saving time and tokens.