LLM integration #304

Open
ekg opened this issue Dec 12, 2024 · 0 comments
ekg commented Dec 12, 2024

Sometimes it's convenient to describe, in natural language, the code you want and the changes you want made. I'd like to make it possible to feed the contents of a pane through an LLM to transform or patch them.

In the UI, I want a button to click (potentially placed next to the name of the programming language) that divides the pane in half. The bottom portion should contain the code in question. The top should contain a chat window for interacting with an LLM that proposes changes to the code using an aider.chat-style approach. This involves sending diffs in the form of "SEARCH/REPLACE" blocks that define how the code is changed. Implementing this in full detail may require follow-up issues, but this is what we want to work towards.
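To make the SEARCH/REPLACE mechanism concrete, here is a minimal sketch of applying aider-style blocks to a pane's text. The block delimiters follow aider's conventions; the function name and error handling are illustrative, not a committed API.

```python
import re

def apply_search_replace(source: str, patch: str) -> str:
    """Apply aider-style SEARCH/REPLACE blocks to source text.

    Each block has the form:
        <<<<<<< SEARCH
        ...exact lines to find...
        =======
        ...replacement lines...
        >>>>>>> REPLACE
    """
    block_re = re.compile(
        r"<<<<<<< SEARCH\n(.*?)\n=======\n(.*?)\n>>>>>>> REPLACE",
        re.DOTALL,
    )
    for search, replace in block_re.findall(patch):
        if search not in source:
            # Surface a clear failure so the chat can ask the LLM to retry.
            raise ValueError(f"SEARCH block not found in source:\n{search}")
        # Replace only the first occurrence to keep edits predictable.
        source = source.replace(search, replace, 1)
    return source
```

A failed match (e.g. the LLM hallucinated code that isn't in the pane) should be reported back into the chat rather than silently ignored, which is why the sketch raises instead of skipping.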

The system will need to monitor the execution context and incorporate errors into the chat interaction. These errors should effectively be copied into the chat box, provided they don't become too profuse. For the initial version, we'll simply paste the current state of the code before the user prompt in the chat. All of this interaction should be visible in the log for reference.

A system prompt will explain to the model the context in which it is operating and the behavior expected of it. This should be configurable for each language type, and locally editable in each pane via /commands in the chat.
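One way to structure per-language defaults with pane-local overrides set via chat /commands; the prompt texts, command names (`/prompt`, `/prompt-reset`), and class shape are hypothetical placeholders, not the project's actual API:

```python
from typing import Optional

# Hypothetical per-language system prompt defaults.
DEFAULT_PROMPTS = {
    "python": "You edit Python code. Reply with SEARCH/REPLACE blocks only.",
    "rust": "You edit Rust code. Reply with SEARCH/REPLACE blocks only.",
}

class PanePromptConfig:
    """System prompt configuration scoped to a single pane."""

    def __init__(self, language: str):
        self.language = language
        self.override: Optional[str] = None  # pane-local edit, if any

    def system_prompt(self) -> str:
        # Pane-local override wins; otherwise fall back to the
        # language default, then a generic prompt.
        return self.override or DEFAULT_PROMPTS.get(
            self.language,
            "You edit code. Reply with SEARCH/REPLACE blocks only.",
        )

    def handle_command(self, line: str) -> bool:
        """Handle pane-local chat /commands; return True if consumed."""
        if line.startswith("/prompt "):
            self.override = line[len("/prompt "):]
            return True
        if line == "/prompt-reset":
            self.override = None
            return True
        return False
```

Messages that are not consumed as commands would fall through to the normal chat path.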

This issue will be considered complete when:

  1. Each pane can be manipulated by a chat with an LLM
  2. The pane's code is seamlessly updated according to the output of the LLM
  3. The LLM sees any errors that occur, and they are fed into the chat appropriately
  4. System prompts are configurable per language
  5. Local prompt customization works through chat commands
  6. All interactions are logged and traceable