[Feature Request/Question] Allow to use LLMs locally #167

Open
abruno- opened this issue May 22, 2024 · 1 comment
Comments


abruno- commented May 22, 2024

Hi, I discovered your plugin yesterday and found it quite interesting.

I found out that I can start a local server hosting AI models with lmstudio.ai. After setting that up, I wanted to use it from my Android Studio IDE, and while searching I found your plugin. I thought it would be possible to configure it to connect to localhost instead of OpenAI/Google/etc.

[Question]
Is there a way to configure the plugin to use a local LLM on my localhost?

I tried overwriting some of the pre-configured service APIs (OpenAI, Google, etc.), but it seems everything is read-only. When I change the endpoint of a preset option, apply the changes or save, and reopen the plugin settings, the modifications are reverted.

I also tried all the settings in "developer mode", but it seems I'm doing something wrong, as nothing I configure seems to change the results.

I wasn't able to connect to the server.
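For context: as far as I can tell, LM Studio's local server exposes an OpenAI-compatible API (by default at http://localhost:1234/v1), so my expectation was that once the base URL points at localhost, the plugin could send a standard chat-completions request. The body below is just a sketch of the shape I expected; "local-model" stands in for whatever model is loaded in LM Studio:

```json
{
  "model": "local-model",
  "messages": [
    { "role": "user", "content": "Hello from the Android Studio plugin" }
  ],
  "temperature": 0.7
}
```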

[Possible bug] I also tried using the "EDIT REQUEST" option beforehand, but it doesn't retain the configuration I set in the plugin config. For example, I set the model to "lalala1" in the plugin config, but when the "Edit request JSON" screen appears, all the JSON properties are loaded with different values (model, temperature, etc.).

[Request]
It would be great if the JSON configuration were visible and editable from the plugin config, allowing us to add our own "rules/requests" to the model up front, so we don't need to edit each request manually.
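Something along these lines, purely as a sketch of what I mean (the exact structure is hypothetical, not the plugin's actual settings format), where a system message carries the standing "rules":

```json
{
  "model": "local-model",
  "temperature": 0.2,
  "messages": [
    {
      "role": "system",
      "content": "Always answer with Kotlin examples and follow the project's code style."
    }
  ]
}
```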

[Request]
Lastly, is it possible to have everything operate within the IDE, similar to the "Gemini" or "GitHub Copilot" plugins?

Thank you for reading and for sharing your plugin!

abruno- changed the title from "[Feature Request/Question] Allow to use localhost" to "[Feature Request/Question] Allow to use LLMs locally" on May 23, 2024
@acharneski
Member

The base URLs should be configurable. That is a rarely used feature, though, so I'll give it some testing.
The problem with supporting additional LLM providers is that the APIs differ in some minor respects. I'd probably need to treat Ollama support as a separate feature; it's been on my to-do list for a while.
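For example (a rough sketch from memory, not something I've verified against the plugin yet): an OpenAI-style chat request, which LM Studio serves and which Ollama also mirrors under /v1/chat/completions, looks roughly like

```json
{ "model": "llama3", "messages": [{ "role": "user", "content": "hi" }], "temperature": 0.7 }
```

while Ollama's native endpoint (POST http://localhost:11434/api/chat) expects roughly

```json
{ "model": "llama3", "messages": [{ "role": "user", "content": "hi" }], "options": { "temperature": 0.7 }, "stream": false }
```

Same data, but different paths and slightly different fields, which is why a shared client needs provider-specific handling.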

This plugin has actually evolved away from being overly integrated with the IDE, because the components are used in several other projects, e.g. apps.simiacrypt.us.

Keeping this open to track the above
