Replies: 2 comments 1 reply
-
Hi @Hydra84034 I have been able to run local models and redirect OpenAI requests. I am using a program called "LM Studio" (https://lmstudio.ai/). It can run a local inference server that accepts OpenAI-formatted requests from AutoGPT. So all I had to do was install LM Studio, download and run the model I wanted locally, then edit the .env file to redirect OPENAI_API_BASE_URL to localhost on port 1234 (path /v1):

```
## OPENAI_API_BASE_URL - Custom url for the OpenAI API, useful for connecting to custom backends.
## No effect if USE_AZURE is true; leave blank to keep the default url. The following is an example:
# OPENAI_API_BASE_URL=http://localhost:443/v1
OPENAI_API_BASE_URL=http://localhost:1234/v1
```

The cool thing is that you can watch the server log and see the model running. Do note that this is VERY intensive. I am running a GTX 1080 Ti, a Threadripper 2950X, and 64 GB of RAM, and it still takes a while.
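To confirm the redirect works before launching AutoGPT, a quick script against the local endpoint helps. This is a minimal sketch assuming LM Studio's server is listening on port 1234 and the openai Python package (v1.x) is installed; the `api_key` and `model` values are placeholders, since LM Studio ignores the key and routes requests to whichever model you have loaded.

```python
# Sanity check: send one chat completion to the local LM Studio server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # same URL as OPENAI_API_BASE_URL in .env
    api_key="lm-studio",  # placeholder; LM Studio ignores it, but the client requires one
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves the currently loaded model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this prints a reply, AutoGPT should reach the same endpoint once OPENAI_API_BASE_URL points to it.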
-
I solved it by adding a new block. Solution:
The code is not perfect, as I am not a programmer, but it works well.
-
I have a local model running on my computer, and I was wondering whether I can use it to power AutoGPT instead of the OpenAI model. Is this possible? And if so, how do I set it up?