Ollama support for LLM backend #97

Open
rchan26 opened this issue Sep 9, 2024 · 4 comments
rchan26 (Contributor) commented Sep 9, 2024

Somewhat related to #65

In the past I've used Ollama for local LLM inference. Would it be useful to add this support to the library? I'd be happy to work on adding support for using an Ollama API endpoint as the LLM backend.

andimarafioti (Member) commented

Can Ollama be used via an API? There is a PR open for the API part.

rchan26 (Contributor, Author) commented Sep 9, 2024

@andimarafioti I think the PR is only for the OpenAI API. I was just suggesting offering Ollama API support. In particular, it's possible to run an Ollama server (either locally or on a slightly better machine) and then query it using the Ollama REST API (like here).

The changes/additions would be similar to the open PR #81, but with a class for Ollama API support that queries Ollama endpoints.
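For reference, here's a minimal sketch of what such a class could look like. The handler name, its interface, and the default model are hypothetical (not from any existing PR), but the `/api/chat` endpoint and payload shape follow Ollama's documented REST API:

```python
import requests


class OllamaLanguageModelHandler:
    """Hypothetical handler that sends chat requests to a running Ollama server."""

    def __init__(self, base_url="http://localhost:11434", model="llama3.1"):
        # localhost:11434 is Ollama's default address; the model is an assumption
        self.base_url = base_url
        self.model = model

    def generate(self, prompt: str) -> str:
        # POST to Ollama's /api/chat endpoint; stream=False returns one JSON object
        response = requests.post(
            f"{self.base_url}/api/chat",
            json={
                "model": self.model,
                "messages": [{"role": "user", "content": prompt}],
                "stream": False,
            },
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["message"]["content"]


if __name__ == "__main__":
    handler = OllamaLanguageModelHandler()
    print(handler.generate("Hello, what can you do?"))
```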

mattfro commented Oct 21, 2024

This would be great, since some people already run Ollama. Like me :)

rchan26 (Contributor, Author) commented Oct 21, 2024

Happy to work on this at some point this week. Functionality was also added to Ollama last week to run any GGUF model from the HF Hub: https://huggingface.co/docs/hub/en/ollama
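Per that documentation, Ollama accepts `hf.co/{username}/{repository}` references as model names, so the hypothetical handler sketched above could point straight at a Hub GGUF repo (the specific repo here is just an illustration):

```python
# Reusing the hypothetical OllamaLanguageModelHandler from above; the hf.co/...
# model reference follows https://huggingface.co/docs/hub/en/ollama
handler = OllamaLanguageModelHandler(model="hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF")
print(handler.generate("Summarise what Ollama does in one sentence."))
```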
