Different Embedding Models #8
Currently yes - we're shipping support for open source embeddings very soon!
Hi @Filimoa, I commented here: #10 (comment), but afterwards I realized this issue is a better place for it: I would like to suggest adding support for LiteLLM.

I believe integrating LiteLLM would be a fantastic enhancement, because people could switch to their preferred embedding model API instead of being limited to OpenAI's for semantic processing. For example, if they use the LiteLLM Python client without self-hosting a LiteLLM proxy, their code could look like this (which is very consistent with the OpenAI Python client format). Reference: https://github.com/BerriAI/litellm. If someone self-hosts a LiteLLM proxy, which lets them call LLM APIs in an OpenAI-compatible format, the code could look as follows. There are also quite a few projects that use LiteLLM to call models from different providers: https://litellm.vercel.app/docs/project.
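To make the suggestion concrete, here is a minimal sketch of the two usage patterns described above (the original snippets did not survive in this copy of the thread). The function names and the model string are illustrative assumptions, not part of either library's API; the calls themselves follow LiteLLM's and the OpenAI client's documented shapes.

```python
def embed_via_litellm(texts, model="text-embedding-ada-002"):
    """Direct LiteLLM client call - same call shape as the OpenAI client,
    but the model string can point at any provider LiteLLM supports."""
    from litellm import embedding  # lazy import so the sketch loads without litellm installed
    response = embedding(model=model, input=texts)
    return [item["embedding"] for item in response.data]


def embed_via_litellm_proxy(texts, base_url="http://0.0.0.0:4000",
                            model="text-embedding-ada-002"):
    """Self-hosted LiteLLM proxy: the stock OpenAI client works unchanged,
    only base_url is pointed at the proxy (which holds the real provider keys).
    The base_url default here is the proxy's usual local address, assumed."""
    from openai import OpenAI  # lazy import, same reason as above
    client = OpenAI(api_key="anything", base_url=base_url)
    response = client.embeddings.create(model=model, input=texts)
    return [item.embedding for item in response.data]
```

Either way, the embedding step stays a drop-in replacement for a direct OpenAI call, so the rest of the semantic pipeline would not need to change.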
Your code example seems to imply that.