Different Embedding Models #8

Open
gvlx opened this issue Apr 8, 2024 · 2 comments
Labels
enhancement New feature or request

Comments


gvlx commented Apr 8, 2024

Your code example seems to imply that.

Filimoa (Owner) commented Apr 8, 2024

Currently yes - we're shipping support for open-source embeddings very soon!

Filimoa added the enhancement (New feature or request) label Apr 8, 2024
Filimoa changed the title from "Is an OpenAI key required?" to "Open Source Embedding Models" Apr 8, 2024
Filimoa changed the title from "Open Source Embedding Models" to "Different Embedding Models" Apr 15, 2024

tan-yong-sheng commented Jul 8, 2024

Hi @Filimoa,

I originally commented in #10 (comment), but this issue seems like a better place for the suggestion:

I would like to suggest adding support for LiteLLM.

LiteLLM is an open-source project that unifies API calls to 100+ LLMs (including Anthropic, Cohere, Ollama, etc.) behind an OpenAI-compatible format: https://github.com/BerriAI/litellm

I believe integrating LiteLLM would be a great enhancement, because people could then switch to their preferred embedding model API for semantic processing instead of being limited to OpenAI's. Thanks.

For example, if they use the LiteLLM Python client directly (without self-hosting a LiteLLM proxy), the code could look like this, which stays very close to the OpenAI Python client format:

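A minimal sketch of such a call, assuming the `litellm` package is installed (the Cohere model name and input text are only examples):

```python
import os
from litellm import embedding

# Provider API key for the example Cohere model; any provider supported
# by LiteLLM could be used here instead.
os.environ["COHERE_API_KEY"] = "your-cohere-api-key"

# Same call shape as the OpenAI client: LiteLLM routes the request to the
# provider named in the model string.
response = embedding(
    model="cohere/embed-english-v3.0",
    input=["good morning from litellm"],
)
print(response)
```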

Reference: https://github.com/BerriAI/litellm

If someone self-hosts a LiteLLM proxy, they can call any supported LLM API in an OpenAI-compatible format through that proxy; the code could look as follows:

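A minimal sketch, assuming a LiteLLM proxy is already running at http://0.0.0.0:4000 and exposes a hypothetical "my-embedding-model" alias in its config:

```python
import openai

# Point the standard OpenAI client at the self-hosted LiteLLM proxy;
# the proxy holds the real provider credentials.
client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000",
)

# An ordinary OpenAI embeddings call; the proxy forwards it to whichever
# provider backs the "my-embedding-model" alias.
response = client.embeddings.create(
    model="my-embedding-model",
    input=["good morning from litellm"],
)
print(response.data[0].embedding[:5])
```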

Reference: https://litellm.vercel.app/docs/providers/azure_ai#passing-additional-params---max_tokens-temperature

There are also quite a few projects that already use LiteLLM to call models from different providers: https://litellm.vercel.app/docs/project


3 participants