
Backend Installation Slow Due to Heavy Dependencies Like PyTorch When Using Only OpenAI Endpoints #845

Open
Sinnaeve opened this issue Nov 5, 2024 · 1 comment · May be fixed by #832
Labels: duplicate (This issue or pull request already exists), improvement

Comments


Sinnaeve commented Nov 5, 2024

Hi,

On the DEV branch, deploying the app locally takes a long time, in particular the backend.
The backend's requirements.txt contains packages (sentence-transformers, effdet) that pull in heavy dependencies such as PyTorch, but if I only use an OpenAI LLM and embedding model, I'm not sure what those packages are used for.
So my question is two-fold:

  • Why does the backend need packages such as PyTorch?
  • Is there a way to install the backend faster, e.g. a variant that skips the heavy packages when we only want to use endpoints such as OpenAI for the LLM and embeddings?
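On the second point, one common pattern (a sketch only — not this project's actual code or config keys) is to make the heavy model packages optional and guard their imports at runtime, so an OpenAI-only deployment never needs PyTorch installed:

```python
import importlib.util


def pick_embedding_backend() -> str:
    """Return which embedding backend is usable in this environment.

    Prefers a local sentence-transformers model when its (heavy,
    PyTorch-based) dependencies are installed; otherwise falls back to
    the OpenAI endpoint. The backend names are illustrative.
    """
    if importlib.util.find_spec("sentence_transformers") is not None:
        return "sentence-transformers"
    return "openai"


print(pick_embedding_backend())
```

Paired with a split requirements layout (e.g. a hypothetical `requirements-heavy.txt` installed only when local models are wanted), this kind of guard lets the OpenAI-only install stay lightweight.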
@kartikpersistent kartikpersistent linked a pull request Nov 6, 2024 that will close this issue
@kartikpersistent kartikpersistent added the duplicate label Nov 6, 2024
@kartikpersistent (Collaborator) commented:

#620


4 participants