Switch to langchain #53
Hey @Yiannis128 cool repo - curious, why switch to LangChain? Was it to support Falcon?
Hi, short answer: yes. Slightly longer answer is that it will also allow for easily adding other types of LLMs, such as Google PaLM, that all have different APIs. LangChain makes the issue of designing, implementing, and testing a custom interface go away. The transition is mostly complete, with only a small set of features left to reach feature parity with pre-LangChain ESBMC-AI.
Curious - would this have helped - https://github.com/BerriAI/litellm?
Nice suggestion, I will have to check it out. The current LangChain integration is very generic (it only uses API calls for completions), so, in theory, I could replace the LangChain support with LiteLLM. I would have to see if:
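For context on what such a swap would involve, here is a minimal sketch of the two call styles side by side. It assumes an OpenAI-style chat model; the model name and prompt are placeholders rather than ESBMC-AI's actual configuration, and the LangChain imports reflect the 0.0.x-era API in use at the time.

```python
# Minimal sketch, not ESBMC-AI code: the same completion expressed with
# LangChain (current backend) and with LiteLLM (the suggested alternative).
# Model names and the prompt are placeholders; both calls expect
# OPENAI_API_KEY to be set in the environment.

# LangChain-style call (0.0.x-era API):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
reply = chat([HumanMessage(content="Explain this ESBMC counterexample: ...")])
print(reply.content)

# LiteLLM equivalent, using its OpenAI-compatible completion() entry point:
from litellm import completion

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain this ESBMC counterexample: ..."}],
)
print(response.choices[0].message.content)
```

Since both libraries ultimately hand back the completion text, the surrounding chat logic would not need to change much; the main question is whether LiteLLM covers the same set of providers the current setup exposes.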
Any specific features you're looking for / problems you're facing with the current implementation? Happy to submit a PR to help out here 😊
Hello, sorry for the late reply. I am looking for easy interop between the different APIs, along with the following:
Currently, LangChain is doing fine; however, please keep me informed :)
Hey @Yiannis128, no worries. I went through the code, and here's what I understand:
How do you expect users to make the call to falcon-7b or starcoder? Would they have to deploy it themselves via the Hugging Face Inference API before using ESBMC-AI? Both falcon-7b and starcoder are available pretty easily (1-click deploy) on other providers, e.g. Baseten, which also offers free credit.
Yeah, that's right, as the Hugging Face API is more generic due to the diverse range of models it supports.
There's no need to use one-click deploy for some models. The ones built into ESBMC-AI make calls to Hugging Face servers, as they're hosted free of charge. The only thing users need to provide is an API key for Hugging Face, as stated in the documentation. Larger models, as well as private models, need to be added through the config as custom AI models that are hosted elsewhere.
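To make the hosting point concrete, the snippet below sketches the kind of call a Hugging Face-hosted model involves when routed through LangChain's HuggingFaceHub wrapper. The repo ID, environment variable name, and generation parameters are illustrative assumptions, not the values ESBMC-AI actually uses.

```python
# Illustrative sketch only: calling a model hosted on the free Hugging Face
# Inference API through LangChain's HuggingFaceHub wrapper. The repo_id,
# environment variable name, and model_kwargs are assumptions for this example.
import os

from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="tiiuae/falcon-7b-instruct",  # example of a freely hosted model
    huggingfacehub_api_token=os.environ["HUGGINGFACE_API_KEY"],  # the user's HF key
    model_kwargs={"temperature": 0.1, "max_new_tokens": 512},
)

# The wrapper sends the prompt to Hugging Face's servers and returns the text.
print(llm("Fix the buffer overflow ESBMC reported in this C function: ..."))
```

Larger or private models would instead point the same kind of wrapper (or a custom endpoint) at wherever the user hosts them, which is what the "custom AI models in the config" route described above amounts to.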
Consider switching to langchain. Need to consider the positives and negatives. This requires some replacement of the backend, specifically in the `BaseChatInterface` class.

Requirements

- `optimize-code` and `fix-code` to be OutputParsers (sketched below). #62
- `fix-code`. #63
- Use summarization in LangChain for UserChat compress message stack. Handled in Reimplement Conversation Summarizer using LangChain for ChatInterface #57.
- `-r` or `--raw-output` arguments to verbose level 2 for extra output; since LangChain uses multiple aggregators of services, the concept of 'raw' output doesn't really apply.
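As a rough illustration of the OutputParser direction in #62, the sketch below shows a minimal LangChain parser that pulls the first fenced code block out of a model reply, which is roughly the post-processing `fix-code` and `optimize-code` need. The class and its behaviour are hypothetical; they are not the parsers that #62 eventually added.

```python
# Hypothetical sketch of what an OutputParser for fix-code / optimize-code
# could look like (the direction issue #62 points at); not actual ESBMC-AI code.
import re

from langchain.schema import BaseOutputParser

FENCE = "`" * 3  # literal triple backtick, built at runtime to avoid nesting fences


class CodeBlockParser(BaseOutputParser):
    """Return the first fenced code block in an LLM reply, or the raw reply
    when the model answered without markdown fences."""

    def parse(self, text: str) -> str:
        match = re.search(FENCE + r"(?:\w+)?\n(.*?)" + FENCE, text, re.DOTALL)
        return match.group(1).strip() if match else text.strip()


parser = CodeBlockParser()
reply = "Here is the fix:\n" + FENCE + "c\nint main(void) { return 0; }\n" + FENCE
print(parser.parse(reply))  # -> int main(void) { return 0; }
```

Plugged into a chain, such a parser would let the command hand the verifier a clean source file instead of scraping the model's prose by hand.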