⚠️ Please check that this feature request hasn't been suggested before.
I searched previous Ideas in Discussions and didn't find any similar feature requests.
I searched previous Issues and didn't find any similar feature requests.
🔖 Feature description
I want us to use the LiteLLM function calling spec instead of our hybrid GPT approach for models that support function calling (e.g. Cohere, Anthropic, Fireworks).
Acknowledgements
My issue title is concise and descriptive.
I have searched the existing issues to make sure this feature has not been requested yet.
I have provided enough information for the maintainers to understand and evaluate this request.
To implement function calling support via LiteLLM for providers such as Cohere, Anthropic, and Fireworks, you'll need to adapt the existing infrastructure for LLM integration and custom tool creation. Here's a high-level approach based on the current codebase:
1. Extend LLM Provider Support: In libs/superagent/app/agents/langchain.py, add new classes for each of the new LLM providers that support function calling. Use ChatOpenAI and AzureChatOpenAI as references for how to structure these classes.
2. Adapt LLM Initialization: Modify the _get_llm method to initialize the correct LLM provider class based on the provider attribute of the LLM model. This might involve checking whether the model supports function calling and choosing the appropriate class.
3. Custom Tool Integration: Use the guide in /fern/mdx/sdk/local_tools.mdx for creating custom tools. For models supporting function calling, ensure that the metadata for tools includes the information needed for function invocation.
4. Invoke Function Calls: In the agent's invocation logic, specifically in the method where the agent processes input and decides on actions, add logic to detect when a function call should be made to the LLM and execute it accordingly.
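Steps 1–3 above could be sketched roughly as follows. This is an illustrative outline, not actual Superagent code: the `FUNCTION_CALLING_MODELS` set, `supports_function_calling`, and `build_request` are hypothetical names, and the fallback branch stands in for the current hybrid prompt-based approach.

```python
# Hypothetical sketch: attach an OpenAI/LiteLLM-style `tools` payload only when
# the target model natively supports function calling; otherwise fall back to
# describing the tools in the prompt (the "hybrid GPT" approach). The model
# identifiers below are examples only.
FUNCTION_CALLING_MODELS = {
    "gpt-4",
    "gpt-3.5-turbo",
    "claude-3-opus-20240229",                      # Anthropic
    "command-r",                                   # Cohere
    "accounts/fireworks/models/firefunction-v1",   # Fireworks
}

def supports_function_calling(model: str) -> bool:
    """Return True if the model accepts an OpenAI-style `tools` parameter."""
    return model in FUNCTION_CALLING_MODELS

def build_request(model: str, messages: list, tools: list) -> dict:
    """Assemble completion kwargs, using the native tool spec when available."""
    request = {"model": model, "messages": messages}
    if supports_function_calling(model):
        request["tools"] = tools  # unified LiteLLM/OpenAI tool spec
    else:
        # Fallback: inject tool descriptions into a system prompt instead.
        tool_names = "\n".join(t["function"]["name"] for t in tools)
        system = {"role": "system", "content": f"Available tools:\n{tool_names}"}
        request["messages"] = [system] + messages
    return request
```

The resulting dict could then be passed to a completion call, letting one code path serve both native and prompt-based tool use.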
This approach leverages the existing infrastructure for LLM integration and custom tool functionality, extending it to accommodate the specific requirements of function calling with the mentioned models.
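For step 4, the invocation logic might detect and execute a requested tool call along these lines. Again a minimal sketch under assumptions: the `local_tools` registry and the response-message shape (an OpenAI-style `tool_calls` list with JSON-encoded arguments) are illustrative, not the repo's actual API.

```python
import json

# Hypothetical tool registry; real tools would come from the agent's config.
local_tools = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def handle_response(message: dict):
    """If the model requested a tool call, run it and return the result.

    Returns None when no tool call is present, so the caller can fall back
    to using the plain text content of the reply.
    """
    for call in message.get("tool_calls", []):
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        if name in local_tools:
            return local_tools[name](**args)
    return None
```

In a full loop, the tool result would be appended to the message history as a tool-role message and the model re-invoked to produce its final answer.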