
Add function calling support for models that support it. #3

Open
5 tasks done
vkameswaran opened this issue Apr 24, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

@vkameswaran

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Ideas in Discussions and didn't find any similar feature requests.
  • I searched previous Issues and didn't find any similar feature requests.

🔖 Feature description

I want us to use the LiteLLM function calling spec instead of our hybrid GPT approach for models that support function calling natively, e.g. Cohere, Anthropic, Fireworks.
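For reference, a minimal sketch of what using the LiteLLM spec could look like. The tool schema follows the OpenAI-style function-calling format that LiteLLM normalizes across providers; the tool name, parameters, and helper function here are hypothetical examples, not part of the codebase:

```python
# Hypothetical example tool in the OpenAI-style schema that LiteLLM accepts.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

def ask_with_tools(model: str, prompt: str):
    """Call a model through LiteLLM with the tool available.

    Requires `pip install litellm` plus provider API keys; imported lazily
    so the schema above can be inspected without the dependency installed.
    """
    import litellm  # third-party; assumed installed

    return litellm.completion(
        model=model,  # e.g. "anthropic/<model-name>" or "command-r"
        messages=[{"role": "user", "content": prompt}],
        tools=[get_weather_tool],
        tool_choice="auto",
    )
```

Because LiteLLM translates this one schema to each provider's native tool-use format, the same request shape would work across Cohere, Anthropic, Fireworks, etc.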

Acknowledgements

  • My issue title is concise and descriptive.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.
@vkameswaran added the enhancement (New feature or request) label on Apr 24, 2024

To implement function calling support for providers such as Cohere, Anthropic, and Fireworks (routed through LiteLLM), you'll need to adapt the existing infrastructure for LLM integration and custom tool creation. Here's a high-level approach based on the current codebase:

  1. Extend LLM Provider Support: In libs/superagent/app/agents/langchain.py, add new classes for each of the new LLM providers that support function calling. Use ChatOpenAI and AzureChatOpenAI as references for how to structure these classes.

  2. Adapt LLM Initialization: Modify the _get_llm method to initialize the correct LLM provider class based on the provider attribute of the LLM model. This might involve checking for support of function calling and choosing the appropriate class.

  3. Custom Tool Integration: Utilize the guide in /fern/mdx/sdk/local_tools.mdx for creating custom tools. For models supporting function calling, ensure that the metadata for tools includes necessary information for function invocation.

  4. Invoke Function Calls: In the agent's invocation logic, specifically in the method where the agent processes input and decides on actions, include logic to detect when a function call should be made to the LLM and execute it accordingly.
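Steps 1 and 2 above could be sketched roughly as follows. The class and registry names are hypothetical stubs standing in for the real LangChain provider classes, not the actual Superagent code:

```python
# Hypothetical sketch of a provider-aware LLM dispatch (steps 1-2).
# The stub classes stand in for LangChain chat model classes such as
# ChatOpenAI / AzureChatOpenAI and their new-provider counterparts.

class ChatOpenAI:
    def __init__(self, model: str):
        self.model = model

class ChatAnthropic:
    def __init__(self, model: str):
        self.model = model

class ChatCohere:
    def __init__(self, model: str):
        self.model = model

# Providers whose chat models support native function/tool calling.
PROVIDER_CLASSES = {
    "OPENAI": ChatOpenAI,
    "ANTHROPIC": ChatAnthropic,
    "COHERE": ChatCohere,
}

def get_llm(provider: str, model: str):
    """Return an initialized chat model for `provider`, or raise if the
    provider has no function-calling-capable class registered."""
    cls = PROVIDER_CLASSES.get(provider.upper())
    if cls is None:
        raise ValueError(f"No function-calling support for provider: {provider}")
    return cls(model)
```

A registry keyed by provider name keeps the `_get_llm`-style dispatch flat: adding a new function-calling provider becomes one class plus one dictionary entry.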

This approach leverages the existing infrastructure for LLM integration and custom tool functionality, extending it to accommodate the specific requirements of function calling with the mentioned models.
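Step 4, detecting and executing tool calls inside the agent's invocation logic, might look like this in outline. The response shape follows the OpenAI/LiteLLM message format; the registry structure and function names are hypothetical:

```python
import json

def execute_tool_calls(message: dict, tool_registry: dict) -> list[dict]:
    """Run each tool call requested by the model and build `tool`-role replies.

    `message` is the assistant message from an OpenAI/LiteLLM-style response;
    `tool_registry` maps tool names to Python callables (the custom tools).
    """
    results = []
    for call in message.get("tool_calls") or []:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        fn = tool_registry[name]  # KeyError surfaces unknown tools early
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": str(fn(**args)),
        })
    return results
```

The returned `tool` messages would then be appended to the conversation and sent back to the model so it can produce its final answer, which is the standard request/execute/respond loop for function calling.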

References

greptileai/superagent/fern/mdx/sdk/local_tools.mdx
greptileai/superagent/libs/superagent/app/agents/langchain.py


@greptileai greptileai deleted a comment from greptile-apps bot Apr 29, 2024