
🧠 feat: Implement Multi-Model Support and Extended API Integration #651

Conversation

Sunwood-ai-labs

Multi-Model Support and Extended API Integration

Overview

This pull request introduces multi-model support and extends our API integration to include additional AI providers. The changes encompass the following major features:

1. Model Selection UI

  • Added a dropdown in the chat interface to allow users to select different AI models.
  • Options include Claude 3.5, GPT-4o, o1-preview, o1-mini, AWS Bedrock Claude, and Gemini (currently disabled).
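As a rough illustration, the selector could be a controlled `<select>` bound to the `selectedModel`/`setSelectedModel` props described in the detailed changes below; the component name and option values here are illustrative, not the exact identifiers in the diff:

```tsx
// Hypothetical sketch of the model dropdown: a controlled <select> that
// reports the chosen model back to the chat component via setSelectedModel.
interface ModelSelectorProps {
  selectedModel: string;
  setSelectedModel: (model: string) => void;
}

export function ModelSelector({ selectedModel, setSelectedModel }: ModelSelectorProps) {
  return (
    <select value={selectedModel} onChange={(event) => setSelectedModel(event.target.value)}>
      <option value="claude-3-5-sonnet">Claude 3.5</option>
      <option value="gpt-4o">GPT-4o</option>
      <option value="o1-preview">o1-preview</option>
      <option value="o1-mini">o1-mini</option>
      <option value="bedrock-claude">AWS Bedrock Claude</option>
      <option value="gemini" disabled>
        Gemini (coming soon)
      </option>
    </select>
  );
}
```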

2. Enhanced API Integration

  • Implemented support for OpenAI and AWS Bedrock APIs alongside our existing Anthropic integration.
  • Added functions to retrieve API keys and credentials for multiple providers.
  • Created new model creation and text streaming functions for OpenAI and AWS Bedrock.
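A minimal sketch of what the model factories might look like, assuming the Vercel AI SDK provider packages; `getOpenAIModel` and `getBedrockModel` are the functions named in this PR, while the exact option shapes and credential plumbing are assumptions:

```ts
// Sketch only: provider factories built on the Vercel AI SDK.
// Option shapes and credential handling are assumptions, not the exact diff.
import { createOpenAI } from '@ai-sdk/openai';
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';

export function getOpenAIModel(apiKey: string, model: string) {
  // Bind a provider instance to the user's key, then select the model.
  const openai = createOpenAI({ apiKey });
  return openai(model);
}

export function getBedrockModel(
  region: string,
  accessKeyId: string,
  secretAccessKey: string,
  model: string,
) {
  const bedrock = createAmazonBedrock({ region, accessKeyId, secretAccessKey });
  return bedrock(model);
}
```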

3. Backend Changes

  • Updated the chat API route to handle different model selections and use the appropriate streaming function.
  • Modified the enhancer API to use OpenAI's streaming function.
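The route-level branching could look roughly like the sketch below; the helper names (`streamText`, `streamTextOpenAI`, `streamTextBedrock`) come from this PR, while the import paths, model-id prefixes, env shape, and response handling are assumptions:

```ts
// Illustrative only: dispatch to the provider-specific streaming helper based
// on the model the user selected in the UI. Import paths and the Cloudflare
// load-context shape are assumed, not taken from the diff.
import { type ActionFunctionArgs } from '@remix-run/cloudflare';

import { streamText, streamTextOpenAI, streamTextBedrock } from '~/lib/.server/llm/stream-text';

export async function action({ context, request }: ActionFunctionArgs) {
  const { messages, selectedModel } = (await request.json()) as {
    messages: { role: 'user' | 'assistant'; content: string }[];
    selectedModel: string;
  };

  // Assumes the load context exposes Cloudflare env bindings (API keys, AWS credentials).
  const env = context.cloudflare.env;

  // Pick the streaming helper that matches the selected provider.
  const result =
    selectedModel.startsWith('gpt-') || selectedModel.startsWith('o1-')
      ? await streamTextOpenAI(messages, env)
      : selectedModel.startsWith('bedrock-')
        ? await streamTextBedrock(messages, env)
        : await streamText(messages, env); // default: the existing Anthropic path

  return new Response(result.toAIStream(), {
    status: 200,
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });
}
```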

4. Dependency Updates

  • Added the AWS Bedrock SDK as a new dependency.

Impact

These changes provide a more flexible and extensible architecture for our AI chat application, allowing easy integration of multiple AI models and providers.

Detailed Changes

- Implement a dropdown to select different AI models
- Update component props to include selectedModel and setSelectedModel
- Add options for Claude 3.5, GPT-4o, o1-preview, o1-mini, AWS Bedrock Claude, and Gemini (disabled)
- Add selectedModel state and pass it to ChatImpl component
- Update useChat hook to include selectedModel in API request body
- Refactor message appending logic to include selectedModel
- Implement getOpenAIAPIKey function for OpenAI API
- Add getAWSCredentials function for AWS Bedrock
- Define MAX_TOKENS_BEDROCK constant with value 4096
- Add getOpenAIModel function for OpenAI models
- Implement getBedrockModel function for AWS Bedrock
- Update imports to include new SDK components
- Implement streamTextOpenAI function for OpenAI models
- Add streamTextBedrock function for AWS Bedrock
- Update imports and type definitions
- Modify chatAction to handle different model selections
- Implement conditional logic to use appropriate streaming function based on selected model
- Update error handling and response headers
- Change import to use streamTextOpenAI instead of streamText
- Add @ai-sdk/amazon-bedrock version ^0.0.30 to dependencies
- Add entries for AWS SDK and related dependencies
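For context, the provider-specific streaming wrappers could be thin adapters around the shared `streamText` call from the `ai` package, as sketched below; the function and constant names come from the list above, while the module layout, the credential helper's return shape, and the default model ids are assumptions:

```ts
// Sketch: thin wrappers binding each provider's model factory to the shared
// streamText call from the `ai` package. Module paths, the credential helper
// return shape, and default model ids are assumptions.
import { streamText as _streamText, type CoreMessage } from 'ai';

import { getOpenAIModel, getBedrockModel } from './model';
import { getOpenAIAPIKey, getAWSCredentials } from './api-key';

export const MAX_TOKENS_BEDROCK = 4096;

export function streamTextOpenAI(
  messages: CoreMessage[],
  env: Record<string, string>,
  modelId = 'gpt-4o',
) {
  return _streamText({
    model: getOpenAIModel(getOpenAIAPIKey(env), modelId),
    messages,
  });
}

export function streamTextBedrock(
  messages: CoreMessage[],
  env: Record<string, string>,
  modelId = 'anthropic.claude-3-5-sonnet-20240620-v1:0',
) {
  const { region, accessKeyId, secretAccessKey } = getAWSCredentials(env);

  return _streamText({
    model: getBedrockModel(region, accessKeyId, secretAccessKey, modelId),
    maxTokens: MAX_TOKENS_BEDROCK,
    messages,
  });
}
```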
@Sunwood-ai-labs changed the title from "🧠 Implement Multi-Model Support and Extended API Integration" to "🧠 feat: Implement Multi-Model Support and Extended API Integration" on Oct 16, 2024
@Sunwood-ai-labs (Author)

I will remake this based on #531!
