feat: use gpt-4o-mini instead of gpt-3.5-turbo! #108
Conversation
It's cheaper and has better performance.
Walkthrough
The changes in this pull request focus on updating the AI models used in subquery.py.
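For context, the swap itself is typically a one-line model change. Below is a minimal sketch, assuming subquery.py builds its query engines with llama_index's OpenAI wrapper; the import path is the llama_index 0.10+ location, and the per-source engine construction is elided:

```python
from llama_index.llms.openai import OpenAI  # assumed llama_index 0.10+ import path

# Before this PR the engines presumably pointed at gpt-3.5-turbo;
# after the change the same construction targets gpt-4o-mini.
llm = OpenAI(model="gpt-4o-mini", temperature=0)

# Hypothetical usage for illustration only: each data source
# (e.g. the discourse source mentioned in the summary) would have
# its query engine constructed with this llm.
```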
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
subquery.py (1)
Line range hint 207-211: Review model choice for question generation
While GPT-4 is a valid model, using it for question generation might be unnecessarily expensive. The PR's objective mentions cost savings, but this change would increase costs. Consider whether GPT-3.5-turbo would be sufficient for question generation, as it is typically adequate for simpler tasks like question decomposition.
Consider these factors:
- Cost: GPT-4 is significantly more expensive than GPT-3.5-turbo
- Performance needs: Is the improved performance of GPT-4 necessary for question generation?
- Consistency: Using different model tiers for different components might lead to inconsistent behavior
If cost optimization is a priority, consider:
```diff
 question_gen = GuidanceQuestionGenerator.from_defaults(
-    guidance_llm=OpenAIChat("gpt-4"),
+    guidance_llm=OpenAIChat("gpt-3.5-turbo"),
     verbose=False,
     prompt_template_str=DEFAULT_GUIDANCE_SUB_QUESTION_PROMPT_TMPL,
 )
```
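A third option, consistent with the PR's stated cost goal, would be to point the question generator at gpt-4o-mini as well. The sketch below reuses the names already present in subquery.py (OpenAIChat, GuidanceQuestionGenerator, DEFAULT_GUIDANCE_SUB_QUESTION_PROMPT_TMPL) and assumes their existing imports:

```python
# Sketch only: keeps every component on the same low-cost model, assuming
# the guidance OpenAIChat wrapper accepts the model name as its first
# argument, as in the snippet above.
question_gen = GuidanceQuestionGenerator.from_defaults(
    guidance_llm=OpenAIChat("gpt-4o-mini"),
    verbose=False,
    prompt_template_str=DEFAULT_GUIDANCE_SUB_QUESTION_PROMPT_TMPL,
)
```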
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
subquery.py (1 hunks)
🔇 Additional comments (1)
subquery.py (1)
Line range hint 203-211: Verify changes with integration tests
Given that these model changes affect the core query functionality across multiple data sources, please ensure the following (a minimal test sketch follows the list):
- Integration tests are updated and passing with the new models
- Response quality is validated across different types of queries
- Error handling is tested with various scenarios
- Performance metrics are collected to justify the model changes
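As a starting point, an integration-style smoke test might look like the sketch below; build_query_engine is a hypothetical stand-in for however subquery.py actually constructs its engine, so adapt the call to the module's real entry point:

```python
import pytest

# Hypothetical import: replace with the real constructor exposed by subquery.py.
from subquery import build_query_engine


@pytest.mark.integration  # custom marker; register it in pytest config if used
def test_gpt_4o_mini_answers_basic_query():
    """Smoke-test the engine after the model swap: the call should succeed
    and return a non-empty answer for a simple, known question."""
    engine = build_query_engine()  # assumed to default to gpt-4o-mini after this PR
    response = engine.query("What data sources does this project index?")
    assert response is not None
    assert len(str(response)) > 0
```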
Summary by CodeRabbit
- Updates affecting the discourse platform, ensuring clarity on functionality availability.