feat: Added QLoRA fine-tuning pipeline #40
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@           Coverage Diff            @@
##             main      #40   +/-   ##
=========================================
+ Coverage   98.80%   98.98%   +0.17%
=========================================
  Files           6        8       +2
  Lines         252      296      +44
=========================================
+ Hits          249      293      +44
  Misses          3        3
Just some stylistic comments, nothing major. Everything else looks good and makes sense.
Initial code for model fine-tuning using QLoRA.
This has run successfully on a private A40 GPU (44 GB). However, it throws an out-of-memory (OOM) exception on a 16 GB V100 in AzureML, even though the model should theoretically fit.
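A back-of-envelope check supports the "should theoretically fit" claim. The sketch below is a rough estimate only, and the model size (7B parameters) and trainable LoRA parameter count (~40M) are assumptions, since the PR does not name the base model or adapter configuration:

```python
# Rough static GPU memory estimate for QLoRA fine-tuning.
# Assumes a 7B-parameter base model quantized to 4 bits, with LoRA
# adapters, their gradients in fp16, and Adam optimizer state in fp32.
# (Model and adapter sizes are hypothetical, not taken from the PR.)

GIB = 1024 ** 3

def qlora_static_memory_gib(n_params: float, lora_params: float) -> float:
    base = n_params * 0.5        # 4-bit quantized weights: 0.5 bytes/param
    adapters = lora_params * 2   # fp16 LoRA adapter weights
    grads = lora_params * 2      # fp16 gradients (adapters only)
    optim = lora_params * 8      # Adam: two fp32 moments per trainable param
    return (base + adapters + grads + optim) / GIB

# ~7B base model, ~40M trainable LoRA parameters
estimate = qlora_static_memory_gib(7e9, 40e6)
print(f"static footprint ≈ {estimate:.1f} GiB")
```

The static footprint comes out well under 16 GB, so the V100 OOM most likely stems from activation memory, which scales with batch size and sequence length and is not counted above; reducing the micro-batch size or enabling gradient checkpointing would be the usual first things to try.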
Issue #17