Load models from huggingface instead of blob storage #22
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@ Coverage Diff @@
##             main      #22   +/-   ##
=======================================
  Coverage   95.83%   95.83%
=======================================
  Files           3        3
  Lines         120      120
=======================================
  Hits          115      115
  Misses          5        5
☔ View full report in Codecov by Sentry.
I think the changes here all look good; the only concern is the use of PyPI tokens rather than trusted publishing.
I agree with @lsetiawan. Everything looks good, just take a look at the PyPI publishing.
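For context, a minimal sketch of what trusted publishing could look like in a GitHub Actions release workflow, in place of a stored API token; the workflow name, trigger, and build step below are assumptions, not taken from this repository:

```yaml
# Hypothetical release workflow using PyPI trusted publishing (OIDC)
# instead of a stored API token. Names and trigger are illustrative.
name: publish

on:
  release:
    types: [published]

jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    # The id-token permission lets the job request a short-lived OIDC
    # token that PyPI verifies against the configured trusted publisher.
    permissions:
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - name: Build distributions
        run: pipx run build
      - name: Publish to PyPI
        # No password/token input is needed with trusted publishing.
        uses: pypa/gh-action-pypi-publish@release/v1
```

With trusted publishing, PyPI accepts the workflow's short-lived OIDC token, so no long-lived API token has to be stored as a repository secret or rotated.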
During testing I found that loading models from the HuggingFace Hub is faster, likely due to additional optimization and parallel downloads compared to loading from blob storage mounted as a file system. This also simplifies testing different models, since they don't need to be copied to blob storage first.
Some opportunistic/small changes too.
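As an illustration of the change described above, a minimal sketch of the two loading paths, assuming a transformers-style model; the model ID and mount path below are hypothetical, not this repository's actual configuration:

```python
# Hypothetical sketch: loading a model from the HuggingFace Hub
# versus from a blob-storage mount. Model ID and path are illustrative.
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "sentence-transformers/all-MiniLM-L6-v2"  # assumed model ID

# Hub path: files are downloaded (in parallel) into the local HF cache
# on first use and reused on later loads, with no copy step beforehand.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Blob-storage path (previous approach): read the same files through a
# file-system mount, which adds translation overhead on every read.
# model = AutoModel.from_pretrained("/mnt/blob/models/my-model")  # hypothetical mount
```

When loading from the Hub, `from_pretrained` also accepts a `revision` argument (a branch name or commit hash), which can help keep test runs reproducible across model updates.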