fixed: OverflowError: out of range integral type conversion attempted #2206
base: main
Conversation
@qgallouedec Please review my PR, I'm too excited...
Hey, I'm not having any issue with
I tried with all the latest dependencies but still faced this issue.
Can you share your system info?
This is the error I got:
I can't really reproduce it since you're using a local model. Do you get the same error with a remote model?
I tried with local models only, not with cloud models.
I did, and everything works as expected.
Please share your
I tried by running it like this:
and by specifying the device, the inference was very fast.
And this is fixed using the accelerate library.
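For context, here is a minimal sketch of what "specifying the device" could look like when loading the model through transformers; the model name and generation call are illustrative placeholders and are not taken from this thread:

```python
# Hypothetical sketch, not from this PR: pin the model to one GPU explicitly
# instead of relying on default placement on a multi-GPU machine.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder model name
device = "cuda:0" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```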
What does this PR do?
This PR fixes two issues by using the Accelerate library from Hugging Face. The issues mainly arise when there are 2 or more GPUs in the system and device placement is not handled properly, so Accelerate is used to handle it. The issues show up when executing the commands given below:
and
Fixes issue #2205
There are two issues which it fixes:
and
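The actual diff is not shown in this excerpt. As a rough, hedged sketch of the kind of Accelerate-based device handling the description refers to on multi-GPU systems (the model name and loading pattern are assumptions, not taken from the PR):

```python
# Hedged illustration of per-process device placement with accelerate's
# PartialState; this is a sketch, not the PR's actual change.
from accelerate import PartialState
from transformers import AutoModelForCausalLM

state = PartialState()  # resolves the local process index when launched with accelerate/torchrun

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct",          # placeholder model name
    device_map={"": state.process_index},  # load one full copy of the model on this process's GPU
)
```

With this pattern, each process launched by accelerate gets its own GPU, rather than every process defaulting to the same device or splitting a small model across all GPUs.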
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.