Release Upper Limit on Torch, Transformers, Accelerate, and Others #17
We found out there is really no need to upper bound the torch dependency; in our case we were getting stuck only because of this commit: pytorch/pytorch#121635.
For transformers, we just have to be wary of the sliding-window mask issue with SDPA and keep track of it to see when it gets fixed.
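As a hedged illustration (not something stated in the thread), one common interim workaround while the SDPA sliding-window mask issue remains open is to force eager attention when loading an affected model; the model id below is purely illustrative.

```python
# Minimal sketch: avoid the SDPA sliding-window mask path by forcing eager
# attention. The model id is illustrative; any sliding-window model applies.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # example of a sliding-window model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="eager",  # bypass SDPA until the mask issue is fixed
)
```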
FMS has fixed the TRL issue in foundation-model-stack/fms-hf-tuning#213.
* add build and publish and bump version 0.1.0
* released torch and trl upper bounds #17
* update
* undo version bump and minor edit

Signed-off-by: Yu Chin Fabian Lim <[email protected]>
I think we need a lower limit on the … Update: we also upper bound to …
GPTQModel has a lower bound on transformers here, but `framework` is bounded at …
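For reconciling the two constraints, here is a small sketch using the `packaging` library to check whether a candidate transformers version satisfies both the framework's bound and GPTQModel's lower bound. The specifier strings are hypothetical placeholders, since the exact versions are truncated above.

```python
# Check a candidate version against two (hypothetical) specifier sets.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

framework_spec = SpecifierSet("<4.40")   # hypothetical framework upper bound
gptqmodel_spec = SpecifierSet(">=4.38")  # hypothetical GPTQModel lower bound

candidate = Version("4.39.0")
ok = candidate in framework_spec and candidate in gptqmodel_spec
print(f"transformers {candidate} acceptable to both: {ok}")
```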
We have removed the upper bound on transformers in #58.
Currently the `torch` dependency in `framework` is upper bounded as `"< 2.3"`; however, newer `accelerate` versions have problems supporting torch `2.2`. The latest `numpy` versions (`>= 2.0`) also have incompatibilities with the current torch version and are bounded in #42. Hence, we should consider releasing the upper bound soon.

We can also consider releasing the upper limit on `transformers` and `accelerate`.
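As a rough illustration of the pins under discussion (not the project's actual packaging file, which may well be a `pyproject.toml`), a setuptools-style sketch might look like the following; every specifier other than torch's `< 2.3` cap is an assumption.

```python
# Sketch of the dependency pins being discussed; names and versions other
# than the "< 2.3" torch cap are assumptions for illustration only.
from setuptools import setup

setup(
    name="framework",
    install_requires=[
        # current state: torch capped below 2.3
        # "torch >= 2.2, < 2.3",
        # proposed: release the upper bound, keep a lower bound as needed
        "torch >= 2.2",
        "transformers",
        "accelerate",
        "numpy < 2.0",  # still capped per #42 until torch compatibility lands
    ],
)
```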