Which version of trlx and transformers are you using? #7
Comments
In addition, it appears that the trlx module used for training does not support the MistralForCausalLM model.
I have met the same problem.
This may be a version problem; please report the version of transformers you are using.
transformers 4.41.2
What about the version of sentence_transformers?
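For reporting this, a minimal sketch like the one below prints the installed versions of the packages discussed in this thread (the package set is an assumption based on the conversation above):

```python
import importlib.metadata

# Report the installed version of each package relevant to this issue.
for pkg in ("trlx", "transformers", "sentence-transformers"):
    try:
        print(pkg, importlib.metadata.version(pkg))
    except importlib.metadata.PackageNotFoundError:
        print(pkg, "is not installed")
```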
I really admire you for completing a great job; however, I also encountered the same problem when running your project on Colab. I solved it by changing the transformers and sentence-transformers versions, but then ran into a new problem: RuntimeError: ffi_prep_cif_var failed. Could you please tell me all of your environment's package versions? Thank you!
The output of `!pip freeze` would be helpful, thanks!
No matter whether I load a local model or the gpt2-imdb model from Hugging Face, the following error is reported:

```
ValueError: GPTModelBranch does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: https://github.com/huggingface/transformers/issues/28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")
```

This seems to be a problem caused by the transformers version, but mine is already updated to the latest release. Which version of trlx and transformers are you using?
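As a stopgap, the error message itself suggests forcing the eager attention implementation when loading the model. A minimal sketch of that workaround, assuming the gpt2-imdb model mentioned above is the lvwerra/gpt2-imdb checkpoint (the exact hub id is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Force the eager attention path instead of
# torch.nn.functional.scaled_dot_product_attention,
# which GPTModelBranch does not support.
model_id = "lvwerra/gpt2-imdb"  # assumed hub id for the gpt2-imdb model
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="eager")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

Note that if the model is instantiated inside trlx rather than by user code, this keyword would have to be threaded through trlx's own model-loading path, so whether it resolves the error here is untested.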