I have two 3060 graphics cards with 24 GB of memory in total. Why is this error still reported?
Try running watch -n 1 "nvidia-smi" to check whether another process is already using the GPUs.
The error occurs while loading the model.
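For context, a rough estimate shows why loading can fail even with 24 GB of total VRAM: unless the weights are sharded across both cards, they must fit on a single device, and each 3060 has only 12 GB. A minimal back-of-the-envelope sketch (the 6B parameter count and fp16 precision are assumptions for illustration, not stated in this thread):

```python
def weight_memory_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed for model weights alone (fp16 = 2 bytes/param)."""
    return n_params * bytes_per_param / 1024**3

# A hypothetical 6B-parameter model in fp16:
print(round(weight_memory_gib(6e9), 1))  # ~11.2 GiB -- barely fits a single 12 GiB 3060
```

Optimizer states and activations come on top of this during training, so even a model whose weights fit on one card can run out of memory once training starts.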
Could this be related to the fact that I am running under WSL Ubuntu?
Try reducing model_max_length to 256, and keep per_device_train_batch_size and per_device_eval_batch_size at 1.
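Reducing model_max_length helps because the attention score matrices grow quadratically with sequence length, so halving the length cuts that part of activation memory by a factor of four. A rough sketch of the effect (the head and layer counts below are hypothetical, chosen only to illustrate the scaling):

```python
def attention_scores_gib(seq_len: int, n_heads: int = 32, n_layers: int = 32,
                         batch: int = 1, bytes_per: int = 2) -> float:
    """Memory for the raw attention score matrices: batch * heads * layers * seq_len^2."""
    return batch * n_heads * n_layers * seq_len**2 * bytes_per / 1024**3

for seq_len in (2048, 512, 256):
    print(seq_len, round(attention_scores_gib(seq_len), 3))
# 2048 -> 8.0 GiB, 512 -> 0.5 GiB, 256 -> 0.125 GiB
```

This is why dropping model_max_length from a few thousand tokens down to 256 can be the difference between an OOM and a run that fits, even with the batch size already at 1.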