forked from meta-llama/llama
Issues: tloen/llama-int8
#18: 65B on multiple GPUs: CUDA out of memory with 4 × RTX A5000 (24 GB), 96 GB total (opened Mar 14, 2023 by scampion)
#10: Is it possible to save the smaller weights so it doesn't have to convert them each time? (opened Mar 7, 2023 by spullara)
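Issue #10 asks about caching the converted weights so the expensive int8 conversion only runs once. A minimal sketch of that caching pattern is below; `convert` stands in for whatever conversion routine the repo uses (not an actual function from tloen/llama-int8), and the pickle-based storage is an assumption purely for illustration (a real implementation would likely use `torch.save` on the converted state dict).

```python
import os
import pickle

def load_weights(weights_path, cache_path, convert):
    """Return converted weights, caching the conversion result on disk.

    `convert` is a hypothetical stand-in for the expensive int8
    conversion step; on the first call its output is pickled to
    `cache_path`, and later calls load the cached copy instead.
    """
    if os.path.exists(cache_path):
        # Cached conversion found: skip the expensive step entirely.
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    weights = convert(weights_path)  # expensive conversion, runs once
    with open(cache_path, "wb") as f:
        pickle.dump(weights, f)
    return weights
```

On the second and subsequent runs the cached file is loaded directly, so the conversion cost is paid only once per set of source weights.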