Is there any practical use case for this project? #194
Is there any practical reason to waste time asking such a question? This is great work, valued by at least 5k people, so please drop the assumptions about others in your sentence and keep your own beliefs to yourself.
Yes, mister, there is a practical reason to ask: saving at least some other people's time.
I haven't seen a practical use case for it anywhere, mister. Rather than assuming people are using it and that the project is growing (which is not the case by any means), consider why people starred it: I did, and maybe others did too, because it showed a unique way to run big models. That is still nice, but it is by no means practical to use big models that way, with extremely delayed responses. So before speaking, find proof that supports your statement and share it here rather than assuming things. I am by no means disrespecting their work; I am just asking whether there is any practical use for it, because that's what matters most at the end of the day.
I'm asking because you and I both know very well that even if it is possible to run big models with lower VRAM usage, it is still not practical to use them that way: the inference speed is painfully slow. So is there any practical use case for this project where we get a real advantage, for example when running small (quantized) models, or anything else that might be beneficial for us?