[LLM] Where to run the LLaMa Notebook? #1312
Unanswered
mahimairaja asked this question in Q&A
Replies: 0 comments
Ideal Place to Quantize the LLM to IR
I am impressed by the results produced by the quantized IR version of LLaMa-2, so I tried to run the 254-llm-chatbot.ipynb notebook on the free-tier T4 GPUs from Colab and Kaggle. However, I am hitting an out-of-memory error and the session crashes. What would be the ideal place to run this notebook, and how much memory, CPU, and GPU would be required?
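A rough back-of-the-envelope estimate (the 7B parameter count and dtypes below are assumptions, not figures from the notebook) suggests why the free tier runs out of memory: conversion first materializes the full-precision weights, which alone can exceed a T4's 16 GB of VRAM before any compression happens.

```python
def weights_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate size of model weights in decimal gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# Assuming LLaMa-2-7B with ~7e9 parameters (hypothetical round number)
N = 7e9
print(weights_gb(N, 32))  # fp32 weights during export: 28.0 GB
print(weights_gb(N, 16))  # fp16 weights: 14.0 GB
print(weights_gb(N, 4))   # int4-compressed IR: 3.5 GB
```

So the quantized IR itself fits comfortably in 16 GB, but the intermediate fp32/fp16 stages of the export step are what crash a free-tier session; a machine with roughly 32 GB or more of RAM would be needed for the conversion under these assumptions.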