Hello, I apologize for the delayed response.
To train models with Vicuna-7B as the backbone, a GPU with more than 24 GB of VRAM is required.
To train models with FlanT5-XL as the backbone, a GPU with 24 GB of VRAM is sufficient.
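The gap between the two requirements can be sanity-checked with a back-of-envelope calculation. This sketch assumes (the thread does not confirm it) that the LLM backbone is kept frozen in fp16, so it contributes only weight memory; activations, the trainable layers, and optimizer states add further overhead on top of these floors:

```python
def frozen_backbone_weight_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """GiB needed just to hold frozen fp16 weights (2 bytes per parameter)."""
    return num_params * bytes_per_param / 1024**3

# Rough parameter counts: Vicuna-7B ~7e9, FlanT5-XL ~3e9.
vicuna_7b = frozen_backbone_weight_gib(7e9)
flant5_xl = frozen_backbone_weight_gib(3e9)

print(f"Vicuna-7B frozen fp16 weights: {vicuna_7b:.1f} GiB")  # ~13.0 GiB
print(f"FlanT5-XL frozen fp16 weights: {flant5_xl:.1f} GiB")  # ~5.6 GiB
```

Under these assumptions, the Vicuna-7B weights alone already consume most of a 24 GB card, which is consistent with the answer above that more than 24 GB is needed once training overhead is included.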
@waitzkin thanks for your great work. I have a machine with 4 × A6000 (49 GB each). Is that enough to train Vicuna-7B? I'm not sure whether the model needs to be explicitly split across the GPUs. Thanks in advance for clarifying. Also, what is the memory size of your A100 (80 GB or 90 GB?), and how much memory does training consume?