
What is the minimum GPU requirement for training? #1

Open
iamwangyabin opened this issue Dec 14, 2023 · 4 comments

Comments

@iamwangyabin

No description provided.

@waitzkin
Collaborator

Hello, I apologize for the delayed response.
For training models with Vicuna-7B, a GPU with more than 24 GB of VRAM is required.
For training models with FlanT5-XL, a GPU with 24 GB of VRAM is sufficient.
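
As a quick check before launching training (a minimal sketch, not part of this repo's code), you can list each visible GPU's total VRAM with PyTorch and compare it against the numbers above:

```python
# Minimal sketch: report each visible GPU's total VRAM so you can check it
# against the requirements above (more than 24 GB for Vicuna-7B, 24 GB for FlanT5-XL).
import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    total_gb = props.total_memory / 1024**3
    print(f"GPU {i}: {props.name}, {total_gb:.1f} GB total VRAM")
```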

@llv22

llv22 commented Feb 20, 2024

@waitzkin thanks for your great work. I have a machine with 4 × A6000 (49G each); is that enough to train Vicuna-7B? I'm not sure whether the model needs to be explicitly split across the GPUs. Thanks in advance for the clarification. Also, what is the memory size of your A100, 80G or 90G, and what was its memory consumption during training?

@waitzkin
Collaborator

Memory consumption was about 40G, so a single A100 machine will be enough to train models with Vicuna-7B.
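
Since the peak was around 40G, a single ~48 GB A6000 should also fit the model without explicit splitting. If you still want automatic sharding across several GPUs, a minimal sketch using transformers with accelerate is shown below; the checkpoint name is an assumption and may differ from what this repo's training script actually loads:

```python
# Minimal sketch (not this repo's training script): load Vicuna-7B in fp16 and let
# accelerate place it via device_map="auto", which shards layers across GPUs only
# when a single GPU cannot hold the whole model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-7b-v1.5"  # assumed checkpoint; substitute the one this repo uses

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # halves weight memory versus fp32
    device_map="auto",           # automatic placement / sharding across available GPUs
)
print(model.hf_device_map)       # shows which device each module landed on
```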

@llv22

llv22 commented Feb 24, 2024

@waitzkin thanks a lot for your response ^-^. I will try it.
