
Pretrain and Finetune template versions #189

Open · xin-li-67 opened this issue Aug 9, 2024 · 1 comment

@xin-li-67
Hi,

I noticed that the `--version` arg is set to `v1` in both the pretrain and finetune scripts, which differs from the original LLaVA/LLaVA-1.5 and other LLaVA-style projects (where the pretrain stage typically uses the `plain` template). Could you explain why you chose to do this?

Best,

@Yaxin9Luo

Hi, `v1` is more of a default setting for the finetune stage of an MLLM when you use LLaMA-2, since LMSYS trained a more capable chat-tuned version of LLaMA called Vicuna, and the `v1` conversation template follows Vicuna's chat format. I am not one of the authors, but I hope this still helps.
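
For context, in LLaVA-style codebases the `--version` flag usually selects a conversation template that controls how image-text pairs are rendered into training prompts. Below is a minimal sketch of the idea; the template names and formats are illustrative, modeled on the original LLaVA's `llava/conversation.py`, and may not match this repo's actual code:

```python
# Minimal sketch of LLaVA-style conversation-template selection.
# Template names and formats are illustrative, modeled on the original
# LLaVA; this repo's actual implementation may differ.
conv_templates = {
    # "plain": used by the original LLaVA for the pretrain (feature
    # alignment) stage -- image placeholder plus caption, no chat roles.
    "plain": "{image}\n{caption}",
    # "v1": Vicuna-v1 chat format, used for instruction finetuning.
    "v1": "USER: {image}\n{prompt} ASSISTANT: {answer}",
}

def build_prompt(version: str, **fields: str) -> str:
    """Render one training example with the template chosen by --version."""
    return conv_templates[version].format(**fields)

# Passing --version v1 at both stages means even pretraining captions get
# wrapped in the Vicuna chat format:
print(build_prompt("v1", image="<image>", prompt="Describe the image.",
                   answer="A dog on a beach."))
```

So the practical effect of the choice raised in this issue is the prompt wrapper applied during pretraining, not the model weights themselves.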
