Hi,

I noticed that the `--version` arg in both the pretrain and finetune scripts is set to `v1`, which differs from the original LLaVA, LLaVA-1.5, and other LLaVA-style projects. Could you share why you chose to do this?

Best,
Hi, `v1` is more or less the default setting for the finetune stage of an MLLM when you start from LLaMA-2, since LMSYS trained a more capable chat-tuned version of LLaMA called Vicuna, and `v1` corresponds to its conversation format. I am not one of the authors, but I hope this still helps.
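For context, in LLaVA-style codebases the `--version` flag typically selects which conversation/prompt template the training data is wrapped in. The sketch below is only an illustration of that idea: the names (`Conversation`, `conv_templates`, the exact system prompt) loosely mirror LLaVA's `llava/conversation.py` but are a simplified stand-in, not the project's actual implementation.

```python
# Hedged sketch of how a --version flag can map to a prompt template.
# This is a standalone toy, not the real LLaVA code.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Conversation:
    system: str
    roles: Tuple[str, str]
    messages: List[Tuple[str, str]] = field(default_factory=list)
    sep: str = " "
    sep2: str = "</s>"

    def get_prompt(self) -> str:
        # Vicuna-v1 style: system prompt, then USER/ASSISTANT turns,
        # with sep2 closing each assistant reply.
        seps = [self.sep, self.sep2]
        ret = self.system + self.sep
        for i, (role, msg) in enumerate(self.messages):
            ret += f"{role}: {msg}{seps[i % 2]}" if msg else f"{role}:"
        return ret

conv_templates = {
    # A bare template, as often used for the pretrain (alignment) stage.
    "plain": Conversation(system="", roles=("", ""), sep="\n"),
    # Vicuna-v1 chat format, the common default when finetuning
    # LLaMA-2/Vicuna-based MLLMs.
    "v1": Conversation(
        system=("A chat between a curious user and an artificial intelligence "
                "assistant. The assistant gives helpful, detailed, and polite "
                "answers to the user's questions."),
        roles=("USER", "ASSISTANT"),
    ),
}

# The training script's --version flag would pick one of these templates:
conv = conv_templates["v1"]
conv.messages = [("USER", "<image>\nWhat is in the picture?"),
                 ("ASSISTANT", "A cat.")]
print(conv.get_prompt())
```

So passing `--version v1` in both stages mainly means the data is formatted with the Vicuna-v1 style prompt rather than the plain/v0 formats used by the original LLaVA pretrain scripts; whether that is the authors' exact reasoning would need their confirmation.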