how to run the lora finetuned model? #35
Comments
Now I know it. Look at the LLaVA project; you will find the two-stage weight-loading method there. If anyone still doesn't know, contact me.
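The two-stage idea mentioned above can be sketched in plain Python. A LoRA run typically saves only the adapter deltas plus any fully-trained extra modules (e.g. the multimodal projector), so loading happens in two stages: first the base checkpoint's weights, then an overlay of the fine-tuned weights. This is a minimal sketch only — the key names and values below are hypothetical placeholders, and state dicts are simulated with ordinary dictionaries rather than real tensors:

```python
# Minimal sketch of two-stage weight loading, simulated with plain dicts.
# In the real workflow the same idea is: (1) load the base checkpoint,
# (2) load the LoRA/extra trained weights on top of it.

def load_base_weights():
    # Stage 1: weights from the base model checkpoint (placeholder values).
    return {
        "model.layers.0.attn.weight": "base",
        "model.mm_projector.weight": "base",
    }

def load_finetuned_overlay():
    # Stage 2: extra weights saved during LoRA fine-tuning,
    # e.g. the projector and merged LoRA deltas (placeholder values).
    return {
        "model.mm_projector.weight": "finetuned",
    }

def two_stage_load():
    state = load_base_weights()
    state.update(load_finetuned_overlay())  # overlay wins on key clashes
    return state

if __name__ == "__main__":
    merged = two_stage_load()
    print(merged["model.mm_projector.weight"])   # -> finetuned
    print(merged["model.layers.0.attn.weight"])  # -> base
```

With the real libraries, stage 2 is commonly done by loading the adapter with PEFT's `PeftModel.from_pretrained` on top of the base model and then calling `merge_and_unload()`; the exact loader depends on the repository's own code.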
Thanks, @dongwhfdyer, I already figured it out.
Hi there, I am trying out this model and the demo worked, but when I used the lora.sh script for training it raised `OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory /home/LLaVA/llava-v1.5-13b-lora`. Can you guide me on how to train this model?
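That OSError means `from_pretrained` found none of the weight-file formats it recognizes in the given directory; a LoRA output folder usually contains adapter files (such as `adapter_config.json` and `adapter_model.bin`) rather than a full `pytorch_model.bin`. As a quick sanity check, a sketch like the following reports what a checkpoint folder actually contains — the filenames are the standard Hugging Face ones, but the helper itself and any paths you pass it are hypothetical:

```python
import os

# Weight filenames that Hugging Face from_pretrained looks for,
# plus the adapter files a LoRA run typically saves instead.
FULL_MODEL_FILES = {"pytorch_model.bin", "tf_model.h5",
                    "model.ckpt.index", "flax_model.msgpack",
                    "model.safetensors"}
ADAPTER_FILES = {"adapter_model.bin", "adapter_config.json",
                 "adapter_model.safetensors"}

def classify_checkpoint_dir(path):
    """Return 'full', 'lora-adapter', or 'unknown' for a checkpoint dir."""
    names = set(os.listdir(path))
    if names & FULL_MODEL_FILES:
        return "full"
    if names & ADAPTER_FILES:
        return "lora-adapter"
    return "unknown"
```

If this reports `lora-adapter`, the directory cannot be loaded as a standalone model; it has to be applied on top of the base checkpoint as described in the two-stage loading comment.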
@dongwhfdyer Hi, in finetune_lora.sh the flag --pretrain_mm_mlp_adapter points to path/to/llava-v1.5-mlp2x-336px-pretrain-vicuna-7b-v1.5/mm_projector.bin, and I have an issue: does that mm_projector.bin file use weights from llava-v1.5-7b? I couldn't find mm_projector.bin in Geochat-7B.
Hi @dongwhfdyer, it seems you have already successfully reproduced this project, but I am still confused about the training procedure. It would be super nice to get a response from you. Best, and have a nice day!
I have followed the instructions in finetune_lora.sh and got the trained model. This is my finetune_lora.sh, and here is the saved LoRA fine-tuned model. I don't know how to load this model, and I didn't find it in readme.md. Can anyone help me? Thank you!