With continuous_training enabled on a Bunny VLM, do we still need to specify the vision_tower path?
If we point it to the SigLIP path, will it use those untrained weights instead of the vision tower that ships with the Bunny VLM I downloaded from Hugging Face?
What should I specify?
Yes, you still need to set --vision_tower to the path to huggingface/siglip-so400m-patch14-384.
But the vision tower of the Bunny model will be used, because "continuous_training": true is set in /path/to/merged_model/config.json.
https://github.com/BAAI-DCAI/Bunny?tab=readme-ov-file#continuous--fine-tuning
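The rule described above can be sketched as follows. This is a hypothetical helper, not Bunny's actual loading code: `resolve_vision_weights`, its arguments, and the directory layout are assumptions for illustration; the only fact taken from the thread is that a `"continuous_training": true` flag in the merged model's config.json causes the checkpoint's own vision tower weights to be used rather than the untrained weights at the `--vision_tower` path.

```python
import json
import os
import tempfile


def resolve_vision_weights(model_dir: str, vision_tower_path: str) -> str:
    """Hypothetical illustration of the rule from the answer above:
    --vision_tower supplies the SigLIP path, but if the merged model's
    config.json sets "continuous_training": true, the vision tower
    weights bundled with the Bunny checkpoint are used instead."""
    with open(os.path.join(model_dir, "config.json")) as f:
        cfg = json.load(f)
    if cfg.get("continuous_training", False):
        return model_dir  # weights come from the merged Bunny model
    return vision_tower_path  # weights come from the SigLIP checkpoint


# Usage sketch: a merged model whose config enables continuous training.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config.json"), "w") as f:
        json.dump({"continuous_training": True}, f)
    src = resolve_vision_weights(d, "huggingface/siglip-so400m-patch14-384")
    assert src == d  # Bunny's own vision tower wins over the SigLIP path
```

In other words, the SigLIP path is still required on the command line, but with the flag set it only identifies the tower; the trained weights are taken from the merged checkpoint.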
Does it work fine with this model too? https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-vision-preview, since it was trained with the Bunny method.
I think so.
But also check #130.