In what order should I reproduce the paper? #12
In the official example, the two benchmarks each have their own weights, e.g. VideoGPT-plus/MBZUAI/VideoGPT-plus_Phi3-mini-4k/mvbench.

So besides steps 1–3 and steps 4–5 above, is there any other information or step I missed?

I didn't use VCGBench_FINETUNING and MVBench_FINETUNING. Will there be any problems?
Hi @rixejzvdl649, thank you for your interest in our work and for providing detailed information about your question. The steps you mentioned to reproduce our results (pretraining + finetuning + evaluation) are correct. However, please note that we finetune two models/variants of VideoGPT+: one variant is finetuned for VCGBench and the other for MVBench (the VCGBench_FINETUNING and MVBench_FINETUNING settings you mentioned), so each benchmark is evaluated with its own finetuned weights. I hope this helps. Please let me know if you have any questions.
@mmaaz60 Hello, can stage 1 and stage 2 be run in parallel?
step 1: pretrain_projector_image_encoder.sh
step 2: pretrain_projector_video_encoder.sh
step 3: finetune_dual_encoder.sh
step 4: eval/vcgbench/inference/run_ddp_inference.sh
step 5: eval/vcgbench/gpt_evaluation/vcgbench_evaluate.sh
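The five steps above can be sketched as a single driver script. This is a minimal illustration, not part of the repository: the script paths are taken verbatim from the steps listed, and it prints each stage as a dry run; swapping `echo` for `bash "$s"` would actually execute them in order.

```shell
#!/bin/bash
# Hypothetical driver sketch for the five reproduction stages.
# Assumes the scripts are reachable at these paths from the repo root.
set -e  # stop at the first failing stage

STAGES=(
  pretrain_projector_image_encoder.sh                # step 1: image-encoder projector pretraining
  pretrain_projector_video_encoder.sh                # step 2: video-encoder projector pretraining
  finetune_dual_encoder.sh                           # step 3: dual-encoder finetuning
  eval/vcgbench/inference/run_ddp_inference.sh       # step 4: VCGBench inference
  eval/vcgbench/gpt_evaluation/vcgbench_evaluate.sh  # step 5: GPT-based VCGBench scoring
)

for s in "${STAGES[@]}"; do
  echo "stage: $s"   # dry run; replace with: bash "$s"
done
```

Note that steps 1 and 2 train separate projectors, while step 3 depends on both; steps 4 and 5 only make sense after finetuning.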