Batch inference #93
Hi!
I'm evaluating the model on a relatively large dataset (single question, single answer). I was able to fine-tune the Bunny-1.1-Llama-3-8B-V model using one of the scripts provided. What is the best strategy to implement batch inference?
Sorry, we don't support batch inference currently. You may split the dataset into multiple parts and launch a model on each GPU, as we do when evaluating on VQA, GQA, and SEED-Bench.
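For reference, a minimal sketch of that split-and-launch approach (the `inference.py` script, its flags, and the file names here are illustrative assumptions, not files from this repo):

```python
# Illustrative sketch: shard a JSONL question file into N parts and run one
# single-sample inference process per GPU, each pinned via CUDA_VISIBLE_DEVICES.
import os
import subprocess

NUM_GPUS = 8

with open("questions.jsonl") as f:
    lines = f.readlines()

procs = []
for gpu in range(NUM_GPUS):
    shard = f"questions_shard{gpu}.jsonl"
    with open(shard, "w") as f:
        f.writelines(lines[gpu::NUM_GPUS])  # round-robin split of the dataset
    procs.append(subprocess.Popen(
        ["python", "inference.py",              # hypothetical script name
         "--question-file", shard,
         "--answers-file", f"answers_shard{gpu}.jsonl"],
        env={**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu)},  # pin one GPU
    ))

for p in procs:
    p.wait()
# Afterwards, concatenate answers_shard*.jsonl back into a single answer file.
```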
You may refer to batch_inference.py. However, we failed to set the …
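For context on the padding issue discussed below: batched generation with decoder-only LLMs typically requires left padding, so that every prompt ends at the same position and pad tokens never sit between the prompt and the generated tokens. A minimal sketch using the standard Hugging Face transformers API (the model id and prompts are placeholders, not this repo's code):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("model-id")  # placeholder id
tokenizer.padding_side = "left"                        # pad on the left for generation
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained("model-id").cuda().eval()

prompts = ["What is in the image?", "Describe the scene in detail."]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to("cuda")

# The attention mask tells the model to ignore the pad positions.
out = model.generate(**inputs, max_new_tokens=64)
texts = tokenizer.batch_decode(out[:, inputs.input_ids.shape[1]:],
                               skip_special_tokens=True)
```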
Closing the issue for now, as there's no further discussion. Feel free to reopen it if there are any other questions.
Hi, I observe a significant performance drop with batch inference: MMMU accuracy decreased from 41 to 34 when running inference with batch size 8 on Bunny-1.1 4B. I'm wondering if you have any plans to resolve this issue.
I noticed the issue is largely caused by the padding tokens. I'd like to recommend a workaround: if the authors could provide code to group the input questions by token length during batch inference, the negative impact of the padding tokens could be reduced.
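A minimal sketch of that workaround, assuming a Hugging Face-style tokenizer (the helper name is hypothetical):

```python
def length_grouped_batches(questions, tokenizer, batch_size):
    """Yield batches of questions sorted by token length to minimize padding."""
    lengths = [len(tokenizer(q).input_ids) for q in questions]
    order = sorted(range(len(questions)), key=lengths.__getitem__)
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        # Yield the original indices too, so answers can be
        # restored to the input order afterwards.
        yield idx, [questions[i] for i in idx]
```

Because each batch then contains prompts of similar length, few pad tokens are needed and their influence on the outputs is minimized.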