
Clarification about EVA-CLIP / EVA-02-CLIP #161

Open
matteot11 opened this issue Jul 8, 2024 · 0 comments

Hello, thanks for releasing this great work!

I am following the instructions here for EVA-02 pretraining. They recommend downloading this EVA-CLIP model, which seems to be the first EVA-CLIP version based on its ~2.2 GB size (i.e., EVA-01-CLIP, also available from here).

Would pre-training work seamlessly when directly using an EVA-02-CLIP model (which also seems to be the one used in the EVA-02 paper), or even one of the larger EVA-CLIP-8B or EVA-CLIP-18B models?
If so, how should the --teacher_type and --clip_model parameters be set in the pre-training script for those models?
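For reference, here is a sketch of the kind of launch command I mean. The flag values shown (the teacher type, the model name, and the checkpoint path) are placeholders based on my reading of the pretraining instructions, not values confirmed to work with EVA-02-CLIP or the 8B/18B teachers:

```shell
# Hypothetical pretraining launch; --teacher_type and --clip_model values
# below are assumptions to illustrate the question, not verified settings.
python run_eva02_pretraining.py \
    --teacher_type evaclip \
    --clip_model <EVA-CLIP-model-name> \
    --cache_dir /path/to/downloaded/eva_clip_checkpoint.pt
```

My question is essentially which of these values (if any) need to change when the teacher checkpoint is EVA-02-CLIP, EVA-CLIP-8B, or EVA-CLIP-18B.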

Thank you very much!
