Mat shape mismatch for Multihead fine-tune (#615)
Comments
Hello,
Ok, thanks for your reply, looking forward to your updates.
@MengnanCui Can you test again with the latest main? I should have fixed that.
Great, thank you! I will try it!
Hi @ilyes319, thanks so much for your efforts. (1) I tried the latest main branch, with the same settings as above, and it still outputs this error while fine-tuning.
(2) On the other hand, and for your information, to exclude the effects of the MACE version (the
MACE_model_newrun-2024_debug.log
Thanks again, and I hope this info can help.
Could you send your input script, a small sample of your data, and your model to [email protected] so I can reproduce this myself?
Hi, I hope the email finds you well. The E0s were set inside the script; there is only one element in the datasets, tungsten.
By the way, the E0s for the pretrained model are all set in the input script, but they come from a DFTB calculation.
The E0s need to be calculated with the same method as the data you are fitting.
Yes, that's the case: I have DFTB E0s for pretraining on DFTB data, and DFT E0s for fine-tuning on DFT data.
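The point about matching E0s to the method of the fitted data can be illustrated with a minimal sketch. MACE-style models fit the interaction energy, i.e. the total energy minus the sum of isolated-atom reference energies (E0s); all numbers below are made-up placeholders, not real DFT or DFTB values:

```python
# Sketch: why E0s must come from the same method as the fitted data.
# The model targets E_int = E_total - sum_i E0[Z_i], so E0s from a
# different method shift every target by a constant per-atom offset.

E0_dft = {74: -10.0}   # hypothetical DFT isolated-atom energy for W (Z=74)
E0_dftb = {74: -7.5}   # hypothetical DFTB isolated-atom energy for W


def interaction_energy(e_total, atomic_numbers, e0s):
    """Subtract the isolated-atom reference energies from the total energy."""
    return e_total - sum(e0s[z] for z in atomic_numbers)


atoms = [74, 74]      # a tungsten dimer
e_total_dft = -21.3   # hypothetical DFT total energy for the dimer

# Correct pairing: DFT data with DFT E0s.
e_int_good = interaction_energy(e_total_dft, atoms, E0_dft)   # -1.3

# Wrong pairing: DFT data with DFTB E0s leaves a spurious offset
# (here 2.5 per atom) that the model has to absorb.
e_int_bad = interaction_energy(e_total_dft, atoms, E0_dftb)   # -6.3
```

This is only an illustration of the bookkeeping, not how MACE stores E0s internally.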
@MengnanCui I should have fixed that in the main branch. Could you try it and tell me whether it is indeed fixed?
Describe the bug
Hi, I want to do multihead fine-tuning on a personal pre-trained model (ptbp_model.model, based on version 0.3.7, main branch). After adding these flags:
--foundation_model='../ptbp_model.model'
--multiheads_finetuning=True
--pt_train_file='../../../transferability7k/training.xyz'
--pt_valid_file='../../../transferability7k/validation.xyz'
I got the error messages below. Do you have any ideas about this problem? Do I have to use the latest code to pre-train a model and then fine-tune with the multihead approach?
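For context, the flags above would be passed to the MACE training entry point; a sketch of such an invocation, assuming the `mace_run_train` console script from mace-torch, with paths and the remaining required arguments (`--name`, `--train_file`, etc.) as placeholders:

```shell
# Hypothetical multihead fine-tuning command; only the four --foundation_model /
# --multiheads_finetuning / --pt_* flags are taken from the report above,
# everything else is a placeholder.
mace_run_train \
    --name="ft_model" \
    --foundation_model='../ptbp_model.model' \
    --multiheads_finetuning=True \
    --pt_train_file='../../../transferability7k/training.xyz' \
    --pt_valid_file='../../../transferability7k/validation.xyz' \
    --train_file='training_dft.xyz'
```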
For your information, the pretrained model is based on the dftb_ key, and fine-tuning is then done on the dft_ key. Here is the log file:
MACE_model_run-2747_debug.log