
The .pth file saved by train.py while reproducing GramNet seems incompatible with eval_all.py #19

Open
AnnicePlayer opened this issue Apr 8, 2024 · 0 comments


Hi, and thanks for your incredible work! I encountered an issue while attempting to replicate the training and testing steps of Gram.
I used the following commands:

  • For training: python train.py --name Gram-Net_test1 --dataroot /root/AIGCdata --detect_method Gram --blur_prob 0.1 --blur_sig 0.0,3.0 --jpg_prob 0.1 --jpg_method cv2,pil --jpg_qual 30,100

  • For evaluation: python eval_all.py --model_path ./checkpoints/Gram-Net_test1/model_epoch_best.pth --detect_method Gram --noise_type blur --blur_sig 1.0 --no_resize --no_crop --batch_size 1

However, at the very start of testing I hit this error: "[ERROR] model.load_state_dict() error".
After reviewing eval_all.py, the failure comes from the line model.load_state_dict(state_dict['netC'], strict=True). The pretrained weights file ./weights/Gram.pth provided with the project does contain a 'netC' entry. However, the model_epoch_best.pth auto-saved during GramNet's training phase only contains the keys ['model', 'optimizer', 'total_steps'], so 'netC' is absent.

Is there anything wrong with the commands I used, or is there any modification needed in my opts to resolve this discrepancy?
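In the meantime, I imagine the checkpoint could be re-saved under the key layout eval_all.py expects. A minimal sketch of that workaround, assuming the key names described above ('model' in the training checkpoint, 'netC' expected at evaluation) are the only mismatch:

```python
import torch

def convert_checkpoint(src_path: str, dst_path: str) -> None:
    """Re-save a train.py checkpoint so eval_all.py can load it.

    Assumes the training checkpoint holds the network weights under
    'model' (alongside 'optimizer' and 'total_steps'), while
    eval_all.py reads them from 'netC' via
    model.load_state_dict(state_dict['netC'], strict=True).
    """
    ckpt = torch.load(src_path, map_location="cpu")
    # Move the weights from 'model' to 'netC'; drop optimizer state,
    # which is not needed for evaluation.
    torch.save({"netC": ckpt["model"]}, dst_path)

if __name__ == "__main__":
    # Hypothetical paths matching the commands above.
    convert_checkpoint(
        "./checkpoints/Gram-Net_test1/model_epoch_best.pth",
        "./checkpoints/Gram-Net_test1/model_epoch_best_netC.pth",
    )
```

I have not confirmed this against the repository's saving code, so please correct me if the 'model' entry's parameter names differ from what 'netC' expects (strict=True would then still fail).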
