Questions about evaluation metrics #1

Open
Psoyer opened this issue May 7, 2022 · 1 comment
Psoyer commented May 7, 2022

Thank you for generously open-sourcing this project.

I have now been through the training process and obtained a model trained for 70 epochs.

What I would like to know is how to output (or where to find) the evaluation metrics that measure the performance of my trained model, and whether any ground-truth data is needed for this evaluation.

Looking forward to your reply and help. Many thanks.

longnhatne (Collaborator) commented May 8, 2022

Thanks for your appreciation.
I've just realized this was my mistake, and I have already updated the testing script.

Testing command:
python run.py --config experiments/test_multi_CASIA.yml --gpu 0 --num_workers 4

The output images from your trained model will be exported to test_result_dir, which is specified in your testing config file.
For MAD & SIDE, you need ground truth which is already provided in BFM dataset (you can download it by download_synface.sh).
