
Overfitting on "iteration" parameters? #36

Open
firrice opened this issue Jun 30, 2024 · 0 comments

firrice commented Jun 30, 2024

Hi, the unified framework for all kinds of pair-based losses proposed in the paper is great. However, I found that the best "test recall" is actually determined by val_dataset, as shown in the code below:
[screenshot: evaluation code where the reported best test recall is selected using val_dataset]
According to the figure above, the "val dataset" also plays the role of the "test dataset", which means the "test dataset" is visible during training.
Doesn't this amount to choosing a "best train iteration" parameter on the test set, i.e. a risk of overfitting a training hyperparameter?
(I have found a similar practice in several other papers, and I know some benchmarks lack a separate test split when the dataset is built, e.g. the common protocol "construct query+gallery based on the raw val+test split in DeepFashion".)
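
To make the concern concrete, here is a toy, runnable sketch of the selection bias (all names and numbers are hypothetical, not taken from the repo): when the reported metric is the maximum over all evaluated iterations on the same split used for checkpoint selection, it is systematically optimistic compared to a single held-out evaluation.

```python
import random

random.seed(0)

def evaluate_recall(iteration):
    # Stand-in for computing Recall@1 on the val split at a checkpoint:
    # a stable "true" performance of 0.80 plus per-evaluation noise.
    # (Hypothetical; the actual evaluation code is in the screenshot above.)
    return 0.80 + random.gauss(0, 0.03)

# The pattern in question: the same split drives both checkpoint selection
# and the reported "test recall", so the final number is the max over many
# noisy estimates rather than an unbiased held-out measurement.
best_recall, best_iteration = 0.0, -1
for iteration in range(100):
    recall = evaluate_recall(iteration)
    if recall > best_recall:
        best_recall, best_iteration = recall, iteration

print(f"reported 'best test recall': {best_recall:.4f} (iteration {best_iteration})")
# Re-evaluating the chosen checkpoint on a fresh held-out split regresses
# toward the true 0.80, showing the optimism of max-over-iterations selection.
print(f"fresh held-out estimate:     {evaluate_recall(best_iteration):.4f}")
```

The max over 100 noisy evaluations typically lands a couple of standard deviations above the true value, which is exactly the kind of inflation I am worried about here.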
