
can't get the same augmentation policy(genotype) with the searching code #5

Open
skidsaint opened this issue Aug 20, 2020 · 5 comments

Comments

@skidsaint

Thanks for the great work and code!
I want to reproduce the same augmentation policy (called a genotype in your code) with the provided search code. I followed the description in README.md and searched for an augmentation policy on reduced ImageNet with ResNet-50. However, the policy I get differs from the one given in genotype.py, so I would like to know whether I did something wrong in reproducing the result. Here are two possible causes I can think of:
1. In the search code, the default random seed in train_search_paper.py is 2. Is this the same random seed you used to obtain the final result?
2. During search, the augmentations are inserted after ColorJitter, but in the training code the augmentation policy is inserted after RandomHorizontalFlip and before ColorJitter (line 95 in fast-autoaugment/FastAutoAugment/data.py), so training and search are not consistent.
Do these two factors affect the search? Or are there other details I missed in the search process? I look forward to your reply, thanks.

@latstars
Collaborator

latstars commented Aug 20, 2020

The random seed setting should follow the code in search_relax/train_search.py. Since we use a random seed, different runs can lead to different searched results. If you want a deterministic result, you can try the code in search_relax/train_search.py.

@bchao1

bchao1 commented Sep 2, 2020

Hi, does "due to our incorrect implementation for train_search_paper.py" mean the code is entirely incorrect, or just the random seed part? Is the differentiable augmentation part correct? Thanks!

@latstars
Collaborator

latstars commented Sep 5, 2020

The differentiable augmentation part is correct. The random seed setting has no effect because we call https://github.com/VDIGPKU/DADA/blob/master/search_relax/train_search_paper.py#L61 before setting the random seed. You can move that line of code (https://github.com/VDIGPKU/DADA/blob/master/search_relax/train_search_paper.py#L61) into the main function at train_search_paper.py#109.
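The ordering issue described above can be sketched in a few lines. This is a hypothetical illustration, not the repo's actual code: if any random operation (such as shuffling the candidate sub-policies) runs before `random.seed()` is called, that operation consumes unseeded RNG state and its outcome varies between runs, even though a seed is set later.

```python
import random

# Hypothetical sketch of the bug: seeding AFTER the shuffle is too late,
# so the shuffle order differs between runs.
def search_non_deterministic(sub_policies, seed=2):
    random.shuffle(sub_policies)   # consumes unseeded RNG state
    random.seed(seed)              # seeding here cannot affect the shuffle above
    return sub_policies

# The fix suggested in the thread: seed at the very start of main(),
# before any randomness is consumed.
def search_deterministic(sub_policies, seed=2):
    random.seed(seed)              # seed first
    random.shuffle(sub_policies)   # now reproducible across runs
    return sub_policies

run_a = search_deterministic(list(range(10)), seed=2)
run_b = search_deterministic(list(range(10)), seed=2)
assert run_a == run_b  # identical order on every run with the same seed
```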

@Ji4chenLi

The differentiable augmentation part is correct. The random seed setting has no effect because we call https://github.com/VDIGPKU/DADA/blob/master/search_relax/train_search_paper.py#L61 before setting the random seed. You can move that line of code (https://github.com/VDIGPKU/DADA/blob/master/search_relax/train_search_paper.py#L61) into the main function at train_search_paper.py#109.

Hi,
Can you elaborate on this further? If I understand correctly, with the default value of num_policies=105, the set of sub_policies will remain unchanged, i.e. no randomness is introduced.

In my case, I fail to reproduce the same augmentation policy when searching on reduced CIFAR-10. FYI, I am using torch 1.6.0 instead of torch 1.2.0.

@latstars
Collaborator

Can you elaborate on this further? If I understand correctly, with the default value of num_policies=105, the set of sub_policies will remain unchanged, i.e. no randomness is introduced.

The set of sub-policies is the same; only their order is shuffled. Furthermore, we have only validated the code on PyTorch 1.2.0.
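The distinction above can be sketched as follows. The operation names are illustrative placeholders, not the repo's actual operation list: the *set* of candidate sub-policies is fixed, but an unseeded shuffle changes their order between runs, which can change the searched genotype if the search is at all order-sensitive.

```python
import random

# Illustrative operation names (placeholders, not DADA's actual list).
ops = ["ShearX", "Rotate", "Color"]

# The set of sub-policies (all ordered pairs here) is fixed and identical
# across runs; only the ordering produced by shuffle differs.
sub_policies = [(a, b) for a in ops for b in ops]

run1 = sub_policies.copy()
random.shuffle(run1)
run2 = sub_policies.copy()
random.shuffle(run2)

# Same sub-policies in every run...
assert set(run1) == set(run2) == set(sub_policies)
# ...but run1 and run2 will generally be in different orders, so an
# order-sensitive search can still produce different genotypes.
```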
