Low accuracy while searching #22

Open
leoozy opened this issue Apr 27, 2021 · 6 comments

@leoozy

leoozy commented Apr 27, 2021

set -x

# cifar100

GPU=1
DATASET=cifar100
MODEL=resnet50
EPOCH=20
BATCH=128
LR=0.1
WD=0.0002
AWD=0.0
ALR=0.005
CUTOUT=16
TEMPERATE=0.5

which python
python train_search_paper.py --unrolled --report_freq 1 --num_workers 0 \
    --epoch ${EPOCH} --batch_size ${BATCH} --learning_rate ${LR} \
    --dataset ${DATASET} --model_name ${MODEL} --gpu ${GPU} \
    --arch_weight_decay ${AWD} --arch_learning_rate ${ALR} --weight_decay ${WD} \
    --cutout --cutout_length ${CUTOUT} --temperature ${TEMPERATE}

Hello, I used the resnet50 network to search for the augmentation policy. During the search, I noticed that the training and validation accuracy are very low.

04/27 03:59:04 PM valid 187 2.398671e+00 38.285406 70.545213
04/27 03:59:04 PM valid 188 2.397762e+00 38.289517 70.568783
04/27 03:59:04 PM valid 189 2.397031e+00 38.297697 70.575658
04/27 03:59:04 PM valid 190 2.395328e+00 38.326243 70.590641
04/27 03:59:04 PM valid 191 2.396694e+00 38.309733 70.576986
04/27 03:59:04 PM valid 192 2.396227e+00 38.337921 70.575615
04/27 03:59:05 PM valid 193 2.396536e+00 38.321521 70.574259
04/27 03:59:05 PM valid 194 2.396318e+00 38.325321 70.584936
04/27 03:59:05 PM valid 195 2.395298e+00 38.336000 70.588000
04/27 03:59:05 PM valid_acc 38.336000

Is this OK?

@xuanloc088

I am having the same situation when training with Search_Relax compared to Fast-AA.

@latstars
Collaborator

Since we only train the model for 20 epochs, it is normal to get a low accuracy during the search.

@xuanloc088

Thank you very much for your fast response. I am training it with 200 epochs and trying to apply it in my research. I really appreciate you sharing this work with us. @latstars

@xuanloc088

Can you give me the settings to get the 2.7% error rate reported in the paper? Many thanks.

@latstars
Collaborator

latstars commented Oct 14, 2021

To reproduce the training stage, you can try the script below. For the search, use https://github.com/VDIGPKU/DADA#search. Then you can train the model with the found policy.

cd fast-autoaugment
sh train.sh
GPUS=0
SEED=0
DATASET=cifar10
CONF=confs/wresnet40x2_cifar10_b512_test.yaml
GENOTYPE=CIFAR10
SAVE=weights/$(basename ${CONF} .yaml)${GENOTYPE}${DATASET}_${SEED}/test.pth
CUDA_VISIBLE_DEVICES=${GPUS} python FastAutoAugment/train.py -c ${CONF} --dataset ${DATASET} --genotype ${GENOTYPE} --save ${SAVE} --seed ${SEED}

@xuanloc088

Oh, I get it now. So basically, we first have to run the search_relax model to find the policy, and then use the found augmentation policy to train networks in the fast-autoaugment folder. Thank you very much @latstars
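
For reference, here is a minimal sketch that chains the two scripts posted in this thread into the workflow described above. The flag values are copied from the two posts (note they happen to target different datasets, so in practice the dataset and genotype should match between the two steps), and the remark about registering the found policy under a --genotype name is an assumption about how that flag selects the policy.

# Step 1: search for an augmentation policy (search_relax), using the flags from the first post
python train_search_paper.py --unrolled --report_freq 1 --num_workers 0 \
    --epoch 20 --batch_size 128 --learning_rate 0.1 \
    --dataset cifar100 --model_name resnet50 --gpu 1 \
    --arch_weight_decay 0.0 --arch_learning_rate 0.005 --weight_decay 0.0002 \
    --cutout --cutout_length 16 --temperature 0.5

# Step 2: train with the found policy, using the flags from the maintainer's script;
# this assumes the policy found in step 1 has been registered under the chosen
# --genotype name in the fast-autoaugment code
cd fast-autoaugment
CONF=confs/wresnet40x2_cifar10_b512_test.yaml
SAVE=weights/$(basename ${CONF} .yaml)CIFAR10cifar10_0/test.pth
CUDA_VISIBLE_DEVICES=0 python FastAutoAugment/train.py -c ${CONF} \
    --dataset cifar10 --genotype CIFAR10 --save ${SAVE} --seed 0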
