Finetuning a pretrained ResNet50_v1d in gluoncv #539
Comments
@JWarlock what batch size are you using? If the batch size per GPU is less than 16, it can cause problems because the batch-norm statistics become inaccurate. You also need to adjust your learning rate accordingly.
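The "adjust your learning rate accordingly" advice above is usually the linear scaling rule: scale the learning rate in proportion to the total batch size. A minimal sketch of that rule (the base values below are hypothetical, not taken from this issue):

```python
def scale_learning_rate(base_lr, base_batch_size, new_batch_size):
    """Linear scaling rule: scale the learning rate in proportion
    to the total batch size.

    base_lr was tuned for base_batch_size; e.g. halving the batch
    size suggests halving the learning rate as well.
    """
    return base_lr * new_batch_size / base_batch_size

# Hypothetical example: a schedule tuned for lr=0.1 at batch size 128.
# After halving the batch to 64, the rule suggests lr=0.05.
print(scale_learning_rate(0.1, 128, 64))  # -> 0.05
```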
Hi there!
Environment:
Ubuntu 18.04, MXNet-cu92 1.3.1, GluonCV 0.4 (master)
I'm trying to finetune a GluonCV pretrained ResNet50_v1d classification model, but something strange is going on. When I finetuned it with 256x256 images resized from a dataset of 512x512 images, everything was fine. However, when I tried to finetune it with 384x384 images, the accuracy just wouldn't go up; it kept fluctuating.
At first, I thought it had something to do with the MXNet imread and ResizeAug functions, so I rewrote my own dataset class using cv2.imread and cv2.resize. Now the accuracy is going up, but at a much lower rate than with the 256x256 images: it usually takes about 3 more epochs to get the accuracy up by 0.1% (the total training is 50 epochs).
BTW, below are the resize functions I have tried:
image = cv2.resize(image, (self.size, self.size), interpolation=cv2.INTER_AREA)
I checked that they both use the area-based (resampling using pixel area relation) interpolation strategy for resizing.
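For integer downscaling factors, area-based interpolation reduces to averaging each block of source pixels. A minimal pure-Python sketch of that idea (an illustration of the concept, not the actual OpenCV or MXNet implementation):

```python
def area_downscale(image, factor):
    """Downscale a 2D grayscale image by an integer factor using
    pixel-area averaging (the idea behind area-based interpolation
    for integer ratios).

    image: list of equal-length rows of numbers; factor must divide
    both dimensions evenly.
    """
    h, w = len(image), len(image[0])
    assert h % factor == 0 and w % factor == 0
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            # Mean over the factor x factor source block.
            block = [image[i + di][j + dj]
                     for di in range(factor)
                     for dj in range(factor)]
            row.append(sum(block) / (factor * factor))
        out.append(row)
    return out

# A 4x4 ramp downscaled 2x: each output pixel is the mean of a 2x2 block.
img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
print(area_downscale(img, 2))  # -> [[2.5, 4.5], [10.5, 12.5]]
```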
I wonder if it may have something to do with the batch size, since I halved the batch size when using the larger images.
Isn't a larger image supposed to give better results?
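The batch-size suspicion is plausible: batch norm estimates the mean and variance from each batch, and smaller batches give noisier estimates. A hedged pure-Python illustration on synthetic data (not activations from this model):

```python
import random
import statistics

random.seed(0)
# Synthetic "activations": 1024 samples from one distribution.
data = [random.gauss(0.0, 1.0) for _ in range(1024)]

def batch_mean_spread(data, batch_size):
    """Std-dev of per-batch means: a proxy for how noisy batch norm's
    mean estimate is at a given batch size."""
    means = [statistics.mean(data[i:i + batch_size])
             for i in range(0, len(data), batch_size)]
    return statistics.stdev(means)

# Halving the batch size increases the noise in the per-batch statistics,
# which is why very small per-GPU batches can destabilize training.
print(batch_mean_spread(data, 8), batch_mean_spread(data, 16))
```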
Thank you!