
A question on the FineTuning process of the regular CNN #22

Open
Manikvsin opened this issue Oct 17, 2018 · 2 comments


@Manikvsin

Hi,

I am trying to recreate the results of the fine-tuned CNN experiment reported in the paper, specifically that a pre-trained CNN fine-tuned on the ModelNet40 dataset, using 12 views per shape for both training and testing, achieves 88.6% accuracy.
When you say fine-tuned, did you retrain just the last layer (since the number of classes decreased), or did you retrain the entire classifier portion of the network? And if so, what parameters did you use when retraining?

I have currently tried both VGG16 and AlexNet, tuning only the last layer to account for the change in class count, but I can only reach 78% accuracy with AlexNet and 69% with VGG16. I suspect the problem is my parameters or my understanding of fine-tuning; any help on this matter would be phenomenal!

Thanks,
Manik

@suhangpro
Owner

Yes, we fine-tuned the whole network in the end. We actually use three stages of fine-tuning: last layer only, then all fully connected (fc) layers, then all layers. The number of epochs for each stage is specified in the code as an input parameter.
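The three-stage schedule can be sketched as a sequence of freeze/unfreeze steps. This is only a minimal PyTorch illustration of the idea (the repository itself is MatConvNet-based, and `SmallCNN`, `stage`, and the layer sizes here are hypothetical stand-ins, not the actual network or training code):

```python
import torch.nn as nn

NUM_CLASSES = 40  # ModelNet40 has 40 categories


def set_requires_grad(module, flag):
    """Freeze (flag=False) or unfreeze (flag=True) every parameter in `module`."""
    for p in module.parameters():
        p.requires_grad = flag


class SmallCNN(nn.Module):
    """Toy stand-in for a pre-trained CNN with the conv-features +
    fc-classifier layout of AlexNet/VGG (sizes are illustrative only)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1000)
        )


def stage(model, which):
    """Apply the freezing pattern for one fine-tuning stage and return
    the parameters an optimizer for that stage would train."""
    set_requires_grad(model, False)
    if which == "last":          # stage 1: only the replaced last layer
        set_requires_grad(model.classifier[-1], True)
    elif which == "fc":          # stage 2: all fully connected layers
        set_requires_grad(model.classifier, True)
    elif which == "all":         # stage 3: the entire network
        set_requires_grad(model, True)
    return [p for p in model.parameters() if p.requires_grad]


model = SmallCNN()
# Replace the last layer so its output matches the new number of classes.
model.classifier[-1] = nn.Linear(64, NUM_CLASSES)

for which in ("last", "fc", "all"):
    trainable = stage(model, which)
    # ...run the configured number of epochs for this stage with an
    # optimizer built over `trainable` only...
```

In this scheme each stage widens the trainable set, so earlier layers are only perturbed once the new head has settled; the per-stage epoch counts would be passed in as parameters, as the author describes.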

@Manikvsin
Author

Ahh sweet thanks!
Manik
