I am trying to recreate the results of the fine-tuned CNN experiment that you showed in the paper, specifically that a pre-trained CNN fine-tuned on the ModelNet40 dataset, using 12 views for both training and testing, can achieve 88.6% accuracy.
When you say fine-tuned, did you retrain just the last layer (since the number of classes decreased), or did you retrain the entire classifier portion of the network? And in either case, what parameters did you use when retraining?
I have tried both VGG16 and AlexNet so far, tuning only the last layer because of the change in class count, and I can only reach 78% accuracy with AlexNet and 69% with VGG16. I suspect the problem lies in my parameters or my understanding of fine-tuning, so any help on this matter would be phenomenal!
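For reference, the "retrain just the last layer" setup described above can be sketched in PyTorch. This is a hypothetical stand-in for AlexNet's classifier head (the 9216/4096 dimensions match AlexNet's fc layers, but this is not the authors' code; in practice you would load the pretrained model from torchvision):

```python
import torch.nn as nn

# Toy stand-in for a pretrained AlexNet classifier head (fc6 -> fc7 -> fc8).
# In practice: model = torchvision.models.alexnet(pretrained=True), then
# operate on model.classifier the same way.
classifier = nn.Sequential(
    nn.Linear(9216, 4096), nn.ReLU(),  # fc6
    nn.Linear(4096, 4096), nn.ReLU(),  # fc7
    nn.Linear(4096, 1000),             # fc8: 1000 ImageNet classes
)

# Swap only the final layer for the 40 ModelNet40 classes.
classifier[-1] = nn.Linear(4096, 40)

# Freeze everything except the new head for last-layer-only fine-tuning.
for name, p in classifier.named_parameters():
    p.requires_grad = name.startswith("4.")  # index 4 is the new fc8

trainable = [n for n, p in classifier.named_parameters() if p.requires_grad]
```

Only the parameters with `requires_grad=True` should then be passed to the optimizer.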
Thanks,
Manik
Yes, we fine-tuned the whole network in the end. We actually use three stages of fine-tuning: the last layer only, then all fc layers, then all layers. The number of epochs for each stage is specified in the code as an input parameter.
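The three-stage schedule above can be expressed as a progressively widening set of trainable parameters. A minimal sketch (the parameter names below are illustrative, not taken from the actual code):

```python
# Hypothetical parameter names mimicking a VGG/AlexNet-style layout:
# convolutional "features" plus a fully-connected "classifier",
# where classifier.fc8 is the new 40-way output layer.
PARAMS = [
    "features.conv1", "features.conv2", "features.conv3",
    "classifier.fc6", "classifier.fc7", "classifier.fc8",
]

# Stage -> predicate deciding which parameters are unfrozen.
STAGES = {
    "last_layer": lambda p: p == "classifier.fc8",          # stage 1
    "fc_layers":  lambda p: p.startswith("classifier."),    # stage 2
    "all_layers": lambda p: True,                           # stage 3
}

def trainable(stage):
    """Return the parameter names unfrozen in the given stage."""
    keep = STAGES[stage]
    return [p for p in PARAMS if keep(p)]
```

Each stage would then run for its own number of epochs, re-building the optimizer over `trainable(stage)` before training continues.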