
Benchmark: Layer-wise training of VGG-like network: 0.99% test error #6

Open
anokland opened this issue Jan 21, 2019 · 3 comments

@anokland

Please consider adding this result to your list.

A VGG-like network with 6 convolutional layers and 1 hidden fully connected layer (plus a 10-way output layer).

conv128-conv256-maxpool-conv256-conv512-maxpool-conv512-maxpool-conv512-maxpool-fc1024-fc10

- Standard preprocessing (mean subtraction and std division)
- Cutout data augmentation
- 7.3M parameters
- Layer-wise training with local losses; no global back-propagation
Code and more results: https://github.com/anokland/local-loss
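The 7.3M parameter figure can be sanity-checked with a short script. This is a sketch under assumptions not stated in the comment above: 3×3 conv kernels with biases, a 1-channel 28×28 input (as in Kuzushiji-MNIST), and 2×2 max-pooling with floor division.

```python
# Sanity-check the stated parameter count for the architecture
# conv128-conv256-maxpool-conv256-conv512-maxpool-conv512-maxpool-conv512-maxpool-fc1024-fc10.
# Assumed (not stated above): 3x3 kernels with biases, 1-channel 28x28 input,
# 2x2 max-pooling with floor division.

def conv_params(c_in, c_out, k=3):
    """Kernel weights plus one bias per output channel."""
    return c_out * (c_in * k * k + 1)

def fc_params(n_in, n_out):
    """Weight matrix plus one bias per output unit."""
    return n_out * (n_in + 1)

total = 0
side, c_in = 28, 1  # spatial side length and input channels
for c_out, pool in [(128, False), (256, True), (256, False),
                    (512, True), (512, True), (512, True)]:
    total += conv_params(c_in, c_out)
    c_in = c_out
    if pool:
        side //= 2  # 28 -> 14 -> 7 -> 3 -> 1

total += fc_params(side * side * 512, 1024)  # fc1024
total += fc_params(1024, 10)                 # fc10

print(f"{total:,} parameters (~{total / 1e6:.1f}M)")  # prints "7,321,866 parameters (~7.3M)"
```

Landing on ~7.3M suggests these assumptions match the repo's configuration; batch-norm parameters or different kernel sizes would shift the exact count.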

@tkasasagi (Collaborator)

Thank you so much. We will check it and update the list soon!

@mxbi (Member) commented Dec 24, 2019

Any chance you could rerun the model on the updated dataset and let us know what it achieves? I'll add it to the table then.

@mxbi self-assigned this Dec 24, 2019
@anokland (Author)

Sure! With the updated dataset, the error dropped to 0.89%.
