Initial ImageNet-pretrained weights for the 1.5m, 3m, 5m, and 9m parameter variants can now be downloaded from the assets below.
m2 variants:
Method | #Params | ImageNet (top-1/top-5) | ImageNet-Real-Labels (top-1/top-5) |
---|---|---|---|
simplenetv1_9m_m2 (36 MB) | 9.5m | 74.23/91.748 | 81.22/94.756 |
simplenetv1_5m_m2 (22 MB) | 5.7m | 72.03/90.324 | 79.328/93.714 |
simplenetv1_small_m2_075 (12 MB) | 3m | 68.506/88.15 | 76.283/92.02 |
simplenetv1_small_m2_05 (5 MB) | 1.5m | 61.67/83.488 | 69.31/88.195 |
m1 variants:
Method | #Params | ImageNet (top-1/top-5) | ImageNet-Real-Labels (top-1/top-5) |
---|---|---|---|
simplenetv1_9m_m1 (36 MB) | 9.5m | 73.792/91.486 | 81.196/94.512 |
simplenetv1_5m_m1 (21 MB) | 5.7m | 71.548/89.94 | 79.076/93.36 |
simplenetv1_small_m1_075 (12 MB) | 3m | 67.784/87.718 | 75.448/91.69 |
simplenetv1_small_m1_05 (5 MB) | 1.5m | 61.122/82.988 | 68.58/87.64 |
Note 1
These models were converted from their PyTorch counterparts via ONNX Runtime.
The original models are available in our official PyTorch repository.
Note 2
Since the models were converted from ONNX to Caffe, the mean, std, and crop ratio used are as follows:
DEFAULT_CROP_PCT = 0.875
IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)
Also note that images were not channel-swapped during training, so you do not need to perform any channel swap (keep the RGB order as-is).
You also DO NOT need to rescale the input to [0-255].
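Putting the notes above together, the expected preprocessing can be sketched as follows. This is a minimal NumPy illustration, not the repository's own pipeline: it assumes the image has already been resized so its shorter side equals `round(224 / DEFAULT_CROP_PCT)` (i.e. 256), then center-crops, scales to [0, 1], and normalizes with the constants above, with no channel swap and no rescaling back to [0-255].

```python
import numpy as np

# Constants from the note above
DEFAULT_CROP_PCT = 0.875  # shorter side resized to round(224 / 0.875) = 256 before cropping
IMAGENET_DEFAULT_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_DEFAULT_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img: np.ndarray, crop_size: int = 224) -> np.ndarray:
    """Center-crop an HWC uint8 RGB image and normalize to a NCHW float32 batch.

    Assumes `img` was already resized so its shorter side is
    round(crop_size / DEFAULT_CROP_PCT). Per the notes above, channels stay
    in RGB order and values are normalized to [0, 1], not [0-255].
    """
    h, w, _ = img.shape
    top = (h - crop_size) // 2
    left = (w - crop_size) // 2
    img = img[top:top + crop_size, left:left + crop_size]
    x = img.astype(np.float32) / 255.0                    # uint8 -> [0, 1]
    x = (x - IMAGENET_DEFAULT_MEAN) / IMAGENET_DEFAULT_STD  # per-channel normalize
    return x.transpose(2, 0, 1)[None]                     # HWC -> NCHW batch of 1

# Example on a dummy 256x256 RGB image (stand-in for a real decoded image)
dummy = np.zeros((256, 256, 3), dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape)  # (1, 3, 224, 224)
```

The resulting batch can then be fed directly to any of the converted models above.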