Can it run on ESP-CAM with tensorflow lite? #67
Comments
Hi @manjrekarom, you can try MobileFacenet-S, whose model size will be around 1.4 MB when quantized with TFLite using the following command: tflite_convert --output_file tf-lite/MobileFacenet_uint8_128.tflite But I ran into another issue, related to accuracy.
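As a rough sanity check on the ~1.4 MB figure (my own back-of-envelope, not from the thread): uint8 post-training quantization stores each 32-bit float weight in one byte, so it shrinks the weights roughly 4x. The parameter count below is hypothetical, chosen only to illustrate the arithmetic:

```python
# Back-of-envelope: uint8 quantization stores each float32 weight in 1 byte,
# roughly a 4x reduction (ignoring per-tensor scale/zero-point overhead).
def model_size_mb(num_params: int, bytes_per_weight: int) -> float:
    return num_params * bytes_per_weight / 1e6

NUM_PARAMS = 1_400_000  # hypothetical parameter count, for illustration only
print(model_size_mb(NUM_PARAMS, 4))  # float32: 5.6 MB
print(model_size_mb(NUM_PARAMS, 1))  # uint8:   1.4 MB
```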
Hi @mvhsin! Thanks for your reply. Can I get a link to the pretrained MobileFacenet-S? Sure, I will share if I find something relevant.
Hi @manjrekarom, the author of MobileFacenet mentioned it in the paper.
Oh thanks, I'll take a look. Yes, it is for TF-Lite for microcontrollers. The FlatBuffer obtained from the initial TF model is converted to a C byte array. This C file takes up roughly 7x the space (1.5 MB -> ~8 MB), so we need an even smaller model. The options I feel I am left with are network pruning, knowledge distillation, and training an even smaller custom architecture similar to MobileFaceNet.
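For reference, the C-array conversion described above (what xxd -i and the TFLite Micro tooling produce) can be sketched as below. Note the size blow-up is in the text form of the array, since each byte is rendered as "0x3f, "; once compiled, the array occupies one byte per byte of the original FlatBuffer. The function name and stand-in blob here are illustrative only:

```python
# Sketch of an xxd -i style conversion: bytes -> C unsigned-char array source.
# The text form is ~6x the binary size; the compiled array is 1 byte per byte.
def to_c_array(data: bytes, name: str = "model_tflite") -> str:
    lines = [f"const unsigned char {name}[] = {{"]
    for i in range(0, len(data), 12):  # 12 bytes per source line, xxd-style
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append("  " + chunk + ",")
    lines.append("};")
    lines.append(f"const unsigned int {name}_len = {len(data)};")
    return "\n".join(lines)

blob = bytes(range(256)) * 64          # stand-in for a 16 KiB .tflite FlatBuffer
src = to_c_array(blob)
print(round(len(src) / len(blob), 1))  # text representation is ~6x the binary
```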
Same here! Thanks for the info on TF-Lite for microcontrollers.
This may be a bit off-topic, not sure though. I am trying to do face recognition on an ESP-CAM with 4 MB of flash.
At the moment, the weights file for this model is 8 MB, so I am not able to fit it on the device's flash, which is only 4 MB.
I am thinking about pruning/distilling the model to reduce its size, but I have not done that before.
If possible, can you share your ideas on how that can be done and whether it will work? My last option is to build a custom network and train it myself. Other ideas are welcome.
Thanks!
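A minimal sketch of the magnitude-pruning idea raised in this thread (a framework-agnostic toy, not the TensorFlow Model Optimization API): zero out the smallest-magnitude fraction of weights, then rely on sparse storage or compression to actually realize the size win. Function name and sparsity value are illustrative assumptions:

```python
# Toy magnitude pruning: zero the smallest-magnitude `sparsity` fraction of
# weights. Real pipelines prune iteratively and fine-tune to recover accuracy;
# ties at the threshold may zero slightly more than the requested fraction.
def prune_by_magnitude(weights, sparsity=0.5):
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else 0.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
print(prune_by_magnitude(w, 0.5))  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```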