Feature Request: Add Patch-Based Inference Support (Inspired by MCUNetV2)
Problem Statement
TensorFlow Lite Micro (TFLM) currently lacks support for patch-based inference, as introduced in MCUNetV2. This technique processes the input image in smaller spatial patches sequentially rather than as a whole, which lowers peak memory usage and thus enables inference on higher-resolution images on resource-constrained devices such as microcontrollers.
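For illustration only (this is not TFLM code, and the helper names are hypothetical), the following NumPy sketch shows the core idea: a reference convolution over the full input, and an equivalent patch-based version that processes small tiles, each extended by a halo of overlapping rows and columns, so that at any moment only one tile plus its halo needs to be resident rather than the full feature map.

```python
import numpy as np

def conv2d_valid(x, k):
    """Reference 2D convolution ("valid" padding) over the whole input."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_patched(x, k, patch=8):
    """Same result computed tile by tile: each output tile only needs an
    input slice of size (patch + kh - 1) x (patch + kw - 1) at a time."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i0 in range(0, oh, patch):
        for j0 in range(0, ow, patch):
            i1, j1 = min(i0 + patch, oh), min(j0 + patch, ow)
            # Slice the input with a halo of kh-1 / kw-1 extra rows/cols
            # so the per-patch convolution is exact at the tile borders.
            xin = x[i0:i1 + kh - 1, j0:j1 + kw - 1]
            out[i0:i1, j0:j1] = conv2d_valid(xin, k)
    return out
```

The trade-off mirrors the one described in the MCUNetV2 paper: the halo regions are recomputed for neighboring patches, so patch-based execution trades some extra compute for a much smaller peak activation buffer.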