MegCC
Highlight
Support Int8 quantized convolution implemented with I8MM, a new feature of the Armv8.6 platform, with roughly 1.7x the performance of the DOT version; see the intrinsic sketch after this list.
Support inference for the new CLIP model, with performance on par with MegDNN.
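To make the I8MM highlight concrete, below is a minimal sketch (not MegCC's actual kernel; the helper name and panel packing are illustrative) of the `vmmlaq_s32` intrinsic such a kernel is built on. Each SMMLA instruction performs a 2x8 by 8x2 int8 matrix multiply-accumulate, twice the multiply-accumulates of an SDOT, which is roughly where the speedup over the DOT kernels comes from.

```c
// Compile with: -march=armv8.6-a+i8mm
#include <arm_neon.h>
#include <stdint.h>

// C (2x2, int32) += A (2xK, int8, row-major) * B^T (2xK, int8, row-major).
// K must be a multiple of 8; im2col panels would be packed this way.
static void gemm_tile_2x2_i8mm(const int8_t *a, const int8_t *b, int k,
                               int32_t c[4]) {
    int32x4_t acc = vld1q_s32(c);  // [c00, c01, c10, c11]
    for (int i = 0; i < k; i += 8) {
        // Each register holds two 8-element rows of a packed panel.
        int8x16_t va = vcombine_s8(vld1_s8(a + i), vld1_s8(a + k + i));
        int8x16_t vb = vcombine_s8(vld1_s8(b + i), vld1_s8(b + k + i));
        // SMMLA: one instruction accumulates a full 2x2 int32 tile.
        acc = vmmlaq_s32(acc, va, vb);
    }
    vst1q_s32(c, acc);
}
```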
Bug Fixes
compiler-kernel
Add NEAREST mode for the resize operator.
Correct the applicability conditions of the int8 Winograd F23 kernel so that hybrid conv no longer selects it by mistake.
Fix the applicability conditions of all ConvBias kernels to restrict bias support to channel-broadcast bias only (e.g., a bias of shape (1, C, 1, 1) for NCHW layouts).
Support sqrt, sin, and cos elemwise kernels.
Add a float32 computation path for int8 resize to fix the discrepancy between the Arm implementation's results and the naive implementation's results, as sketched below.
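The idea behind that resize fix, as a minimal sketch (assumed helpers, not MegCC's code): dequantize to float32, interpolate in float32 so the Arm kernel rounds the same way as the naive reference, then saturate back to int8.

```c
#include <math.h>
#include <stdint.h>

static int8_t saturate_i8(float v) {
    v = roundf(v);
    if (v > 127.f) return 127;
    if (v < -128.f) return -128;
    return (int8_t)v;
}

// 1-D linear resize of a quantized row; `scale` is the int8 tensor's
// quantization scale. Dequantize -> interpolate in float32 -> requantize.
static void resize_linear_i8(const int8_t *src, int sw, int8_t *dst, int dw,
                             float scale) {
    for (int dx = 0; dx < dw; ++dx) {
        float fx = (dx + 0.5f) * sw / dw - 0.5f;  // source coordinate
        int x0 = (int)floorf(fx);
        float w1 = fx - x0;
        int x1 = x0 + 1;
        if (x0 < 0) x0 = 0;                       // clamp at the borders
        if (x1 > sw - 1) x1 = sw - 1;
        float v = (1.f - w1) * (src[x0] * scale) + w1 * (src[x1] * scale);
        dst[dx] = saturate_i8(v / scale);         // requantize
    }
}
```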
compiler-common
Fix the bug where MegCC did not support the ConvBias operator without bias.
runtime
Fix the bug where the runtime could attempt to free a dynamic tensor whose buffer pointer is NULL; see the sketch below.
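In spirit, the fix is a guard like the following minimal sketch (the struct and function names are illustrative, not the runtime's actual ones). A dynamic tensor's buffer stays NULL until its shape is known, and a custom allocator may not tolerate being handed a NULL pointer the way `free(NULL)` does.

```c
#include <stdlib.h>

typedef struct {
    void *ptr;    // NULL until the dynamic shape is known and memory is allocated
    size_t size;
} Tensor;

static void tensor_free(Tensor *t) {
    if (t->ptr != NULL) {  // the missing check: skip tensors never allocated
        free(t->ptr);
        t->ptr = NULL;
        t->size = 0;
    }
}
```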
New Features
basic components
Update MegBrain to the version containing the padding-channel-pass bug fix, resolving compilation failures for some MegCC models.
compiler-kernel
Add the BatchedMatmul operator for the Arm64 platform.
The IndexingMultiAxisVec operator supports multi-dimensional indexing mode; see the indexing sketch after this list.
Add support for the nchw int8 conv1x1 kernel.
Add aarch32 int8 dot nchw conv5x5 kernel.
Add a batched matmul operator for the Float16 data type.
Add a naive float32 mod elemwise op.
Support Int8 quantized convolution implemented with I8MM, a new feature of the Armv8.6 platform, with roughly 1.7x the performance of the DOT version.
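For the IndexingMultiAxisVec item above, the following minimal sketch (illustrative, not the generated kernel) shows one plausible reading of multi-dimensional indexing mode: numpy-style advanced indexing where index vectors on several axes are walked together, so `dst[i][j] = src[idx0[i]][idx1[i]][j]` for a 3-D source indexed on axes 0 and 1.

```c
#include <stddef.h>

// Gather from a d0 x d1 x d2 float tensor using paired index vectors
// idx0/idx1 of length n on the first two axes; dst is n x d2.
static void indexing_multi_axis_vec(const float *src, size_t d0, size_t d1,
                                    size_t d2, const int *idx0,
                                    const int *idx1, size_t n, float *dst) {
    (void)d0;  // bounds on the indexed axes are assumed already checked
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < d2; ++j)
            dst[i * d2 + j] = src[((size_t)idx0[i] * d1 + idx1[i]) * d2 + j];
}
```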