
mapnn


1. Introduction

mapnn is designed to combine the strengths of different high-performance neural network inference frameworks, such as ncnn, MNN, and Tengine. With this framework, one can easily test the performance of each kernel from the different inference frameworks. The framework also provides a simple method to map operators (defined by the training framework) to kernels (defined by the inference framework), so that users can choose the best kernel for an inference task through the map.

2. Supports most commonly used CNN networks

  • caffe: AlexNet, GoogLeNet, InceptionV3/V4, MobileNetV1/V2, ResNet, VGG16
  • onnx : AlexNet, GoogLeNet, InceptionV1/V2, MobileNetV2, ResNet, VGG16, ShuffleNet 1.1, YOLOv2

3. Build status matrix

| System         | armv7        | armv8        | x86          | amd64        |
|----------------|--------------|--------------|--------------|--------------|
| Ubuntu (GCC)   |              |              |              | Build Status |
| Ubuntu (Clang) |              |              |              | Build Status |
| Linux          | Build Status | Build Status |              |              |
| Windows (MSVC) |              |              | Build Status | Build Status |
| Android        | Build Status | Build Status | Build Status | Build Status |
| macOS (Clang)  |              |              |              | Build Status |

4. How to build
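This section does not spell out the build commands; a typical out-of-source CMake build might look like the following sketch. The repository URL, generator, and the assumption that the project uses CMake are inferred from the project page, not confirmed by this README:

```shell
# Clone the sources (URL assumed from the GitHub project page)
git clone https://github.com/mapnn/mapnn.git
cd mapnn

# Configure an out-of-source release build; pass a CMake toolchain
# file here instead when cross-compiling for armv7/armv8 or Android.
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..

# Compile and install with your toolchain
make -j4
make install
```

For Android or ARM targets, replace the configure step with one that points `CMAKE_TOOLCHAIN_FILE` at your NDK or cross-compiler toolchain file.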

5. How to test kernel

Following step 4, compile and install the library with your toolchain, copy the layer_test binary to your target platform if needed, then run layer_test to produce perf.txt.

./layer_test conv -h
# Usage: layer_test conv -k 3 -s 3 -c 1 -g 2
# Option:
#       -k kernel INT
#       -s stride INT
#       -c cycle  INT
#       -g gap    INT
#       -o output INT
#       -h this help
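For example, to benchmark a 3x3 stride-1 convolution kernel, an invocation using the flags from the help text above might look like this (the parameter values are illustrative, not recommendations):

```shell
# Benchmark conv3x3s1: 3x3 kernel, stride 1, 100 cycles, 10 ms gap
./layer_test conv -k 3 -s 1 -c 100 -g 10
```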

The following figures show benchmarks for some kernels (upper plot) and the corresponding accuracy (lower plot).

  • Raspberry Pi 3B+ (right for conv3x3s1, left for conv3x3s2)

  • SDM855 arm32 (right for conv3x3s1, left for conv3x3s2)

  • SDM855 arm64 (right for conv3x3s1, left for conv3x3s2)

  • Intel 10700K x86-64 (right for conv3x3s1, left for conv3x3s2)

Note: this benchmark measures individual kernels; it is not an end-to-end comparison of the different CNN frameworks.

6. How to use the API

int ret;
mapnn::Net* net = new mapnn::Net();             // create the network object
ret = net->load("model_path.onnx");             // load an ONNX model
//ret = net->load("model_path.proto", "model_path.caffemodel"); // or a Caffe model
ret = net->prepare(3, 224, 224);                // prepare the net for a 3x224x224 input
ret = net->inference(float_data, 3, 224, 224);  // run inference on raw float data
Tensor& output = net->getTensor("output_name"); // fetch the named output tensor
delete net;                                     // release the network

7. How to contribute

  • Add new operators from other training frameworks.
  • Add new kernels from other inference frameworks.
  • Improve the framework and submit good pull requests.
  • Report and fix issues on the GitHub issues page.
  • Star or fork this project.
