From 4e5298ed61706d3bd2b858b0836661aa619c02da Mon Sep 17 00:00:00 2001
From: jianfeifeng
Date: Sat, 7 Dec 2019 23:07:34 +0800
Subject: [PATCH] Update README.md

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index f74b7d9c..6e556601 100644
--- a/README.md
+++ b/README.md
@@ -2,13 +2,13 @@

 ## 1 Introduction

-Bolt is a light-weight inference framework for mobile devices. Bolt, as a universal deployment platform for all kinds of neural networks, aims to minimize the inference runtime as much as possible. Higher speed, better security and more efficient memory management are the advantages that Bolt strives to provide.
+Bolt is a light-weight inference toolbox for mobile devices. Bolt, as a universal deployment tool for all kinds of neural networks, aims to minimize the inference runtime as much as possible. Higher speed, better security and more efficient memory management are the advantages that Bolt strives to provide.

 ## 2 Features

-### 2.1 Supported Frameworks
+### 2.1 Supported Deep Learning Platform

 caffe, onnx, tflite, pytorch (via onnx), tensorflow (via onnx).

@@ -215,7 +215,7 @@ As mentioned above, you can get the classification results in 3 steps.

 ### 5.2 speed

-We are working to make Bolt the fastest inference framework. Here we list the single-thread execution time measured on Kirin 810.
+Here we list the single-thread execution time measured on Kirin 810.

 | model\speed | fp16 on A55 | fp16 on A76 | int8 on A55 | int8 on A76 |
 | ------------ | ----------- | ----------- | ------------- | ------------ |