Commit 743bfcb (parent ed4ed61), committed by nullptr on Jul 10, 2023. 168 changed files with 4,557 additions and 4,670 deletions.
# Seeed Studio EdgeLab

<div align="center">
  <img width="100%" src="docs/public/images/EdgeLab-Logo.png">
  <h3> <a href="https://seeed-studio.github.io/EdgeLab/"> Documentation </a> | <a href="https://github.com/Seeed-Studio/edgelab-model-zoo"> Model Zoo </a> </h3>
</div>

English | [简体中文](README_zh-CN.md)

## Introduction

Seeed Studio EdgeLab is an open-source project focused on embedded AI. We have optimized excellent algorithms from [OpenMMLab](https://github.com/open-mmlab) for real-world scenarios and made the implementation more user-friendly, achieving faster and more accurate inference on embedded devices.

## What's included

Currently, we support algorithms in the following directions:

<details>
<summary>Anomaly Detection (coming soon)</summary>
In the real world, anomalous data is often difficult to identify, and labeling it is expensive. Anomaly detection algorithms instead collect normal data at low cost and treat anything outside that distribution as anomalous.
</details>

<details>
<summary>Computer Vision</summary>
We provide a number of computer vision algorithms, such as object detection, image classification, image segmentation, and pose estimation. Off the shelf, these algorithms are typically too heavy for low-cost hardware; EdgeLab optimizes them to achieve good speed and accuracy on low-end devices.
</details>

<details>
<summary>Scenario Specific</summary>
EdgeLab provides customized solutions for specific production scenarios, such as reading analog instruments and traditional digital meters, and audio classification.
</details>

<br>

We will keep adding more algorithms in the future. Stay tuned!

## Features

<details>
<summary>User-friendly</summary>
EdgeLab provides a user-friendly platform that lets users easily train on collected data and understand algorithm performance through visualizations generated during training.
</details>

<details>
<summary>Models with low computing power and high performance</summary>
EdgeLab focuses on edge-side AI algorithm research. Its models can be deployed on microcontrollers such as the <a href="https://www.espressif.com/en/products/socs/esp32">ESP32</a>, on some <a href="https://arduino.cc">Arduino</a> development boards, and even on embedded SBCs such as the <a href="https://www.raspberrypi.org">Raspberry Pi</a>.
</details>

<details>
<summary>Supports multiple formats for model export</summary>
<a href="https://www.tensorflow.org/lite">TensorFlow Lite</a> is mainly used on microcontrollers, while <a href="https://onnx.ai">ONNX</a> is mainly used on devices running Embedded Linux. Special formats such as <a href="https://developer.nvidia.com/tensorrt">TensorRT</a> and <a href="https://docs.openvino.ai">OpenVINO</a> are already well supported by OpenMMLab. EdgeLab adds TFLite model export for microcontrollers; the exported model can be converted to UF2 format and dragged-and-dropped onto the device for deployment.
</details>
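Drag-and-drop deployment works because UF2 is a deliberately simple container: the firmware image is split into 512-byte blocks, each carrying magic numbers, a target flash address, and up to 476 bytes of payload. The sketch below packs bytes into UF2 blocks following the published UF2 layout; it is an illustration of the format, not EdgeLab's actual converter, and the base address and family ID are placeholders, not values for any real device.

```python
import struct

# Magic numbers from the public UF2 specification.
UF2_MAGIC_START0 = 0x0A324655  # first 4 bytes decode to "UF2\n"
UF2_MAGIC_START1 = 0x9E5D5157
UF2_MAGIC_END = 0x0AB16F30
FLAG_FAMILY_ID = 0x00002000    # the last header word holds a family ID

def to_uf2(data: bytes, base_addr: int = 0x2000, family_id: int = 0x0) -> bytes:
    """Pack a firmware image into 512-byte UF2 blocks with 256-byte payloads."""
    payload = 256
    num_blocks = (len(data) + payload - 1) // payload
    out = bytearray()
    for block_no in range(num_blocks):
        chunk = data[block_no * payload:(block_no + 1) * payload]
        chunk = chunk.ljust(476, b'\x00')  # the data area is always 476 bytes
        header = struct.pack(
            '<IIIIIIII',
            UF2_MAGIC_START0, UF2_MAGIC_START1,
            FLAG_FAMILY_ID,
            base_addr + block_no * payload,  # where this chunk lands in flash
            payload, block_no, num_blocks, family_id)
        out += header + chunk + struct.pack('<I', UF2_MAGIC_END)
    return bytes(out)

uf2 = to_uf2(b'\xde\xad\xbe\xef' * 100)  # 400 bytes -> 2 blocks
print(len(uf2))  # 1024: every UF2 block is exactly 512 bytes
```

Because each block records its own target address, a bootloader can flash the blocks in any order, which is what makes copying the file onto a USB mass-storage device sufficient for deployment.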

## Acknowledgement

EdgeLab references the following projects:

- [OpenMMLab](https://openmmlab.com/)
- [ONNX](https://github.com/onnx/onnx)
- [NCNN](https://github.com/Tencent/ncnn)

## License

This project is released under the [MIT license](LICENSES).
```python
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'

# file_client_args = dict(
#     backend='petrel',
#     path_mapping=dict({
#         './data/': 's3://openmmlab/datasets/detection/',
#         'data/': 's3://openmmlab/datasets/detection/'
#     }))
file_client_args = dict(backend='disk')

train_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=file_client_args),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackDetInputs'),
]
test_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=file_client_args),
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
    # If you don't have ground-truth annotations, delete this step
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='PackDetInputs', meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor')),
]
train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    batch_sampler=dict(type='AspectRatioBatchSampler'),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='annotations/instances_train2017.json',
        data_prefix=dict(img='train2017/'),
        filter_cfg=dict(filter_empty_gt=True, min_size=32),
        pipeline=train_pipeline,
    ),
)
val_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='annotations/instances_val2017.json',
        data_prefix=dict(img='val2017/'),
        test_mode=True,
        pipeline=test_pipeline,
    ),
)
test_dataloader = val_dataloader

val_evaluator = dict(
    type='CocoMetric', ann_file=data_root + 'annotations/instances_val2017.json', metric='bbox', format_only=False
)
test_evaluator = val_evaluator

# Inference on the test dataset and
# format the output results for submission:
# test_dataloader = dict(
#     batch_size=1,
#     num_workers=2,
#     persistent_workers=True,
#     drop_last=False,
#     sampler=dict(type='DefaultSampler', shuffle=False),
#     dataset=dict(
#         type=dataset_type,
#         data_root=data_root,
#         ann_file=data_root + 'annotations/image_info_test-dev2017.json',
#         data_prefix=dict(img='test2017/'),
#         test_mode=True,
#         pipeline=test_pipeline))
# test_evaluator = dict(
#     type='CocoMetric',
#     metric='bbox',
#     format_only=True,
#     ann_file=data_root + 'annotations/image_info_test-dev2017.json',
#     outfile_prefix='./work_dirs/coco_detection/test')
```
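A config file like the one above is an ordinary Python module: the framework evaluates it and collects the module-level variables into a config object. The sketch below mimics that loading idea with a trimmed-down config string and a plain `exec`; it illustrates the mechanism only and is not MMEngine's actual implementation.

```python
# A trimmed stand-in for a dataset config file (illustrative only).
config_text = """
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
]
val_dataloader = dict(batch_size=1, dataset=dict(type=dataset_type, data_root=data_root))
test_dataloader = val_dataloader
"""

namespace = {}
exec(config_text, namespace)  # evaluate the config as Python code
# Keep only the module-level settings; drop builtins injected by exec.
cfg = {k: v for k, v in namespace.items() if not k.startswith('__')}

print(cfg['train_pipeline'][2]['scale'])  # (1333, 800)
# `test_dataloader = val_dataloader` is plain aliasing, so both names
# refer to the same dict object:
print(cfg['test_dataloader'] is cfg['val_dataloader'])  # True
```

This is why assignments like `test_dataloader = val_dataloader` in the config cost nothing: they reuse the same dict rather than copying it, and later stages read whichever keys they need from the collected namespace.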