chore: format style, fix spell
nullptr committed Jul 10, 2023
1 parent ed4ed61 commit 743bfcb
Showing 168 changed files with 4,557 additions and 4,670 deletions.
9 changes: 5 additions & 4 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -11,12 +11,13 @@ A clear and concise description of what the bug is.
 
 **Environment**
 Environment you use when bug appears:
 
 1. Python version
 2. PyTorch Version
-3. MMCV Vesion
-4. Code you run
-5. The detailed error
+3. MMCV Version
+4. EdgeLab Version
+5. Code you run
+6. The detailed error
 
 **Additional context**
-Add any other context about the problem here.
\ No newline at end of file
+Add any other context about the problem here.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/feature_request.md
@@ -18,4 +18,4 @@ If there is an official code release or third-party implementations, please also
 
 **Additional context**
 Add any other context or screenshots about the feature request here.
-If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.
\ No newline at end of file
+If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.
133 changes: 64 additions & 69 deletions README.md
@@ -1,69 +1,64 @@
-# Seeed Studio EdgeLab
-
-<div align="center">
-<img width="100%" src="docs/public/images/EdgeLab-Logo.png">
-<h3> <a href="https://seeed-studio.github.io/EdgeLab/"> Documentation </a> | <a href="https://github.com/Seeed-Studio/edgelab-model-zoo"> Model Zoo </a> </h3>
-</div>
-
-English | [简体中文](README_zh-CN.md)
-
-
-## Introduction
-
-Seeed Studio EdgeLab is an open-source project focused on embedded AI. We have optimized excellent algorithms from [OpenMMLab](https://github.com/open-mmlab) for real-world scenarios and made implemention more user-friendly, achieving faster and more accurate inference on embedded devices.
-
-
-## What's included
-
-Currently we support the following directions of algorithms:
-
-<details>
-<summary>Anomaly Detection (coming soon)</summary>
-In the real world, anomalous data is often difficult to identify, and even if it can be identified, it requires a very high cost. The anomaly detection algorithm collects normal data in a low-cost way, and anything outside normal data is considered anomalous.
-</details>
-
-<details>
-<summary>Computer Vision</summary>
-Here we provide a number of computer vision algorithms such as object detection, image classification, image segmentation and pose estimation. However, these algorithms cannot run on low-cost hardware. EdgeLab optimizes these computer vision algorithms to achieve good running speed and accuracy in low-end devices.
-</details>
-
-<details>
-<summary>Scenario Specific</summary>
-EdgeLab provides customized scenarios for specific production environments, such as identification of analog instruments, traditional digital meters, and audio classification.
-</details>
-
-<br>
-
-We will keep adding more algorithms in the future. Stay tuned!
-
-
-## Features
-
-<details>
-<summary>User-friendly</summary>
-EdgeLab provides a user-friendly platform that allows users to easily perform training on collected data, and to better understand the performance of algorithms through visualizations generated during the training process.
-</details>
-
-<details>
-<summary>Models with low computing power and high performance</summary>
-EdgeLab focuses on end-side AI algorithm research, and the algorithm models can be deployed on microprocessors, similar to <a href="https://www.espressif.com/en/products/socs/esp32">ESP32</a>, some <a href="https://arduino.cc">Arduino</a> development boards, and even in embedded SBCs such as <a href="https://www.raspberrypi.org">Raspberry Pi</a>.
-</details>
-
-<details>
-<summary>Supports mutiple formats for model export</summary>
-<a href="https://www.tensorflow.org/lite">TensorFlow Lite</a> is mainly used in microcontrollers, while <a href="https://onnx.ai">ONNX</a> is mainly used in devices with Embedded Linux. There are some special formats such as <a href="https://developer.nvidia.com/tensorrt">TensorRT</a>, <a href="https://docs.openvino.ai">OpenVINO</a> which are already well supported by OpenMMlab. EdgeLab has added TFLite model export for microcontrollers, which can be directly converted to uf2 format and drag-and-drop into the device for deployment.
-</details>
-
-
-## Acknowledgement
-
-EdgeLab referenced the following projects:
-
-- [OpenMMLab](https://openmmlab.com/)
-- [ONNX](https://github.com/onnx/onnx)
-- [NCNN](https://github.com/Tencent/ncnn)
-
-
-## License
-
-This project is released under the [MIT license](LICENSES).
+# Seeed Studio EdgeLab
+
+<div align="center">
+<img width="100%" src="docs/public/images/EdgeLab-Logo.png">
+<h3> <a href="https://seeed-studio.github.io/EdgeLab/"> Documentation </a> | <a href="https://github.com/Seeed-Studio/edgelab-model-zoo"> Model Zoo </a> </h3>
+</div>
+
+English | [简体中文](README_zh-CN.md)
+
+## Introduction
+
+Seeed Studio EdgeLab is an open-source project focused on embedded AI. We have optimized excellent algorithms from [OpenMMLab](https://github.com/open-mmlab) for real-world scenarios and made implementation more user-friendly, achieving faster and more accurate inference on embedded devices.
+
+## What's included
+
+Currently we support the following directions of algorithms:
+
+<details>
+<summary>Anomaly Detection (coming soon)</summary>
+In the real world, anomalous data is often difficult to identify, and even if it can be identified, it requires a very high cost. The anomaly detection algorithm collects normal data in a low-cost way, and anything outside normal data is considered anomalous.
+</details>
+
+<details>
+<summary>Computer Vision</summary>
+Here we provide a number of computer vision algorithms such as object detection, image classification, image segmentation and pose estimation. However, these algorithms cannot run on low-cost hardware. EdgeLab optimizes these computer vision algorithms to achieve good running speed and accuracy in low-end devices.
+</details>
+
+<details>
+<summary>Scenario Specific</summary>
+EdgeLab provides customized scenarios for specific production environments, such as identification of analog instruments, traditional digital meters, and audio classification.
+</details>
+
+<br>
+
+We will keep adding more algorithms in the future. Stay tuned!
+
+## Features
+
+<details>
+<summary>User-friendly</summary>
+EdgeLab provides a user-friendly platform that allows users to easily perform training on collected data, and to better understand the performance of algorithms through visualizations generated during the training process.
+</details>
+
+<details>
+<summary>Models with low computing power and high performance</summary>
+EdgeLab focuses on end-side AI algorithm research, and the algorithm models can be deployed on microprocessors, similar to <a href="https://www.espressif.com/en/products/socs/esp32">ESP32</a>, some <a href="https://arduino.cc">Arduino</a> development boards, and even in embedded SBCs such as <a href="https://www.raspberrypi.org">Raspberry Pi</a>.
+</details>

+<details>
+<summary>Supports multiple formats for model export</summary>
+<a href="https://www.tensorflow.org/lite">TensorFlow Lite</a> is mainly used in microcontrollers, while <a href="https://onnx.ai">ONNX</a> is mainly used in devices with Embedded Linux. There are some special formats such as <a href="https://developer.nvidia.com/tensorrt">TensorRT</a>, <a href="https://docs.openvino.ai">OpenVINO</a> which are already well supported by OpenMMlab. EdgeLab has added TFLite model export for microcontrollers, which can be directly converted to uf2 format and drag-and-drop into the device for deployment.
+</details>
+
+## Acknowledgement
+
+EdgeLab referenced the following projects:
+
+- [OpenMMLab](https://openmmlab.com/)
+- [ONNX](https://github.com/onnx/onnx)
+- [NCNN](https://github.com/Tencent/ncnn)
+
+## License
+
+This project is released under the [MIT license](LICENSES).
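
Note on the anomaly-detection blurb in the README above: it describes a simple idea, learn only what normal data looks like, then treat anything outside that envelope as anomalous. Below is a minimal, hypothetical sketch of that idea as a z-score baseline; it is not EdgeLab's implementation (the feature is still unreleased), and the function names and the 3-sigma threshold are assumptions made for illustration.

```python
# Minimal sketch: model "normal" data only, flag anything far from it.
# NOT EdgeLab code; a generic z-score baseline for illustration.
import numpy as np

def fit_normal_profile(normal_samples: np.ndarray):
    # Learn only from normal data: cheap to collect, no anomaly labels needed.
    return normal_samples.mean(axis=0), normal_samples.std(axis=0) + 1e-8

def is_anomalous(sample: np.ndarray, mean, std, threshold: float = 3.0) -> bool:
    # Anything outside the learned "normal" envelope counts as an anomaly.
    z = np.abs((sample - mean) / std)
    return bool(z.max() > threshold)

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 4))  # stand-in for sensor readings
mean, std = fit_normal_profile(normal)
print(is_anomalous(np.array([0.1, -0.3, 0.2, 0.5]), mean, std))  # False
print(is_anomalous(np.array([9.0, 0.0, 0.0, 0.0]), mean, std))   # True
```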
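The model-export feature above mentions TFLite export that can then be converted to uf2 for drag-and-drop deployment. As a rough sketch of the TFLite half of that pipeline, here is the standard public TensorFlow converter API; this is not EdgeLab's own export tooling, and the 'model/' SavedModel path is a placeholder.

```python
# Sketch of a TFLite export step, using TensorFlow's public converter API.
# EdgeLab's export tooling may differ; 'model/' is a placeholder path.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('model/')
# Default optimizations (including quantization) are typical for MCU targets.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
# The resulting .tflite can then be wrapped as UF2 (e.g. with Microsoft's
# uf2conv.py) and copied onto the device, as the README notes.
```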
5 changes: 0 additions & 5 deletions README_zh-CN.md
@@ -7,12 +7,10 @@
 
 [English](README.md) | 简体中文
 
-
 ## 简介
 
 Seeed Studio EdgeLab 是一个专注于嵌入式人工智能的开源项目。我们对 [OpenMMLab](https://github.com/open-mmlab) 的优秀算法进行了优化,使其适用于现实世界的场景,并使实施更加人性化,在嵌入式设备上实现更快和更准确的推理。
 
-
 ## 包括什么
 
 目前我们支持以下的算法方向:
@@ -36,7 +34,6 @@ EdgeLab 为特定的生产环境提供定制场景的解决方案,例如模拟
 
 我们将在未来不断增加更多的算法。敬请关注!
 
-
 ## 特点介绍
 
 <details>
@@ -54,7 +51,6 @@ EdgeLab 专注于终端人工智能算法研究,算法模型可以部署在微
 <a href="https://www.tensorflow.org/lite">TensorFlow Lite</a> 主要用于微控制器,而 <a href="https://onnx.ai">ONNX</a> 主要用于嵌入式 Linux 的设备。有一些特殊的格式,如 <a href="https://developer.nvidia.com/tensorrt">TensorRT</a>、<a href="https://docs.openvino.ai">OpenVINO</a>,已经被 OpenMMlab 很好地支持.
 </details>
 
-
 ## 致谢
 
 EdgeLab 参考了以下项目:
@@ -63,7 +59,6 @@ EdgeLab 参考了以下项目:
 - [ONNX](https://github.com/onnx/onnx)
 - [NCNN](https://github.com/Tencent/ncnn)
 
-
 ## 开源许可证
 
 该项目采用 [MIT 开源许可证](LICENSES)
169 changes: 84 additions & 85 deletions configs/_base_/datasets/coco_detection.py
@@ -1,85 +1,84 @@
-# dataset settings
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-
-# file_client_args = dict(
-#     backend='petrel',
-#     path_mapping=dict({
-#         './data/': 's3://openmmlab/datasets/detection/',
-#         'data/': 's3://openmmlab/datasets/detection/'
-#     }))
-file_client_args = dict(backend='disk')
-
-train_pipeline = [
-    dict(type='LoadImageFromFile', file_client_args=file_client_args),
-    dict(type='LoadAnnotations', with_bbox=True),
-    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
-    dict(type='RandomFlip', prob=0.5),
-    dict(type='PackDetInputs')
-]
-test_pipeline = [
-    dict(type='LoadImageFromFile', file_client_args=file_client_args),
-    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
-    # If you don't have a gt annotation, delete the pipeline
-    dict(type='LoadAnnotations', with_bbox=True),
-    dict(
-        type='PackDetInputs',
-        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
-                   'scale_factor'))
-]
-train_dataloader = dict(
-    batch_size=2,
-    num_workers=2,
-    persistent_workers=True,
-    sampler=dict(type='DefaultSampler', shuffle=True),
-    batch_sampler=dict(type='AspectRatioBatchSampler'),
-    dataset=dict(
-        type=dataset_type,
-        data_root=data_root,
-        ann_file='annotations/instances_train2017.json',
-        data_prefix=dict(img='train2017/'),
-        filter_cfg=dict(filter_empty_gt=True, min_size=32),
-        pipeline=train_pipeline))
-val_dataloader = dict(
-    batch_size=1,
-    num_workers=2,
-    persistent_workers=True,
-    drop_last=False,
-    sampler=dict(type='DefaultSampler', shuffle=False),
-    dataset=dict(
-        type=dataset_type,
-        data_root=data_root,
-        ann_file='annotations/instances_val2017.json',
-        data_prefix=dict(img='val2017/'),
-        test_mode=True,
-        pipeline=test_pipeline))
-test_dataloader = val_dataloader
-
-val_evaluator = dict(
-    type='CocoMetric',
-    ann_file=data_root + 'annotations/instances_val2017.json',
-    metric='bbox',
-    format_only=False)
-test_evaluator = val_evaluator
-
-# inference on test dataset and
-# format the output results for submission.
-# test_dataloader = dict(
-#     batch_size=1,
-#     num_workers=2,
-#     persistent_workers=True,
-#     drop_last=False,
-#     sampler=dict(type='DefaultSampler', shuffle=False),
-#     dataset=dict(
-#         type=dataset_type,
-#         data_root=data_root,
-#         ann_file=data_root + 'annotations/image_info_test-dev2017.json',
-#         data_prefix=dict(img='test2017/'),
-#         test_mode=True,
-#         pipeline=test_pipeline))
-# test_evaluator = dict(
-#     type='CocoMetric',
-#     metric='bbox',
-#     format_only=True,
-#     ann_file=data_root + 'annotations/image_info_test-dev2017.json',
-#     outfile_prefix='./work_dirs/coco_detection/test')
+# dataset settings
+dataset_type = 'CocoDataset'
+data_root = 'data/coco/'
+
+# file_client_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+file_client_args = dict(backend='disk')
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', file_client_args=file_client_args),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs'),
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', file_client_args=file_client_args),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    # If you don't have a gt annotation, delete the pipeline
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(type='PackDetInputs', meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor')),
+]
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instances_train2017.json',
+        data_prefix=dict(img='train2017/'),
+        filter_cfg=dict(filter_empty_gt=True, min_size=32),
+        pipeline=train_pipeline,
+    ),
+)
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instances_val2017.json',
+        data_prefix=dict(img='val2017/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+    ),
+)
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='CocoMetric', ann_file=data_root + 'annotations/instances_val2017.json', metric='bbox', format_only=False
+)
+test_evaluator = val_evaluator
+
+# inference on test dataset and
+# format the output results for submission.
+# test_dataloader = dict(
+#     batch_size=1,
+#     num_workers=2,
+#     persistent_workers=True,
+#     drop_last=False,
+#     sampler=dict(type='DefaultSampler', shuffle=False),
+#     dataset=dict(
+#         type=dataset_type,
+#         data_root=data_root,
+#         ann_file=data_root + 'annotations/image_info_test-dev2017.json',
+#         data_prefix=dict(img='test2017/'),
+#         test_mode=True,
+#         pipeline=test_pipeline))
+# test_evaluator = dict(
+#     type='CocoMetric',
+#     metric='bbox',
+#     format_only=True,
+#     ann_file=data_root + 'annotations/image_info_test-dev2017.json',
+#     outfile_prefix='./work_dirs/coco_detection/test')
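
For context on how a base file like this is consumed: in the OpenMMLab config system, a downstream model config inherits configs/_base_/datasets/coco_detection.py through its _base_ list and overrides individual keys. A hypothetical minimal example follows; the sibling base files and the override value are illustrative, not part of this commit.

```python
# Sketch of a downstream config consuming this dataset base. The sibling
# base files listed here are the usual MMDetection layout, assumed for
# illustration.
_base_ = [
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py',
]

# Any key re-declared here overrides the inherited value, e.g. a smaller
# batch size for a memory-constrained machine:
train_dataloader = dict(batch_size=1)
```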