AI at the edge

A curated list of hardware, software, frameworks and other resources for Artificial Intelligence at the edge. Inspired by awesome-dataviz.

Contents

  • Hardware
  • Software
  • Other interesting resources
  • Contributing
  • License

Hardware

  • OpenMV - A camera that runs MicroPython on ARM Cortex-M4/M7 MCUs, with great support for computer vision algorithms. Now with support for TensorFlow Lite too.
  • JeVois - A TensorFlow-enabled camera module.
  • Edge TPU - Google’s purpose-built ASIC designed to run inference at the edge.
  • Movidius - Intel's family of SoCs designed specifically for low power on-device computer vision and neural network applications.
    • UP AI Edge - Line of products based on Intel Movidius VPUs (including Myriad 2 and Myriad X) and Intel Cyclone FPGAs.
    • DepthAI - An embedded platform for combining depth and AI, built around the Myriad X.
  • NVIDIA Jetson - High-performance embedded system-on-module to unlock deep learning, computer vision, GPU computing, and graphics in network-constrained environments.
    • Jetson TX1
    • Jetson TX2
    • Jetson Nano
  • Artificial Intelligence Radio - Transceiver (AIR-T) - High-performance SDR seamlessly integrated with state-of-the-art deep learning hardware.
  • Kendryte K210 - Dual-core RISC-V chip with convolutional neural network acceleration using 64 KLUs (Kendryte Logic Units).
    • Sipeed M1 - Based on the Kendryte K210, the module adds WiFi connectivity and external flash memory.
    • M5StickV - AIoT (AI + IoT) camera powered by the Kendryte K210.
    • UNIT-V - AI camera powered by the Kendryte K210 (a lower-end version of the M5StickV).
  • Kendryte K510 - Tri-core RISC-V processor with AI accelerators.
  • GreenWaves GAP8 - RISC-V-based chip with hardware acceleration for convolutional operations.
  • Ultra96 - Embedded development platform featuring a Xilinx UltraScale+ MPSoC FPGA.
  • Apollo3 Blue - SparkFun Edge Development Board powered by a Cortex-M4 from Ambiq Micro.
  • Google Coral - Platform of hardware components and software tools for local AI products based on the Google Edge TPU coprocessor (see the PyCoral inference sketch after this list).
    • Dev boards
    • USB Accelerators
    • PCIe / M.2 modules
  • Gyrfalcon Technology Lightspeeur - Family of chips optimized for edge computing.
  • ARM microNPU - Processors designed to accelerate ML inference (the first being the Ethos-U55).
  • Espressif ESP32-S3 - SoC similar to the well-known ESP32 with support for AI acceleration (among many other interesting differences).
  • Maxim MAX78000 - SoC based on a Cortex-M4 that includes a CNN accelerator.
  • Beagleboard BeagleV - Open Source RISC-V-based Linux board that includes a Neural Network Engine.
  • Syntiant TinyML - Development kit based on the Syntiant NDP101 Neural Decision Processor and a SAMD21 Cortex-M0+.
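
Most of these boards are programmed through a small vendor runtime. As an illustration, here is a minimal sketch of image classification on a Google Coral Edge TPU using the PyCoral library; the model and image file names are placeholder assumptions, and exact function signatures may differ between PyCoral versions.

```python
# Hypothetical Edge TPU classification sketch using PyCoral (file names are placeholders).
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

# Load a model that has been compiled for the Edge TPU (e.g. with edgetpu_compiler).
interpreter = make_interpreter("model_edgetpu.tflite")
interpreter.allocate_tensors()

# Resize the input image to the model's expected input size and set it as the input tensor.
image = Image.open("input.jpg").resize(common.input_size(interpreter))
common.set_input(interpreter, image)

# Run inference and print the most confident class (id, score).
interpreter.invoke()
for c in classify.get_classes(interpreter, top_k=1):
    print(c.id, c.score)
```

Operations the Edge TPU compiler cannot map to the accelerator fall back to the CPU, which is why models are usually fully int8-quantized before compilation.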

Software

  • TensorFlow Lite - Lightweight solution for mobile and embedded devices that enables on-device machine learning inference with low latency and a small binary size (a conversion and inference sketch follows this list).
  • TensorFlow Lite for Microcontrollers - Port of TF Lite for microcontrollers and other devices with only kilobytes of memory. Born from a merge with uTensor.
  • Embedded Learning Library (ELL) - Microsoft's library to deploy intelligent machine-learned models onto resource-constrained platforms and small single-board computers.
  • uTensor - AI inference library based on mbed (an RTOS for ARM chipsets) and TensorFlow.
  • CMSIS NN - A collection of efficient neural network kernels developed to maximize the performance and minimize the memory footprint of neural networks on Cortex-M processor cores.
  • ARM Compute Library - Set of optimized functions for image processing, computer vision, and machine learning.
  • Qualcomm Neural Processing SDK for AI - Libraries that let developers run NN models on Snapdragon mobile platforms, taking advantage of the CPU, GPU and/or DSP.
  • ST X-CUBE-AI - Toolkit for generating NNs optimized for STM32 MCUs.
  • ST NanoEdgeAIStudio - Tool that generates a model to be loaded into an STM32 MCU.
  • Neural Network on Microcontroller (NNoM) - Higher-level, layer-based neural network library specifically for microcontrollers. Supports CMSIS-NN.
  • nncase - Open deep learning compiler stack for the Kendryte K210 AI accelerator.
  • deepC - Deep learning compiler and inference framework targeted at embedded platforms.
  • uTVM (microTVM) - Extension of the Apache TVM compiler stack for optimizing and running tensor programs on bare-metal microcontrollers.
  • Edge Impulse - Interactive platform to generate models that can run on microcontrollers. They are also quite active on social networks, covering recent news on Edge AI/TinyML.
  • Qeexo AutoML - Interactive platform to generate AI models targeted at microcontrollers.
  • mlpack - Fast, header-only C++ machine learning library focused on lightweight deployment. It offers a wide variety of machine learning algorithms and enables on-device learning on MPUs.
  • AIfES - Platform-independent and standalone AI software framework optimized for embedded systems.
  • onnx2c - ONNX to C compiler targeting "Tiny ML".
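
To make the TensorFlow Lite entries above concrete, here is a minimal sketch of post-training int8 quantization and inference with the TF Lite interpreter, assuming a trained tf.keras model; the toy model, input shape and random representative data below are placeholders only. The resulting .tflite flatbuffer is the same format that TensorFlow Lite for Microcontrollers loads from a C array in firmware.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for a trained model (placeholder; substitute your own trained model).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Representative samples drive full-integer (int8) post-training quantization.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # the flatbuffer can also be embedded in MCU firmware as a C array

# Run the converted model with the lightweight TF Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```

Full-integer quantization matters on microcontrollers because flash and RAM budgets are measured in kilobytes and most optimized kernels (e.g. CMSIS-NN) target 8-bit integer operations.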

Other interesting resources

Contributing

  • Please check for duplicates first.
  • Keep descriptions short, simple and unbiased.
  • Please make an individual commit for each suggestion.
  • Add a new category if needed.

Thanks for your suggestions!

License

CC0

To the extent possible under law, Xabi Crespo has waived all copyright and related or neighboring rights to this work.
