diff --git a/.gitbook/assets/ezgif_com_gif_maker_3_2924bbe7c1.gif b/.gitbook/assets/indoor-co2/ezgif_2924bbe7c1.gif
similarity index 100%
rename from .gitbook/assets/ezgif_com_gif_maker_3_2924bbe7c1.gif
rename to .gitbook/assets/indoor-co2/ezgif_2924bbe7c1.gif
diff --git a/.gitbook/assets/testing.jpg b/.gitbook/assets/indoor-co2/testing.jpg
similarity index 100%
rename from .gitbook/assets/testing.jpg
rename to .gitbook/assets/indoor-co2/testing.jpg
diff --git a/.gitbook/assets/training.jpg b/.gitbook/assets/indoor-co2/training.jpg
similarity index 100%
rename from .gitbook/assets/training.jpg
rename to .gitbook/assets/indoor-co2/training.jpg
diff --git a/esd-protection-using-computer-vision.md b/esd-protection-using-computer-vision.md
index 00096b52..c4cb5525 100644
--- a/esd-protection-using-computer-vision.md
+++ b/esd-protection-using-computer-vision.md
@@ -70,6 +70,6 @@ This project was a lot of fun. I have worked with the Jetson Nano for a little o
 
 The memory limitations were unfortunate and took up a bit of my time, and I hope that gets resolved in the future. I even had to do my development off the Nano because I didn't have enough space to install VS Code (my IDE of choice). Not a show-stopper by any means, this is still a very capable piece of hardware.
 
-I think this project could be further expanded in the future. You could add a Twilio interface to text a supervisor if an ESD risk is present. Different types of objects could be classified (maybe ensuring an ESD smock is worn?) and what I'm more excited about is [FOMO-AD](https://mobile.twitter.com/janjongboom/status/1575530285814362112?cxt=HHwWgMCtrdGctN0rAAAA) (the AD stands for Anomaly Detection), announced at [Edge Impulse Imagine 2022](https://edgeimpulse.com/imagine). It won't be ready until 2023 but I think there is a lot of opportunity to use that technology for recognizing what is right and what is not right in an image. I'm exciting to test its capabilities!
+I think this project could be further expanded in the future. You could add a Twilio interface to text a supervisor if an ESD risk is present. Different types of objects could be classified (maybe ensuring an ESD smock is worn?), and what I'm more excited about is [FOMO-AD](https://mobile.twitter.com/janjongboom/status/1575530285814362112) (the AD stands for Anomaly Detection), announced at [Edge Impulse Imagine 2022](https://edgeimpulse.com/imagine). It won't be ready until 2023, but I think there is a lot of opportunity to use that technology for recognizing what is right and what is not right in an image. I'm excited to test its capabilities!
 
 Thank you again to Seeed Studio for providing the hardware for me to work on. I hope to do more projects with this equipment in the future. Happy coding!
diff --git a/food-irradiation-detection.md b/food-irradiation-detection.md
index ce46777e..22dd2adb 100644
--- a/food-irradiation-detection.md
+++ b/food-irradiation-detection.md
@@ -1193,7 +1193,7 @@ After generating training and testing samples successfully, I uploaded them to m
 
 ![image](.gitbook/assets/food-irradiation/edge_set_2.png)
 
-![image](.gitbook/assets/food-irradiation/edge_set_3.PNG)
+![image](.gitbook/assets/food-irradiation/edge_set_3.png)
 
 :hash: Then, choose the data category (training or testing) and select *Infer from filename* under *Label* to deduce labels from file names automatically.
 
diff --git a/gas-detection-thingy-91.md b/gas-detection-thingy-91.md
index 486f57c2..8796c4a4 100644
--- a/gas-detection-thingy-91.md
+++ b/gas-detection-thingy-91.md
@@ -106,7 +106,7 @@ If you are going to be using a Linux computer for this application, make sure to
 sudo apt install screen
 ```
 
-Afterwards, download the official [Edge Impulse Nordic Thingy:91 firmware](https://cdn.edgeimpulse.com/firmware/nordic-thingy91.zip) and extract it.
+Afterwards, download the official [Edge Impulse Nordic Thingy:91 firmware](https://cdn.edgeimpulse.com/firmware/thingy91.zip) and extract it.
 
 Next up, make sure the board is turned off and connect it to your computer. Put the board in MCUboot mode by pressing the multi-function button placed in the middle of the device and with the button pressed, turn the board on.
 
diff --git a/indoor-co2-level-estimation-using-tinyml.md b/indoor-co2-level-estimation-using-tinyml.md
index d55a8b99..33222ba8 100644
--- a/indoor-co2-level-estimation-using-tinyml.md
+++ b/indoor-co2-level-estimation-using-tinyml.md
@@ -10,7 +10,7 @@ Swapnil Verma
 
 Public Project Link: [https://studio.edgeimpulse.com/public/93652/latest](https://studio.edgeimpulse.com/public/93652/latest)
 
-![Indoor CO2](.gitbook/assets/indoor-co2.jpg)
+![Indoor CO2](.gitbook/assets/indoor-co2/indoor-co2.jpg)
 
 ### Problem Overview
 
@@ -43,15 +43,15 @@ In this project, the dataset I am using is a subset of the PIROPO database \[3].
 
 The dataset contains multiple sequences recorded in the two indoor rooms using a perspective camera.
 
-![Indoor Environment 1](.gitbook/assets/indoor-1.jpg)
+![Indoor Environment 1](.gitbook/assets/indoor-co2/indoor-1.jpg)
 
-![Indoor Environment 2](.gitbook/assets/indoor-2.jpg)
+![Indoor Environment 2](.gitbook/assets/indoor-co2/indoor-2.jpg)
 
 The original PIROPO database contains perspective as well as omnidirectional camera images.
 
 I imported the subset of the PIROPO database to the Edge Impulse via the [data acquisition](https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-uploader#upload-data-from-the-studio) tab. This tab has a cool feature called [labelling queue](https://www.edgeimpulse.com/blog/3-ways-to-do-ai-assisted-labeling-for-object-detection), which uses YOLO to label an object in the image automatically for you.
 
-![Automatically label data using the labelling queue feature](.gitbook/assets/ezgif\_com\_gif\_maker\_3\_2924bbe7c1.gif)
+![Automatically label data using the labelling queue feature](.gitbook/assets/indoor-co2/ezgif_2924bbe7c1.gif)
 
 I used this feature to label _people_ in the PIROPO images. I then divided the data into _training_ and _test_ sets using the _train/test split_ feature. While training, the Edge Impulse automatically divides the training dataset into _training_ and _validation_ datasets.
 
@@ -59,9 +59,9 @@ I used this feature to label _people_ in the PIROPO images. I then divided the d
 
 Training and testing are done using above mentioned PIROPO dataset. I used the [FOMO](https://www.edgeimpulse.com/blog/announcing-fomo-faster-objects-more-objects) architecture by the Edge Impulse to train this model. To prepare a model using FOMO, please follow this [link](https://docs.edgeimpulse.com/docs/tutorials/object-detection/detect-objects-using-fomo).
 
-![Training statistics](.gitbook/assets/training.jpg)
+![Training statistics](.gitbook/assets/indoor-co2/training.jpg)
 
-![Model testing results](.gitbook/assets/testing.jpg)
+![Model testing results](.gitbook/assets/indoor-co2/testing.jpg)
 
 The training F1 score of my model is 91.6%, and the testing accuracy is 86.42%. For live testing, I deployed the model by building openMV firmware and flashed that firmware using the OpenMV IDE. A video of live testing performed on Arduino Portenta H7 is attached in the Demo section below.
 
@@ -75,7 +75,7 @@ This section contains a step-by-step guide to downloading and running the softwa
 
 ### How does it work?
 
-![System overview](.gitbook/assets/how-it-works.jpg)
+![System overview](.gitbook/assets/indoor-co2/how-it-works.jpg)
 
 This system is quite simple. The Vision shield (or any camera) captures a 240x240 image of the environment and passes it to the FOMO model prepared using Edge Impulse. This model then identifies the people in the image and passes the number of people to the CO2 level estimation function every minute. The function then estimates the amount of CO2 using the below formula.
 
diff --git a/ml-knob-eye.md b/ml-knob-eye.md
index 39746e4f..92053672 100644
--- a/ml-knob-eye.md
+++ b/ml-knob-eye.md
@@ -71,7 +71,7 @@ To start the training, a good number of pictures with variations of the knob in
 
 You can download a sample data acquisition script, and the recording script from:
 
-[https://github.com/ronibandini/MlKnobReading](https://github.com/ronibandini/MlKnobReading)
+[https://github.com/ronibandini/MLAnalogKnobReading](https://github.com/ronibandini/MLAnalogKnobReading)
 
 Place the camera in the 3d printed arm, around 10cm over the knob, with good lighting. Place the knob in the "Minimum" (low) position. Then on the Raspberry Pi, run:
 
diff --git a/nvidia-omniverse-synthetic-data.md b/nvidia-omniverse-synthetic-data.md
index 5681e695..0f098960 100644
--- a/nvidia-omniverse-synthetic-data.md
+++ b/nvidia-omniverse-synthetic-data.md
@@ -195,7 +195,7 @@ def dome_lights(num=1):
 rep.randomizer.register(dome_lights)
 ```
 
-For more information about using lights with Replicator, you can check out the [NVIDIA documentation](https://docs.omniverse.nvidia.com/app_code/prod_materials-and-rendering/lighting.html).
+For more information about using lights with Replicator, you can check out the [NVIDIA documentation](https://docs.omniverse.nvidia.com/materials-and-rendering/latest/lighting.html).
 
 ### Fruits
 
@@ -245,7 +245,7 @@ camera2 = rep.create.camera(
 render_product2 = rep.create.render_product(camera2, (512, 512))
 ```
 
-For more information about using cameras with Replicator, you can check out the [NVIDIA documentation](https://docs.omniverse.nvidia.com/app_isaacsim/prod_materials-and-rendering/cameras.html).
+For more information about using cameras with Replicator, you can check out the [NVIDIA documentation](https://docs.omniverse.nvidia.com/materials-and-rendering/latest/cameras.html).
 
 ### Basic Writer
 
diff --git a/occupancy-sensing-with-silabs.md b/occupancy-sensing-with-silabs.md
index 2dc1367e..79a3a6a5 100644
--- a/occupancy-sensing-with-silabs.md
+++ b/occupancy-sensing-with-silabs.md
@@ -52,7 +52,7 @@ A very important mention concerning privacy is that we will use the microphones
 
 - [Simplicity Commander](https://community.silabs.com/s/article/simplicity-commander?language=en_US) - a utility that provides command line and GUI access to the debug features of EFM32 devices. It enables us to flash the firmware on the device.
 - The [Edge Impulse CLI](https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-installation) - A suite of tools that will enable you to control the xG24 Kit without being connected to the internet and ultimately, collect raw data and trigger in-system inferences
- - The [base firmware image provided by Edge Impulse](https://cdn.edgeimpulse.com/firmware/silabs-xg24-devkit.bin) - enables you to connect your SiLabs kit to your project and do data acquisition straight from the online platform.
+ - The [base firmware image provided by Edge Impulse](https://cdn.edgeimpulse.com/firmware/silabs-xg24.zip) - enables you to connect your SiLabs kit to your project and do data acquisition straight from the online platform.
 
 ## Hardware Setup
 
diff --git a/renesas-ra6m5-getting-started.md b/renesas-ra6m5-getting-started.md
index 40fd825b..62a535d4 100644
--- a/renesas-ra6m5-getting-started.md
+++ b/renesas-ra6m5-getting-started.md
@@ -84,7 +84,7 @@ To begin, you'll need to create an Edge Impulse account and a project in the Edg
 The next step is connecting our Renesas CK-RA6M5 board to the Edge Impulse Studio, so we can ingest sensor data for the machine learning model. Please follow the below steps to do so:
 
 - Connect the Renesas CK-RA6M5 board to the computer by following the steps mentioned in the _Quick Start_ section.
-- Open a terminal or command prompt and type `edge-impulse-daemon`. The [Edge Impulse daemon](https://docs.edgeimpulse.com/docs/Edge Impulse-cli/cli-daemon) will start and prompt for user credentials.
+- Open a terminal or command prompt and type `edge-impulse-daemon`. The [Edge Impulse daemon](https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-daemon) will start and prompt for user credentials.
 - After providing user credentials, it will prompt you to select an Edge Impulse project. Please navigate and select the project created in the previous steps, using the arrow keys.
 
 ![Daemon](.gitbook/assets/renesas-ra6m5-getting-started/daemon.jpg)
diff --git a/renesas-rzv2l-pose-detection.md b/renesas-rzv2l-pose-detection.md
index f4863328..bb99a174 100644
--- a/renesas-rzv2l-pose-detection.md
+++ b/renesas-rzv2l-pose-detection.md
@@ -177,7 +177,7 @@ The 2 stage pipeline runs sequentially and the more objects detected the more cl
 
 While this pipeline can be deployed to any Linux board that supports EIM, it can be used with DRP-AI on the Renesas RZ/V2L Eval kit or RZ/Board leveraging the highly performant and low power DRP-AI by selecting these options in Edge Impulse Studio as shown earlier. By deploying to the RZ/V2L you will achieve the lowest power consumption vs framerate against any of the other supported platforms. YOLO Object Detection also ensures you get the level of performance needed for demanding applications.
 
-The application consists of two files [app.py](http://app.py) which contains the main 2 stage pipeline and web server and [eim.py](http://eim.py) which is a custom Python SDK for using EIM’s in your own application
+The application consists of two files: `app.py`, which contains the main 2 stage pipeline and web server, and `eim.py`, which is a custom Python SDK for using EIMs in your own application.
 
 To configure the application various configuration options are available in the Application Configuration Options section near the top of the application:
 
diff --git a/renesas-rzv2l-product-quality-inspection.md b/renesas-rzv2l-product-quality-inspection.md
index 68f306e2..98ba0e72 100644
--- a/renesas-rzv2l-product-quality-inspection.md
+++ b/renesas-rzv2l-product-quality-inspection.md
@@ -110,7 +110,7 @@ ssh root@smarc-rzv2l
 
 Note: if the `smarc-rzv2l` hostname is not identified on your network, you can use the board's local IP address instead.
 
-![RZ/V2L with camera](.gitbook/assets/renesas-rzv2l-product-quality-inspection/img10-RZ_V2L-with-camera.JPG)
+![RZ/V2L with camera](.gitbook/assets/renesas-rzv2l-product-quality-inspection/img10-RZ-V2L-with-camera.JPG)
 
 To run the model locally on the RZ/V2L we can run the command `edge-impulse-linux-runner` which lets us log in to our Edge Impulse account and select a project. The model will be downloaded and inference will start automatically.
 
diff --git a/ros2-part2-microros.md b/ros2-part2-microros.md
index 278343bd..321f11c6 100644
--- a/ros2-part2-microros.md
+++ b/ros2-part2-microros.md
@@ -87,7 +87,7 @@ To add it to your MicroROS environment, navigate to the MicroROS Arduino library
 ~/Arduino/libraries/micro_ros_arduino-2.0.5-humble/extras/library_generation/extra_packages
 ```
 
-Paste the directory there, **return to the main** `micro_ros_arduino-2.0.5-humble` **directory,** and use the docker commands from [this part](micro_ros_arduino-2.0.5-humble) of the MicroROS Arduino readme:
+Paste the directory there, **return to the main** `micro_ros_arduino-2.0.5-humble` **directory,** and use the docker commands from [this part](https://github.com/micro-ROS/micro_ros_arduino) of the MicroROS Arduino readme:
 
 ```
 docker pull microros/micro_ros_static_library_builder:humble