From 70a1f80eac14056ac95f7e38ba8c3d352b4ae8f3 Mon Sep 17 00:00:00 2001
From: Matt
Date: Sat, 20 Jan 2024 21:24:28 -0500
Subject: [PATCH] add rknn 3d admonishment (#327)

---
 source/docs/apriltag-pipelines/multitag.rst            | 2 +-
 source/docs/objectDetection/about-object-detection.rst | 6 +++++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/source/docs/apriltag-pipelines/multitag.rst b/source/docs/apriltag-pipelines/multitag.rst
index e918859..134ba19 100644
--- a/source/docs/apriltag-pipelines/multitag.rst
+++ b/source/docs/apriltag-pipelines/multitag.rst
@@ -1,7 +1,7 @@
 MultiTag Localization
 =====================
 
-PhotonVision can combine AprilTag detections from multiple simultaniously observed AprilTags from a particular camera wih information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your RoboRio. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.
+PhotonVision can combine detections of multiple simultaneously observed AprilTags from a particular camera with information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your roboRIO. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.
 
 .. warning:: MultiTag requires an accurate field layout JSON be uploaded! Differences between this layout and tag's physical location will drive error in the estimated pose output.
 
diff --git a/source/docs/objectDetection/about-object-detection.rst b/source/docs/objectDetection/about-object-detection.rst
index 7385060..0b4fcc8 100644
--- a/source/docs/objectDetection/about-object-detection.rst
+++ b/source/docs/objectDetection/about-object-detection.rst
@@ -15,6 +15,10 @@ Tracking Objects
 
 Before you get started with object detection, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and swtich to the “Object Detection” type. You should see a screen similar to the image above.
 
+PhotonVision currently ships with a NOTE detector based on a `YOLOv5 model `_. This model is trained to detect one or more object "classes" (such as cars, stoplights, or in our case, NOTES) in an input image. For each detected object, the model outputs a bounding box around where in the image the object is located, what class the object belongs to, and a unitless confidence between 0 and 1.
+
+.. note:: This model output means that while it's fairly easy to say that "this rectangle probably contains a NOTE", we don't have any information about the NOTE's orientation or location. Further math in user code would be required to make estimates about where an object is physically located relative to the camera.
+
 Tuning and Filtering
 ^^^^^^^^^^^^^^^^^^^^
 
@@ -37,7 +41,7 @@ Coming soon!
 Uploading Custom Models
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-.. warning:: PhotonVision currently ONLY supports YOLOV5 models trained and converted to ``.rknn`` format for RK3588 CPUs! Other models require different post-processing code and will NOT work. The model conversion process is also highly particular. Proceed with care.
+.. warning:: PhotonVision currently ONLY supports YOLOv5 models trained and converted to ``.rknn`` format for RK3588 CPUs! Other models require different post-processing code and will NOT work. The model conversion process is also highly particular. Proceed with care.
 
 Our `pre-trained NOTE model `_ is automatically extracted from the JAR when PhotonVision starts, only if a file named “note-640-640-yolov5s.rknn” and "labels.txt" does not exist in the folder ``photonvision_config/models/``. This technically allows power users to replace the model and label files with new ones without rebuilding Photon from source and uploading a new JAR.
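
For context on the multi-target result described in multitag.rst, below is a minimal robot-code sketch of how it might be consumed through PhotonLib. It assumes the 2024 PhotonLib API (``getMultiTagResult()`` and the ``estimatedPose`` field with its ``isPresent``/``best`` members); the camera name and mounting transform are placeholder values, so verify the exact accessor names against the PhotonLib javadocs before relying on them.

    import edu.wpi.first.math.geometry.Pose3d;
    import edu.wpi.first.math.geometry.Transform3d;
    import org.photonvision.PhotonCamera;
    import org.photonvision.targeting.MultiTargetPNPResult;
    import org.photonvision.targeting.PhotonPipelineResult;

    public class MultiTagSketch {
        // Hypothetical camera name and robot-to-camera mounting transform.
        private final PhotonCamera camera = new PhotonCamera("apriltagCamera");
        private final Transform3d robotToCamera = new Transform3d();

        public void periodic() {
            PhotonPipelineResult result = camera.getLatestResult();

            // The coprocessor-computed multi-tag solve rides along in the normal pipeline result.
            // Field/accessor names follow the 2024 PhotonLib API and may differ in other versions.
            MultiTargetPNPResult multiTag = result.getMultiTagResult();
            if (multiTag.estimatedPose.isPresent) {
                Transform3d fieldToCamera = multiTag.estimatedPose.best;
                // Chain field->camera with camera->robot to recover the robot's field pose.
                Pose3d robotPose = new Pose3d().plus(fieldToCamera).plus(robotToCamera.inverse());
                // Feed robotPose (with the result's timestamp) into your drivetrain pose estimator.
            }
        }
    }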
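
The new object detection paragraph notes that the detector only reports a bounding box, class, and confidence, and that further math in user code is needed to estimate a physical position. A minimal sketch of that math follows, using a flat-floor approximation via PhotonLib's ``PhotonUtils.calculateDistanceToTargetMeters``; the object-detection accessors (``getDetectedObjectClassID()``, ``getDetectedObjectConfidence()``) and the mounting constants are assumptions rather than anything confirmed by this patch.

    import org.photonvision.PhotonCamera;
    import org.photonvision.PhotonUtils;
    import org.photonvision.targeting.PhotonPipelineResult;
    import org.photonvision.targeting.PhotonTrackedTarget;

    public class NoteTrackingSketch {
        // Hypothetical mounting geometry -- measure these on the real robot.
        private static final double CAMERA_HEIGHT_METERS = 0.50;
        private static final double NOTE_HEIGHT_METERS = 0.0;   // NOTES lie flat on the carpet
        private static final double CAMERA_PITCH_RADIANS = Math.toRadians(-20.0);

        private final PhotonCamera camera = new PhotonCamera("objectDetectionCamera");

        public void periodic() {
            PhotonPipelineResult result = camera.getLatestResult();
            if (!result.hasTargets()) {
                return;
            }
            PhotonTrackedTarget best = result.getBestTarget();

            // The detector only reports a bounding box, a class index, and a confidence.
            // These two accessor names are assumptions -- check the PhotonLib javadocs.
            int classId = best.getDetectedObjectClassID();
            double confidence = best.getDetectedObjectConfidence();

            // "Further math in user code": assuming the NOTE sits on the floor, the target's
            // pitch plus the camera mounting geometry give a rough range estimate.
            double distanceMeters = PhotonUtils.calculateDistanceToTargetMeters(
                CAMERA_HEIGHT_METERS, NOTE_HEIGHT_METERS, CAMERA_PITCH_RADIANS,
                Math.toRadians(best.getPitch()));

            System.out.printf("class %d, confidence %.2f, ~%.2f m away, yaw %.1f deg%n",
                classId, confidence, distanceMeters, best.getYaw());
        }
    }

A more precise estimate could project the bounding-box corners through the camera's calibration, but the flat-floor approximation is usually enough for driving toward a game piece.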