From a5988b40e517ecb6e252eab0cbac4c34469024fd Mon Sep 17 00:00:00 2001
From: Gautam
Date: Tue, 2 Apr 2024 18:50:25 -0700
Subject: [PATCH] Fixed a few small spelling + grammar mistakes (#351)

* fix a few spelling + grammar mistakes

* fix grammar mistakes in multitag.rst

* fix grammar issue in-object-detection.rst

* Fix spelling issues in 3D-tracking.rst

* fix a ton more spelling issues

---
 .../apriltag-pipelines/2D-tracking-tuning.rst    |  2 +-
 source/docs/apriltag-pipelines/3D-tracking.rst   |  6 +++---
 source/docs/apriltag-pipelines/multitag.rst      |  8 ++++----
 .../photonvision/build-instructions.rst          |  4 ++--
 source/docs/hardware/selecting-hardware.rst      | 18 +++++++++---------
 source/docs/installation/index.rst               |  2 +-
 .../objectDetection/about-object-detection.rst   |  4 ++--
 source/docs/troubleshooting/common-errors.rst    |  2 +-
 8 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/source/docs/apriltag-pipelines/2D-tracking-tuning.rst b/source/docs/apriltag-pipelines/2D-tracking-tuning.rst
index 5dcd0f74..b8ee6441 100644
--- a/source/docs/apriltag-pipelines/2D-tracking-tuning.rst
+++ b/source/docs/apriltag-pipelines/2D-tracking-tuning.rst
@@ -4,7 +4,7 @@ Tracking Apriltags
 ------------------

-Before you get started tracking AprilTags, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and swtich to the "AprilTag" or "Aruco" type. You should see a screen similar to the one below.
+Before you get started tracking AprilTags, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and switch to the "AprilTag" or "Aruco" type. You should see a screen similar to the one below.

 .. image:: images/apriltag.png
    :align: center

diff --git a/source/docs/apriltag-pipelines/3D-tracking.rst b/source/docs/apriltag-pipelines/3D-tracking.rst
index 8f1a99ee..4e06dd00 100644
--- a/source/docs/apriltag-pipelines/3D-tracking.rst
+++ b/source/docs/apriltag-pipelines/3D-tracking.rst
@@ -8,8 +8,8 @@ Ambiguity

 Translating from 2D to 3D using data from the calibration and the four tag corners can lead to "pose ambiguity", where it appears that the AprilTag pose is flipping between two different poses. You can read more about this issue `here. ` Ambiguity is calculated as the ratio of reprojection errors between two pose solutions (if they exist), where reprojection error is the error corresponding to the image distance between where the apriltag's corners are detected vs where we expect to see them based on the tag's estimated camera relative pose.

-There a few steps you can take to resolve/mitigate this issue:
+There are a few steps you can take to resolve/mitigate this issue:

-1. Mount cameras at oblique angles so it is less likely that the tag will be seen straght on.
+1. Mount cameras at oblique angles so it is less likely that the tag will be seen straight on.
 2. Use the :ref:`MultiTag system ` in order to combine the corners from multiple tags to get a more accurate and unambiguous pose.
-3. Reject all tag poses where the ambiguity ratio (availiable via PhotonLib) is greater than 0.2.
+3. Reject all tag poses where the ambiguity ratio (available via PhotonLib) is greater than 0.2.
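As a rough illustration of step 3 in the hunk above, the ambiguity ratio can be read per target in robot code and compared against the 0.2 cutoff. This is a minimal sketch using PhotonLib's Java API (``PhotonCamera.getLatestResult()`` and ``PhotonTrackedTarget.getPoseAmbiguity()``); the camera name "frontCamera" and the surrounding class are hypothetical.

.. code-block:: java

   import org.photonvision.PhotonCamera;
   import org.photonvision.targeting.PhotonTrackedTarget;
   import edu.wpi.first.math.geometry.Transform3d;

   public class AmbiguityFilterExample {
       // "frontCamera" is a placeholder; use the camera name configured in the UI
       private final PhotonCamera camera = new PhotonCamera("frontCamera");

       public void filterTags() {
           var result = camera.getLatestResult();
           for (PhotonTrackedTarget target : result.getTargets()) {
               // Reject poses where the two PnP solutions are too similar to trust
               if (target.getPoseAmbiguity() > 0.2) continue;
               Transform3d camToTag = target.getBestCameraToTarget();
               // camToTag is now reasonably safe to feed into pose estimation
           }
       }
   }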
diff --git a/source/docs/apriltag-pipelines/multitag.rst b/source/docs/apriltag-pipelines/multitag.rst
index 0d64a522..1b03a664 100644
--- a/source/docs/apriltag-pipelines/multitag.rst
+++ b/source/docs/apriltag-pipelines/multitag.rst
@@ -1,14 +1,14 @@
 MultiTag Localization
 =====================

-PhotonVision can combine AprilTag detections from multiple simultaniously observed AprilTags from a particular camera with information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your RoboRio. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.
+PhotonVision can combine AprilTag detections from multiple simultaneously observed AprilTags from a particular camera with information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your RoboRio. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.

-.. warning:: MultiTag requires an accurate field layout JSON be uploaded! Differences between this layout and tag's physical location will drive error in the estimated pose output.
+.. warning:: MultiTag requires an accurate field layout JSON to be uploaded! Differences between this layout and the tags' physical location will drive error in the estimated pose output.

 Enabling MultiTag
 ^^^^^^^^^^^^^^^^^

-Ensure that your camera is calibrated and 3D mode is enabled. Navigate to the Output tab and enable "Do Multi-Target Estimation". This enables MultiTag using the uploaded field layout JSON to calculate your camera's pose in the field. This 3D transform will be shown as an additional table in the "targets" tab, along with the IDs of AprilTags used to compute this transform.
+Ensure that your camera is calibrated and 3D mode is enabled. Navigate to the Output tab and enable "Do Multi-Target Estimation". This enables MultiTag to use the uploaded field layout JSON to calculate your camera's pose in the field. This 3D transform will be shown as an additional table in the "targets" tab, along with the IDs of AprilTags used to compute this transform.

 .. image:: images/multitag-ui.png
    :width: 600
@@ -48,6 +48,6 @@ PhotonVision ships by default with the `2024 field layout JSON `_
+ * This coprocessor will likely have similar performance to the Orange Pi 5 but has a higher performance ceiling (when using more powerful CPUs). Do note that this would require extra effort to wire to the robot / get set up. More information can be found in the set up guide `here. `_
 * Other coprocessors can be used but may require some extra work / command line usage in order to get it working properly.

 Choosing a Camera
@@ -46,17 +46,17 @@ PhotonVision relies on `CSCore `_.
-Reccomended Cameras
+Recommended Cameras
 ^^^^^^^^^^^^^^^^^^^

-For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We reccomend the Pi Camera V1 or a high fps USB camera.
+For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We recommend the Pi Camera V1 or a high fps USB camera.
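Tying back to the multitag.rst hunk above: a minimal sketch of consuming the coprocessor-computed multi-target estimate from the ``PhotonPipelineResult``, assuming the 2024 PhotonLib Java API where ``getMultiTagResult()`` exposes a field-to-camera transform; the camera name is a placeholder.

.. code-block:: java

   import org.photonvision.PhotonCamera;
   import edu.wpi.first.math.geometry.Transform3d;

   public class MultiTagExample {
       // Placeholder name; match it to the camera configured in the UI
       private final PhotonCamera camera = new PhotonCamera("frontCamera");

       public void readFieldToCamera() {
           var result = camera.getLatestResult();
           // The multi-target estimate is only present when MultiTag is enabled
           // and at least two known tags were seen this frame
           if (result.getMultiTagResult().estimatedPose.isPresent) {
               Transform3d fieldToCamera = result.getMultiTagResult().estimatedPose.best;
               // Compose with your robot-to-camera transform to get a field pose
           }
       }
   }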
-For driver camera, we reccomend a USB camera with a fisheye lens, so your driver can see more of the field.
+For a driver camera, we recommend a USB camera with a fisheye lens, so your driver can see more of the field.

-For AprilTag detection, we reccomend you use a global shutter camera that has ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency.
+For AprilTag detection, we recommend you use a global shutter camera that has a ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency.

-* Reccomendations For AprilTag Detection
+* Recommendations For AprilTag Detection
  * Arducam USB OV9281
-  * This is the reccomended camera for AprilTag detection as it is a high FPS, global shutter camera USB camera that has a ~70 degree FOV.
+  * This is the recommended camera for AprilTag detection as it is a high FPS, global shutter USB camera that has a ~70 degree FOV.
  * Innomaker OV9281
  * Spinel AR0144
  * Pi Camera Module V1

diff --git a/source/docs/installation/index.rst b/source/docs/installation/index.rst
index 65614dce..3039ceb3 100644
--- a/source/docs/installation/index.rst
+++ b/source/docs/installation/index.rst
@@ -7,7 +7,7 @@ This page will help you install PhotonVision on your coprocessor, wire it, and p
 Step 1: Software Install
 ------------------------

-This section will walk you through how to install PhotonVision on your coprcoessor. Your coprocessor is the device that has the camera and you are using to detect targets (ex. if you are using a Limelight / Raspberry Pi, that is your coprocessor and you should follow those instructions).
+This section will walk you through how to install PhotonVision on your coprocessor. Your coprocessor is the device that has the camera and you are using to detect targets (ex. if you are using a Limelight / Raspberry Pi, that is your coprocessor and you should follow those instructions).

 .. warning:: You only need to install PhotonVision on the coprocessor/device that is being used to detect targets, you do NOT need to install it on the device you use to view the webdashboard. All you need to view the webdashboard is for a device to be on the same network as your vision coprocessor and an internet browser.

diff --git a/source/docs/objectDetection/about-object-detection.rst b/source/docs/objectDetection/about-object-detection.rst
index d4e26ede..f054c2c8 100644
--- a/source/docs/objectDetection/about-object-detection.rst
+++ b/source/docs/objectDetection/about-object-detection.rst
@@ -13,11 +13,11 @@ For the 2024 season, PhotonVision ships with a **pre-trained NOTE detector** (sh
 Tracking Objects
 ^^^^^^^^^^^^^^^^

-Before you get started with object detection, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and switch to the “Object Detection” type. You should see a screen similar to the image above.
+Before you get started with object detection, ensure that you have followed the previous sections on installation, wiring, and networking. Next, open the Web UI, go to the top right card, and switch to the “Object Detection” type. You should see a screen similar to the image above.
 PhotonVision currently ships with a NOTE detector based on a `YOLOv5 model `_. This model is trained to detect one or more object "classes" (such as cars, stoplights, or in our case, NOTES) in an input image. For each detected object, the model outputs a bounding box around where in the image the object is located, what class the object belongs to, and a unitless confidence between 0 and 1.

-.. note:: This model output means that while its fairly easy to say that "this rectangle probably contains a NOTE", we doesn't have any information about the NOTE's orientation or location. Further math in user code would be required to make estimates about where an object is physically located relative to the camera.
+.. note:: This model output means that while it's fairly easy to say that "this rectangle probably contains a NOTE", we don't have any information about the NOTE's orientation or location. Further math in user code would be required to make estimates about where an object is physically located relative to the camera.

 Tuning and Filtering
 ^^^^^^^^^^^^^^^^^^^^

diff --git a/source/docs/troubleshooting/common-errors.rst b/source/docs/troubleshooting/common-errors.rst
index 85a09e70..88550a56 100644
--- a/source/docs/troubleshooting/common-errors.rst
+++ b/source/docs/troubleshooting/common-errors.rst
@@ -31,7 +31,7 @@ Camera won't show up
 ^^^^^^^^^^^^^^^^^^^^
 Try these steps to :ref:`troubleshoot your camera connection `.

-If you are using a USB camera, it is possible your USB Camera isn't supported by CSCore and therefore won't work with PhotonVision. See :ref:`supported hardware page for more information `, or the above Camera Troubleshooting page for more information on determining this locally.
+If you are using a USB camera, it is possible your USB camera isn't supported by CSCore and therefore won't work with PhotonVision. See the :ref:`supported hardware page `, or the above Camera Troubleshooting page, for more information on determining this locally.

 Camera is consistently returning incorrect values when in 3D mode
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
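Picking up the about-object-detection.rst note above: the "further math in user code" starts from the per-target outputs PhotonLib exposes. This is a sketch under the assumption that the 2024 Java API provides ``getDetectedObjectClassID()`` and ``getDetectedObjectConfidence()`` on ``PhotonTrackedTarget``; the camera name is hypothetical.

.. code-block:: java

   import org.photonvision.PhotonCamera;
   import org.photonvision.targeting.PhotonTrackedTarget;

   public class NoteDetectionExample {
       // Placeholder name for the object detection camera
       private final PhotonCamera camera = new PhotonCamera("noteCamera");

       public void trackNotes() {
           var result = camera.getLatestResult();
           for (PhotonTrackedTarget target : result.getTargets()) {
               int classId = target.getDetectedObjectClassID();         // which class this box belongs to
               float confidence = target.getDetectedObjectConfidence(); // unitless, 0 to 1
               // The bounding box carries no range information; yaw/pitch are
               // the starting point for any distance math done in user code
               double yaw = target.getYaw();
               double pitch = target.getPitch();
           }
       }
   }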