Fixed a few small spelling + grammar mistakes (#351)
* fix a few spelling + grammar mistakes

* fix grammar mistakes in multitag.rst

* fix grammar issue in object-detection.rst

* Fix spelling issues in 3D-tracking.rst

* fix a ton more spelling issues
gautvm authored Apr 3, 2024
1 parent 5ad5e32 commit a5988b4
Showing 8 changed files with 23 additions and 23 deletions.
2 changes: 1 addition & 1 deletion source/docs/apriltag-pipelines/2D-tracking-tuning.rst
@@ -4,7 +4,7 @@
Tracking Apriltags
------------------

-Before you get started tracking AprilTags, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and swtich to the "AprilTag" or "Aruco" type. You should see a screen similar to the one below.
+Before you get started tracking AprilTags, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and switch to the "AprilTag" or "Aruco" type. You should see a screen similar to the one below.

.. image:: images/apriltag.png
:align: center
6 changes: 3 additions & 3 deletions source/docs/apriltag-pipelines/3D-tracking.rst
@@ -8,8 +8,8 @@ Ambiguity

Translating from 2D to 3D using data from the calibration and the four tag corners can lead to "pose ambiguity", where it appears that the AprilTag pose is flipping between two different poses. You can read more about this issue `here. <https://docs.wpilib.org/en/stable/docs/software/vision-processing/apriltag/apriltag-intro.html#d-to-3d-ambiguity>` Ambiguity is calculated as the ratio of reprojection errors between two pose solutions (if they exist), where reprojection error is the error corresponding to the image distance between where the apriltag's corners are detected vs where we expect to see them based on the tag's estimated camera relative pose.

-There a few steps you can take to resolve/mitigate this issue:
+There are a few steps you can take to resolve/mitigate this issue:

-1. Mount cameras at oblique angles so it is less likely that the tag will be seen straght on.
+1. Mount cameras at oblique angles so it is less likely that the tag will be seen straight on.
2. Use the :ref:`MultiTag system <docs/apriltag-pipelines/multitag:MultiTag Localization>` in order to combine the corners from multiple tags to get a more accurate and unambiguous pose.
-3. Reject all tag poses where the ambiguity ratio (availiable via PhotonLib) is greater than 0.2.
+3. Reject all tag poses where the ambiguity ratio (available via PhotonLib) is greater than 0.2.
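
For reference, the ambiguity ratio described above can be sketched from its definition (lower is better; the numerator is the pose solution with the smaller reprojection error):

.. math::

   \text{ambiguity} = \frac{e_{\text{reproj}}(\text{pose}_{\text{best}})}{e_{\text{reproj}}(\text{pose}_{\text{alt}})}

And for step 3, a minimal PhotonLib sketch of rejecting ambiguous tag poses in robot code (Java; the camera name is a placeholder, and the surrounding robot-code structure is illustrative):

.. code-block:: java

   import org.photonvision.PhotonCamera;
   import org.photonvision.targeting.PhotonTrackedTarget;
   import edu.wpi.first.math.geometry.Transform3d;

   public class AmbiguityFilterExample {
       // "YOUR_CAMERA_NAME" is a placeholder -- use the name set in the Web UI
       private final PhotonCamera camera = new PhotonCamera("YOUR_CAMERA_NAME");

       public void periodic() {
           var result = camera.getLatestResult();
           for (PhotonTrackedTarget target : result.getTargets()) {
               double ambiguity = target.getPoseAmbiguity();
               // getPoseAmbiguity() returns -1 when no ambiguity was calculated;
               // reject that case and anything above the 0.2 threshold from step 3.
               if (ambiguity < 0 || ambiguity > 0.2) {
                   continue;
               }
               Transform3d camToTag = target.getBestCameraToTarget();
               // ... feed camToTag into your pose estimator ...
           }
       }
   }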
8 changes: 4 additions & 4 deletions source/docs/apriltag-pipelines/multitag.rst
@@ -1,14 +1,14 @@
MultiTag Localization
=====================

-PhotonVision can combine AprilTag detections from multiple simultaniously observed AprilTags from a particular camera with information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your RoboRio. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.
+PhotonVision can combine AprilTag detections from multiple simultaneously observed AprilTags from a particular camera with information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your RoboRio. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.

-.. warning:: MultiTag requires an accurate field layout JSON be uploaded! Differences between this layout and tag's physical location will drive error in the estimated pose output.
+.. warning:: MultiTag requires an accurate field layout JSON to be uploaded! Differences between this layout and the tags' physical location will drive error in the estimated pose output.

Enabling MultiTag
^^^^^^^^^^^^^^^^^

-Ensure that your camera is calibrated and 3D mode is enabled. Navigate to the Output tab and enable "Do Multi-Target Estimation". This enables MultiTag using the uploaded field layout JSON to calculate your camera's pose in the field. This 3D transform will be shown as an additional table in the "targets" tab, along with the IDs of AprilTags used to compute this transform.
+Ensure that your camera is calibrated and 3D mode is enabled. Navigate to the Output tab and enable "Do Multi-Target Estimation". This enables MultiTag to use the uploaded field layout JSON to calculate your camera's pose in the field. This 3D transform will be shown as an additional table in the "targets" tab, along with the IDs of AprilTags used to compute this transform.

.. image:: images/multitag-ui.png
:width: 600
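
For context, consuming this multi-target estimate from robot code looks roughly like the following sketch (based on the 2024 PhotonLib Java API; the camera name is a placeholder):

.. code-block:: java

   import org.photonvision.PhotonCamera;
   import edu.wpi.first.math.geometry.Transform3d;

   public class MultiTagExample {
       // "YOUR_CAMERA_NAME" is a placeholder -- use the name set in the Web UI
       private final PhotonCamera camera = new PhotonCamera("YOUR_CAMERA_NAME");

       public void periodic() {
           var result = camera.getLatestResult();
           // The combined estimate is only present when MultiTag produced a solution
           if (result.getMultiTagResult().estimatedPose.isPresent) {
               // Transform from the field origin to the camera, solved on the coprocessor
               Transform3d fieldToCamera = result.getMultiTagResult().estimatedPose.best;
               // ... feed fieldToCamera into your pose estimator ...
           }
       }
   }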
@@ -48,6 +48,6 @@ PhotonVision ships by default with the `2024 field layout JSON <https://github.c
:width: 600
:alt: The currently saved field layout in the Photon UI

-An updated field layout can be uploaded by navigating to the "Device Control" card of the Settings tab and clicking "Import Settings". In the pop-up dialog, select the "Apriltag Layout" type and choose a updated layout JSON (in the same format as the WPILib field layout JSON linked above) using the paperclip icon, and select "Import Settings". The AprilTag layout in the "AprilTag Field Layout" card below should update to reflect the new layout.
+An updated field layout can be uploaded by navigating to the "Device Control" card of the Settings tab and clicking "Import Settings". In the pop-up dialog, select the "AprilTag Layout" type and choose an updated layout JSON (in the same format as the WPILib field layout JSON linked above) using the paperclip icon, and select "Import Settings". The AprilTag layout in the "AprilTag Field Layout" card below should be updated to reflect the new layout.

.. note:: Currently, there is no way to update this layout using PhotonLib, although this feature is under consideration.
4 changes: 2 additions & 2 deletions source/docs/contributing/photonvision/build-instructions.rst
@@ -23,7 +23,7 @@ Get the source code from git:
git clone https://github.com/PhotonVision/photonvision
-or alternatively download to source code from github and extract the zip:
+or alternatively download the source code from github and extract the zip:

.. image:: assets/git-download.png
:width: 600
@@ -96,7 +96,7 @@ Running the following command under the root directory will build the jar under
Build and Run PhotonVision on a Raspberry Pi Coprocessor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-As a convinenece, the build has built in `deploy` command which builds, deploys, and starts the current source code on a coprocessor.
+As a convenience, the build has a built-in `deploy` command which builds, deploys, and starts the current source code on a coprocessor.

An architecture override is required to specify the deploy target's architecture.

18 changes: 9 additions & 9 deletions source/docs/hardware/selecting-hardware.rst
@@ -10,7 +10,7 @@ Minimum System Requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^

* Ubuntu 22.04 LTS or Windows 10/11
-* We don't reccomend using Windows for anything except testing out the system on a local machine.
+* We don't recommend using Windows for anything except testing out the system on a local machine.
* CPU: ARM Cortex-A53 (the CPU on Raspberry Pi 3) or better
* At least 8GB of storage
* 2GB of RAM
@@ -20,7 +20,7 @@ Minimum System Requirements
* Note that we only support using the Raspberry Pi's MIPI-CSI port, other MIPI-CSI ports from other coprocessors may not work.
* Ethernet port for networking

-Coprocessor Reccomendations
+Coprocessor Recommendations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When selecting a coprocessor, it is important to consider various factors, particularly when it comes to AprilTag detection. Opting for a coprocessor with a more powerful CPU can generally result in higher FPS AprilTag detection, leading to more accurate pose estimation. However, it is important to note that there is a point of diminishing returns, where the benefits of a more powerful CPU may not outweigh the additional cost. Below is a list of supported hardware, along with some notes on each.
@@ -30,7 +30,7 @@ When selecting a coprocessor, it is important to consider various factors, parti
* Raspberry Pi 4/5 ($55-$80)
* This is the recommended coprocessor for teams on a budget. It has a less powerful CPU than the Orange Pi 5, but is still capable of running PhotonVision at a reasonable FPS.
* Mini PCs (such as Beelink N5095)
-* This coprcoessor will likely have similar performance to the Orange Pi 5 but has a higher performance ceiling (when using more powerful CPUs). Do note that this would require extra effort to wire to the robot / get set up. More information can be found in the set up guide `here. <https://docs.google.com/document/d/1lOSzG8iNE43cK-PgJDDzbwtf6ASyf4vbW8lQuFswxzw/edit?usp=drivesdk>`_
+* This coprocessor will likely have similar performance to the Orange Pi 5 but has a higher performance ceiling (when using more powerful CPUs). Do note that this would require extra effort to wire to the robot / get set up. More information can be found in the set up guide `here. <https://docs.google.com/document/d/1lOSzG8iNE43cK-PgJDDzbwtf6ASyf4vbW8lQuFswxzw/edit?usp=drivesdk>`_
* Other coprocessors can be used but may require some extra work / command line usage in order to get it working properly.

Choosing a Camera
@@ -46,17 +46,17 @@ PhotonVision relies on `CSCore <https://github.com/wpilibsuite/allwpilib/tree/ma
.. note::
We do not currently support the usage of two of the same camera on the same coprocessor. You can only use two or more cameras if they are of different models or they are from Arducam, which has a `tool that allows for cameras to be renamed <https://docs.arducam.com/UVC-Camera/Serial-Number-Tool-Guide/>`_.

-Reccomended Cameras
+Recommended Cameras
^^^^^^^^^^^^^^^^^^^
-For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We reccomend the Pi Camera V1 or a high fps USB camera.
+For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We recommend the Pi Camera V1 or a high fps USB camera.

-For driver camera, we reccomend a USB camera with a fisheye lens, so your driver can see more of the field.
+For driver camera, we recommend a USB camera with a fisheye lens, so your driver can see more of the field.

-For AprilTag detection, we reccomend you use a global shutter camera that has ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency.
+For AprilTag detection, we recommend you use a global shutter camera that has ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency.

-* Reccomendations For AprilTag Detection
+* Recommendations For AprilTag Detection
* Arducam USB OV9281
-* This is the reccomended camera for AprilTag detection as it is a high FPS, global shutter camera USB camera that has a ~70 degree FOV.
+* This is the recommended camera for AprilTag detection as it is a high FPS, global shutter camera USB camera that has a ~70 degree FOV.
* Innomaker OV9281
* Spinel AR0144
* Pi Camera Module V1
2 changes: 1 addition & 1 deletion source/docs/installation/index.rst
@@ -7,7 +7,7 @@ This page will help you install PhotonVision on your coprocessor, wire it, and p
Step 1: Software Install
------------------------

-This section will walk you through how to install PhotonVision on your coprcoessor. Your coprocessor is the device that has the camera and you are using to detect targets (ex. if you are using a Limelight / Raspberry Pi, that is your coprocessor and you should follow those instructions).
+This section will walk you through how to install PhotonVision on your coprocessor. Your coprocessor is the device that has the camera and you are using to detect targets (ex. if you are using a Limelight / Raspberry Pi, that is your coprocessor and you should follow those instructions).

.. warning:: You only need to install PhotonVision on the coprocessor/device that is being used to detect targets, you do NOT need to install it on the device you use to view the webdashboard. All you need to view the webdashboard is for a device to be on the same network as your vision coprocessor and an internet browser.

4 changes: 2 additions & 2 deletions source/docs/objectDetection/about-object-detection.rst
@@ -13,11 +13,11 @@ For the 2024 season, PhotonVision ships with a **pre-trained NOTE detector** (sh
Tracking Objects
^^^^^^^^^^^^^^^^

-Before you get started with object detection, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and switch to the “Object Detection” type. You should see a screen similar to the image above.
+Before you get started with object detection, ensure that you have followed the previous sections on installation, wiring, and networking. Next, open the Web UI, go to the top right card, and switch to the “Object Detection” type. You should see a screen similar to the image above.

PhotonVision currently ships with a NOTE detector based on a `YOLOv5 model <https://docs.ultralytics.com/yolov5/>`_. This model is trained to detect one or more object "classes" (such as cars, stoplights, or in our case, NOTES) in an input image. For each detected object, the model outputs a bounding box around where in the image the object is located, what class the object belongs to, and a unitless confidence between 0 and 1.

-.. note:: This model output means that while its fairly easy to say that "this rectangle probably contains a NOTE", we doesn't have any information about the NOTE's orientation or location. Further math in user code would be required to make estimates about where an object is physically located relative to the camera.
+.. note:: This model output means that while its fairly easy to say that "this rectangle probably contains a NOTE", we don't have any information about the NOTE's orientation or location. Further math in user code would be required to make estimates about where an object is physically located relative to the camera.
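
As a sketch of the "further math" mentioned in that note, user code can estimate distance to a detected object from known camera geometry, e.g. with PhotonLib's ``PhotonUtils`` (Java; the camera name and mounting numbers are illustrative assumptions, and a NOTE lying on the carpet is assumed to sit at floor height):

.. code-block:: java

   import org.photonvision.PhotonCamera;
   import org.photonvision.PhotonUtils;
   import edu.wpi.first.math.util.Units;

   public class NoteRangeExample {
       // Placeholder name and geometry -- measure these on your own robot
       private final PhotonCamera camera = new PhotonCamera("YOUR_CAMERA_NAME");
       private static final double CAMERA_HEIGHT_METERS = 0.5;  // lens height off the floor
       private static final double NOTE_HEIGHT_METERS = 0.0;    // NOTE lies on the carpet
       private static final double CAMERA_PITCH_RADIANS = Units.degreesToRadians(-15.0);

       public void periodic() {
           var result = camera.getLatestResult();
           if (result.hasTargets()) {
               // Estimate floor distance to the best detection from its pitch angle
               double rangeMeters = PhotonUtils.calculateDistanceToTargetMeters(
                   CAMERA_HEIGHT_METERS,
                   NOTE_HEIGHT_METERS,
                   CAMERA_PITCH_RADIANS,
                   Units.degreesToRadians(result.getBestTarget().getPitch()));
               // ... use rangeMeters in your targeting logic ...
           }
       }
   }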

Tuning and Filtering
^^^^^^^^^^^^^^^^^^^^
Expand Down
2 changes: 1 addition & 1 deletion source/docs/troubleshooting/common-errors.rst
Original file line number Diff line number Diff line change
Expand Up @@ -31,7 +31,7 @@ Camera won't show up
^^^^^^^^^^^^^^^^^^^^
Try these steps to :ref:`troubleshoot your camera connection <docs/troubleshooting/camera-troubleshooting:Camera Troubleshooting>`.

-If you are using a USB camera, it is possible your USB Camera isn't supported by CSCore and therefore won't work with PhotonVision. See :ref:`supported hardware page for more information <docs/hardware/selecting-hardware:Reccomended Cameras>`, or the above Camera Troubleshooting page for more information on determining this locally.
+If you are using a USB camera, it is possible your USB Camera isn't supported by CSCore and therefore won't work with PhotonVision. See :ref:`supported hardware page for more information <docs/hardware/selecting-hardware:Recommended Cameras>`, or the above Camera Troubleshooting page for more information on determining this locally.

Camera is consistently returning incorrect values when in 3D mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^