This repository has been archived by the owner on Jul 16, 2024. It is now read-only.

Commit

Merge branch 'master' into LL3-config
DeltaDizzy authored Feb 24, 2024
2 parents f8f3f68 + f1b7c77 commit d77010b
Showing 4 changed files with 7 additions and 7 deletions.
2 changes: 1 addition & 1 deletion source/docs/apriltag-pipelines/multitag.rst
@@ -1,7 +1,7 @@
MultiTag Localization
=====================

PhotonVision can combine AprilTag detections from multiple simultaniously observed AprilTags from a particular camera wih information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your RoboRio. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.
PhotonVision can combine AprilTag detections from multiple simultaneously observed AprilTags from a particular camera with information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your roboRIO. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.

.. warning:: MultiTag requires an accurate field layout JSON be uploaded! Differences between this layout and the tags' physical locations will drive error in the estimated pose output.
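The idea can be illustrated with a deliberately simplified 2D sketch. This is not PhotonVision's actual solver (which performs a full multi-tag PnP on the coprocessor), and the tag layout and observations below are made up: each observed tag whose field position is known implies a camera position, and combining several observations constrains the estimate.

```python
# Illustrative 2D sketch only -- not PhotonVision's real multi-tag PnP solver.
# Each tag's known field position, minus its observed position relative to the
# camera (in field-aligned coordinates), implies a camera position; averaging
# the per-tag implied positions combines the observations.
known_tag_positions = {7: (10.0, 5.0), 8: (10.0, 1.0)}  # field frame, meters (made-up layout)
observed_offsets = {7: (2.0, 0.0), 8: (2.0, -4.0)}      # tag relative to camera (made-up data)

estimates = [
    (tx - ox, ty - oy)
    for tag_id, (tx, ty) in known_tag_positions.items()
    for (ox, oy) in [observed_offsets[tag_id]]
]
camera_x = sum(x for x, _ in estimates) / len(estimates)
camera_y = sum(y for _, y in estimates) / len(estimates)
print((camera_x, camera_y))  # both tags agree: camera at (8.0, 5.0)
```

With a real camera, errors in the layout JSON shift each tag's implied camera position independently, which is why an accurate layout matters.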

@@ -12,7 +12,7 @@ Installing Python Dependencies
------------------------------
You must install a set of Python dependencies in order to build the documentation. To do so, you can run the following command in the root project directory:

``pip install -r requirements.txt``
``python -m pip install -r requirements.txt``

Building the Documentation
--------------------------
4 changes: 2 additions & 2 deletions source/docs/programming/photonlib/getting-target-data.rst
@@ -24,8 +24,8 @@ The ``PhotonCamera`` class has two constructors: one that takes a ``NetworkTable

.. code-block:: python

    # Change this to match the name of your camera
    self.camera = PhotonCamera("photonvision")

    # Change this to match the name of your camera as shown in the web ui
    self.camera = PhotonCamera("your_camera_name_here")

.. warning:: Teams must have unique names for all of their cameras regardless of which coprocessor they are attached to.
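Once constructed, the camera object is typically polled for its latest pipeline result each loop. The sketch below shows that pattern; the method names (``getLatestResult``, ``hasTargets``) follow photonlibpy's ``PhotonCamera`` but should be checked against the installed version, and the minimal stand-in classes are hypothetical so the snippet runs without robot hardware:

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins so this sketch runs without photonlibpy or a robot;
# on a real robot you would import PhotonCamera from photonlibpy instead.
@dataclass
class _Target:
    yaw: float  # degrees, camera-relative

@dataclass
class _Result:
    targets: list = field(default_factory=list)

    def hasTargets(self) -> bool:
        return bool(self.targets)

class PhotonCamera:
    def __init__(self, name: str):
        # Must match the camera name shown in the PhotonVision web UI
        self.name = name

    def getLatestResult(self) -> _Result:
        return _Result([_Target(yaw=3.0)])  # canned data for this sketch

camera = PhotonCamera("your_camera_name_here")
result = camera.getLatestResult()
if result.hasTargets():
    best = result.targets[0]
    print(best.yaw)  # prints 3.0 with the canned data above
```

Guarding on ``hasTargets()`` before reading targets mirrors the usual PhotonLib pattern, since a result with no detections carries an empty target list.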
6 changes: 3 additions & 3 deletions source/index.rst
@@ -1,7 +1,7 @@
.. image:: assets/PhotonVision-Header-onWhite.png
:alt: PhotonVision

Welcome to the official documentation of PhotonVision! PhotonVision is the free, fast, and easy-to-use vision processing solution for the *FIRST*\ Robotics Competition. PhotonVision is designed to get vision working on your robot *quickly*, without the significant cost of other similar solutions. PhotonVision supports a variety of COTS hardware, including the Raspberry Pi 3 and 4, the `Gloworm smart camera <https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm>`_, and the `SnakeEyes Pi hat <https://www.playingwithfusion.com/productview.php?pdid=133>`_.
Welcome to the official documentation of PhotonVision! PhotonVision is the free, fast, and easy-to-use vision processing solution for the *FIRST*\ Robotics Competition. PhotonVision is designed to get vision working on your robot *quickly*, without the significant cost of other similar solutions. PhotonVision supports a variety of COTS hardware, including the Raspberry Pi 3 and 4, the `Gloworm smart camera <https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm>`_, the `SnakeEyes Pi hat <https://www.playingwithfusion.com/productview.php?pdid=133>`_, and the Orange Pi 5.

Content
-------
@@ -32,15 +32,15 @@ Content
:link: docs/examples/index
:link-type: doc

View various step by step guides on how to use data from PhotonVision in your code, along with a game-specific example.
View various step by step guides on how to use data from PhotonVision in your code, along with game-specific examples.

.. grid:: 2

.. grid-item-card:: Hardware
:link: docs/hardware/index
:link-type: doc

Select appropriate hardware for high-quality, easy vision target detection.
Select appropriate hardware for high-quality and easy vision target detection.

.. grid-item-card:: Contributing
:link: docs/contributing/index
