diff --git a/source/_static/assets/simaimandrange.mp4 b/source/_static/assets/simaimandrange.mp4 index a4815408..12fcbfd8 100644 Binary files a/source/_static/assets/simaimandrange.mp4 and b/source/_static/assets/simaimandrange.mp4 differ diff --git a/source/_static/assets/swervedriveposeestsim.mp4 b/source/_static/assets/swervedriveposeestsim.mp4 new file mode 100644 index 00000000..eb13e0de Binary files /dev/null and b/source/_static/assets/swervedriveposeestsim.mp4 differ diff --git a/source/docs/contributing/photonvision/build-instructions.rst b/source/docs/contributing/photonvision/build-instructions.rst index 92265dff..9b0f5e1b 100644 --- a/source/docs/contributing/photonvision/build-instructions.rst +++ b/source/docs/contributing/photonvision/build-instructions.rst @@ -230,19 +230,11 @@ The program will wait for the VSCode debugger to attach before proceeding. Running examples ~~~~~~~~~~~~~~~~ -You can run one of the many built in examples straight from the command line, too! They contain a fully featured robot project, and some include simulation support. The projects can be found inside the photonlib-java-examples and photonlib-cpp-examples subdirectories, respectively. The projects currently available include: - -- photonlib-java-examples: - - aimandrange:simulateJava - - aimattarget:simulateJava - - getinrange:simulateJava - - simaimandrange:simulateJava - - simposeest:simulateJava -- photonlib-cpp-examples: - - aimandrange:simulateNative - - getinrange:simulateNative - -To run them, use the commands listed below. Photonlib must first be published to your local maven repository, then the copyPhotonlib task will copy the generated vendordep json file into each example. After that, the simulateJava/simulateNative task can be used like a normal robot project. Robot simulation with attached debugger is technically possible by using simulateExternalJava and modifying the launch script it exports, though unsupported. 
+You can run one of the many built-in examples straight from the command line, too! They contain a fully featured robot project, and some include simulation support. + +The Java and C++ examples can be found inside the `photonlib-java-examples `_ and `photonlib-cpp-examples `_ subdirectories of the photonvision repository, respectively. + +To run them, use the commands listed below. Photonlib must first be published to your local Maven repository, then the ``copyPhotonlib`` task will copy the generated vendordep JSON file into each example. After that, the ``simulateJava`` (Java) or ``simulateNative`` (C++) task can be used like a normal robot project. Robot simulation with an attached debugger is technically possible by using ``simulateExternalJava`` and modifying the launch script it exports, though this is unsupported. .. code-block:: diff --git a/source/docs/examples/index.rst b/source/docs/examples/index.rst index b7407489..5ff92dd0 100644 --- a/source/docs/examples/index.rst +++ b/source/docs/examples/index.rst @@ -8,4 +8,4 @@ Code Examples gettinginrangeofthetarget aimandrange simaimandrange - simposeest + swervedriveposeestsim diff --git a/source/docs/examples/simaimandrange.rst b/source/docs/examples/simaimandrange.rst index db20413a..bc6d6df5 100644 --- a/source/docs/examples/simaimandrange.rst +++ b/source/docs/examples/simaimandrange.rst @@ -1,94 +1,143 @@ Simulating Aiming and Getting in Range ====================================== -The following example comes from the PhotonLib example repository (`Java `_/`C++ `_). Full code is available at those links. +The following example comes from the PhotonLib example repository (`Java `_). Full code is available at that link. +.. raw:: html -Knowledge and Equipment Needed ----------------------------------------------- - -- Everything required in :ref:`Combining Aiming and Getting in Range `. +.. attention:: A C++ example does not currently exist. 
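A sketch of the command sequence the "Running examples" paragraph describes, under stated assumptions: ``copyPhotonlib``, ``simulateJava``, and ``simulateNative`` are the task names given in the text, while the publish task name and directory layout are illustrative guesses — check the repository's actual Gradle tasks before relying on them.

```shell
# Illustrative only: task/directory names other than copyPhotonlib and
# simulateJava/simulateNative are assumptions, not verified Gradle tasks.
cd photonvision                      # root of your photonvision checkout
./gradlew publishToMavenLocal        # publish photonlib locally (assumed task name)
cd photonlib-java-examples
./gradlew copyPhotonlib              # copy the generated vendordep JSON into each example
./gradlew aimandrange:simulateJava   # run one Java example in simulation
```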
Background ---------- -The previous examples show how to run PhotonVision on a real robot, with a physical robot drivetrain moving around and interacting with the software. - -This example builds upon that, adding support for simulating robot motion and incorporating that motion into a :code:`SimVisionSystem`. This allows you to test control algorithms on your development computer, without requiring access to a real robot. +The previous examples show how to use PhotonVision on a real robot, with the robot code making use of PhotonVision data published by a coprocessor to move a physical drivetrain. -.. raw:: html - - +This example showcases simulation support added to the previous :ref:`docs/examples/aimandrange:combining aiming and getting in range` example. This means both the physical drivetrain and PhotonVision data can be simulated on your development computer, and you can test your robot code without a real robot. See :ref:`docs/programming/photonlib/simulation:simulation support in photonlib` for more info on PhotonVision simulation. Walkthrough ----------- -First, in the main :code:`Robot` source file, we add support to periodically update a new simulation-specific object. This logic only gets used while running in simulation: +Defining used hardware +^^^^^^^^^^^^^^^^^^^^^^ + +Inheriting from the ``aimandrange`` example, we have some basic setup in our ``Robot`` class: .. tab-set-code:: - .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/ebef19af3d926cf87292177c9a16d01b71219306/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/Robot.java + .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/v2024.1.1-beta-1/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/Robot.java :language: java - :lines: 118-128 + :lines: 46-59 :linenos: - :lineno-start: 118 + :lineno-start: 46 -Then, we add in the implementation of our new `DrivetrainSim` class. 
Please reference the `WPILib documentation on physics simulation `_. +In the ``Robot`` class, we also add support to periodically update new simulation-specific objects. This logic only gets used while running in simulation, and is where we will handle simulating the field, robot, and camera: -Simulated Vision support is added with the following steps: +.. tab-set-code:: + + .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/v2024.1.1-beta-1/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/Robot.java + :language: java + :lines: 108-124 + :linenos: + :lineno-start: 108 -Creating the Simulated Vision System -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Simulating the Drivetrain +^^^^^^^^^^^^^^^^^^^^^^^^^ -First, we create a new :code:`SimVisionSystem` to represent our camera and coprocessor running PhotonVision. +We implement our new ``DrivetrainSim`` class so we can drive the robot in simulation. Please reference the `WPILib documentation on physics simulation `_. + +This drivetrain simulation is defined by the properties provided in the ``Constants`` class: + +.. tab-set-code:: + + .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/v2024.1.1-beta-1/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/Constants.java + :language: java + :lines: 73-90 + :linenos: + :lineno-start: 73 + +To put it simply, this class will take in the drivetrain inputs (the percentage outputs commanded to the left and right side motors of our differential drivetrain) and simulate the drivetrain dynamics, or how it should respond. .. tab-set-code:: - .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/ebef19af3d926cf87292177c9a16d01b71219306/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/sim/DrivetrainSim.java + .. 
rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/v2024.1.1-beta-1/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/sim/DrivetrainSim.java :language: java - :lines: 73-93 + :lines: 72-90 :linenos: :lineno-start: 72 -Next, we create objects to represent the physical location and size of the vision targets we are calibrated to detect. This example models the down-field high goal vision target from the 2020 and 2021 games. +Simulating the Vision System +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The ``VisionSim`` class will handle simulating the vision targets on the field and what our camera should see, as well as publishing data to NetworkTables to mimic an actual coprocessor running PhotonVision. For more information on PhotonVision simulation, see :ref:`docs/programming/photonlib/simulation:simulation support in photonlib`. + +This class revolves around a ``VisionSystemSim`` and ``PhotonCameraSim``. These handle simulating the field and camera data, respectively. .. tab-set-code:: - .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/ebef19af3d926cf87292177c9a16d01b71219306/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/sim/DrivetrainSim.java + .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/v2024.1.1-beta-1/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/sim/VisionSim.java :language: java - :lines: 95-111 + :lines: 77-80 :linenos: - :lineno-start: 95 + :lineno-start: 77 -Finally, we add our target to the simulated vision system. +We'll start by modeling the shape of the vision target we will put on the field (the 2020 High Goal target): .. tab-set-code:: - .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/ebef19af3d926cf87292177c9a16d01b71219306/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/sim/DrivetrainSim.java + .. 
rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/v2024.1.1-beta-1/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/sim/VisionSim.java :language: java - :lines: 116-117 + :lines: 52-62 :linenos: - :lineno-start: 113 + :lineno-start: 52 + +`...` and create a ``VisionTargetSim`` with where the target is on the field, which will be put in the ``VisionSystemSim``: + +.. tab-set-code:: + .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/v2024.1.1-beta-1/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/sim/VisionSim.java + :language: java + :lines: 82-86 + :linenos: + :lineno-start: 82 -If you have additional targets you want to detect, you can add them in the same way as the first one. +Now, we can create our camera simulation to view the simulated field. The camera simulation is defined by the given properties: +.. tab-set-code:: -Updating the Simulated Vision System -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/v2024.1.1-beta-1/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/sim/VisionSim.java + :language: java + :lines: 64-75 + :linenos: + :lineno-start: 64 -Once we have all the properties of our simulated vision system defined, the work to do at runtime becomes very minimal. Simply pass in the robot's pose periodically to the simulated vision system. +`...` and added to the ``VisionSystemSim``. The ``Transform3d`` used describes where the camera is on the robot. .. tab-set-code:: - .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/ebef19af3d926cf87292177c9a16d01b71219306/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/sim/DrivetrainSim.java + .. 
rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/v2024.1.1-beta-1/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/sim/VisionSim.java :language: java - :lines: 124-142 + :lines: 88-104 :linenos: - :lineno-start: 122 + :lineno-start: 88 + +Viewing the Simulation World +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Once we have all the properties of our simulated drivetrain and vision system defined, the work to do at runtime becomes very minimal. As mentioned at the start, we simply pass in the simulated robot's pose periodically to the simulated vision system in the ``Robot`` class: + +.. tab-set-code:: + .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/v2024.1.1-beta-1/photonlib-java-examples/simaimandrange/src/main/java/frc/robot/Robot.java + :language: java + :lines: 108-124 + :linenos: + :lineno-start: 108 The rest is done behind the scenes. + +Simulating the project will open the simgui tool, where a Field2d shows a top-down view of the robot, camera, and vision target poses. The camera stream is also simulated and made available similar to an actual coprocessor running PhotonVision. This can be seen in Shuffleboard or a browser (for our single simulated camera, the input stream should be at ``localhost:1181`` and output stream at ``localhost:1182``). Both of these are showcased in the video at the top of this page. diff --git a/source/docs/examples/simposeest.rst b/source/docs/examples/simposeest.rst deleted file mode 100644 index 9c4492d1..00000000 --- a/source/docs/examples/simposeest.rst +++ /dev/null @@ -1,132 +0,0 @@ -Using WPILib Pose Estimation, Simulation, and PhotonVision Together -=================================================================== - -The following example comes from the PhotonLib example repository (`Java `_). Full code is available at that links. 
- -Knowledge and Equipment Needed ------------------------------------------------ - -- Everything required in :ref:`Combining Aiming and Getting in Range `, plus some familiarity with WPILib pose estimation functionality. - -Background ----------- - -This example builds upon WPILib's `Differential Drive Pose Estimator `_. It adds a :code:`PhotonCamera` to gather estimates of the robot's position on the field. This in turn can be used for aligning with vision targets, and increasing accuracy of autonomous routines. - -To support simulation, a :code:`SimVisionSystem` is used to drive data into the :code:`PhotonCamera`. The far high goal target from 2020 is modeled. - -Walkthrough ------------ - -WPILib's :code:`Pose2d` class is used to represent robot positions on the field. - -Three different :code:`Pose2d` positions are relevant for this example: - -1) Desired Pose: The location some autonomous routine wants the robot to be in. -2) Estimated Pose: The location the software `believes` the robot to be in, based on physics models and sensor feedback. -3) Actual Pose: The locations the robot is actually at. The physics simulation generates this in simulation, but it cannot be directly measured on the real robot. - -Estimating Pose -^^^^^^^^^^^^^^^ - -The :code:`DrivetrainPoseEstimator` class is responsible for generating an estimated robot pose using sensor readings (including PhotonVision). - -Please reference the `WPILib documentation `_ on using the :code:`DifferentialDrivePoseEstimator` class. - -For both simulation and on-robot code, we create objects to represent the physical location and size of the vision targets we are calibrated to detect. This example models the down-field high goal vision target from the 2020 and 2021 games. - -.. tab-set:: - - .. tab-item:: Java - :sync: java - - .. 
rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/80e16ece87c735e30755dea271a56a2ce217b588/photonlib-java-examples/simposeest/src/main/java/frc/robot/Constants.java - :language: java - :lines: 83-106 - :linenos: - :lineno-start: 83 - - -To incorporate Photon Vision, we need to create a :code:`PhotonCamera`: - -.. tab-set:: - - .. tab-item:: Java - :sync: java - - .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/80e16ece87c735e30755dea271a56a2ce217b588/photonlib-java-examples/simposeest/src/main/java/frc/robot/DrivetrainPoseEstimator.java - :language: java - :lines: 46 - :linenos: - :lineno-start: 46 - -During periodic execution, we read back camera results. If we see a target in the image, we pass the camera-measured pose of the robot to the :code:`DifferentialDrivePoseEstimator`. - -.. tab-set:: - - .. tab-item:: Java - :sync: java - - .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/80e16ece87c735e30755dea271a56a2ce217b588/photonlib-java-examples/simposeest/src/main/java/frc/robot/DrivetrainPoseEstimator.java - :language: java - :lines: 81-92 - :linenos: - :lineno-start: 81 - - -That's it! - -Simulating the Camera -^^^^^^^^^^^^^^^^^^^^^ - -First, we create a new :code:`SimVisionSystem` to represent our camera and coprocessor running PhotonVision. - -.. tab-set:: - - .. tab-item:: Java - :sync: java - - .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/80e16ece87c735e30755dea271a56a2ce217b588/photonlib-java-examples/simposeest/src/main/java/frc/robot/DrivetrainSim.java - :language: java - :lines: 76-95 - :linenos: - :lineno-start: 76 - - -Then, we add our target to the simulated vision system. - -.. tab-set:: - - .. tab-item:: Java - :sync: java - - .. 
rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/80e16ece87c735e30755dea271a56a2ce217b588/photonlib-java-examples/simposeest/src/main/java/frc/robot/DrivetrainSim.java - :lines: 97-99 - :linenos: - :lineno-start: 97 - - -If you have additional targets you want to detect, you can add them in the same way as the first one. - - -Updating the Simulated Vision System -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Once we have all the properties of our simulated vision system defined, the remaining work is minimal. Periodically, pass in the robot's pose to the simulated vision system. - -.. tab-set:: - - .. tab-item:: Java - :sync: java - - .. rli:: https://raw.githubusercontent.com/PhotonVision/photonvision/80e16ece87c735e30755dea271a56a2ce217b588/photonlib-java-examples/simposeest/src/main/java/frc/robot/DrivetrainSim.java - :language: java - :lines: 138-139 - :linenos: - :lineno-start: 138 - - -The rest is done behind the scenes. - - - diff --git a/source/docs/examples/swervedriveposeestsim.rst b/source/docs/examples/swervedriveposeestsim.rst new file mode 100644 index 00000000..2df519a7 --- /dev/null +++ b/source/docs/examples/swervedriveposeestsim.rst @@ -0,0 +1,55 @@ +Simulating Swerve Drive Pose Estimation +======================================= + +The following example comes from the PhotonLib example repository (`Java `_). Full code is available at that link. + +.. raw:: html + + + +.. attention:: A C++ example does not currently exist. For a simple pose estimation example in C++ (without sim), see `apriltagExample `_. + +Background +---------- + +Starting in 2023, :ref:`docs/getting-started/april-tags:apriltags` were added to the FRC field to aid in vision localization. AprilTags can greatly improve the accuracy of pose estimation for teams, which expands autonomous capabilities. This example aims to demonstrate how pose estimation might be done on a swerve drivetrain using PhotonVision for AprilTag detection. 
For more information on pose estimation, see `Pose Estimators `_. + +The previous non-simulation examples show how to use PhotonVision on a real robot, with the robot code making use of PhotonVision data published by a coprocessor to move a physical drivetrain. In addition to showcasing pose estimation on a swerve drivetrain, this example shows how all of this can be simulated on your development computer using PhotonLib to get an idea of real-world performance in various scenarios. See :ref:`docs/programming/photonlib/simulation:simulation support in photonlib` for more info on PhotonVision simulation. + +Walkthrough ----------- + +Project Structure ^^^^^^^^^^^^^^^^^ + +To keep the example minimal, this project is a simple ``TimedRobot`` that does not use the command-based framework. + +The ``SwerveDrive`` Class ~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ``SwerveDrive`` class contains all the high-level controls and measurements for the swerve drivetrain. It's also where we will track the estimated robot pose with WPILib's ``SwerveDrivePoseEstimator`` and accept vision measurements. + +Low-level control is accomplished through the ``SwerveModule`` class, which represents the hypothetical swerve drive's hardware with ``PWMSparkMax`` for motor controllers and ``Encoder`` for encoders. These WPILib classes are used for simplicity instead of any specific vendor's library. + +The ``Vision`` Class ~~~~~~~~~~~~~~~~~~~~ + +The ``Vision`` class manages vision data from our coprocessor running PhotonVision. The main functionality of this class is provided by the ``PhotonPoseEstimator``, which provides pose estimates of the robot based on AprilTags seen by the camera. 
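The vision-odometry fusion described above can be illustrated with a small self-contained sketch. This is plain Java with no WPILib dependency, and the linear blend below is a deliberate simplification: the real ``SwerveDrivePoseEstimator`` applies a latency-compensated, Kalman-style update weighted by per-axis standard deviations, not a fixed blend factor.

```java
// Simplified illustration of fusing wheel odometry with a vision pose estimate.
// NOT the real WPILib SwerveDrivePoseEstimator: the class names, the fixed
// blend factor, and the sample numbers here are all illustrative.
public class PoseFusionSketch {
    // Blend each pose component; visionTrust of 0.0 ignores vision entirely,
    // 1.0 snaps fully to the vision measurement.
    static double[] fuse(double[] odomPose, double[] visionPose, double visionTrust) {
        double[] fused = new double[3];
        for (int i = 0; i < 3; i++) {
            fused[i] = (1.0 - visionTrust) * odomPose[i] + visionTrust * visionPose[i];
        }
        return fused;
    }

    public static void main(String[] args) {
        double[] odom = {2.0, 3.0, 0.10};   // x (m), y (m), heading (rad) from odometry
        double[] vision = {2.2, 3.1, 0.05}; // pose reported by the AprilTag pipeline
        double[] fused = fuse(odom, vision, 0.2); // trust vision 20% per update
        System.out.printf("x=%.3f y=%.3f theta=%.3f%n", fused[0], fused[1], fused[2]);
        // prints x=2.040 y=3.020 theta=0.090
    }
}
```

The design point this mirrors: odometry drifts slowly but is smooth, while vision is absolute but noisy, so each update nudges the estimate toward the vision pose rather than replacing it.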
+ +The ``Robot`` Class +~~~~~~~~~~~~~~~~~~~ + +Simulation +^^^^^^^^^^ + +Simulating the Swerve Drive +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Simulating the Vision System +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Viewing the Simulation World +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/docs/simulation/simulation.rst b/source/docs/simulation/simulation.rst index 8b578085..e39307e1 100644 --- a/source/docs/simulation/simulation.rst +++ b/source/docs/simulation/simulation.rst @@ -96,7 +96,7 @@ For convenience, an ``AprilTagFieldLayout`` can also be added to automatically c .. code-block:: java // The layout of AprilTags which we want to add to the vision system - AprilTagFieldLayout tagLayout = AprilTagFieldLayout.loadFromResource(AprilTagFields.k2024Crescendo.m_resourceFile); + AprilTagFieldLayout tagLayout = AprilTagFields.kDefaultField.loadAprilTagLayoutField(); visionSim.addAprilTags(tagLayout);
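Since the layout object loaded in the hunk above is, conceptually, a lookup from tag ID to field pose, here is a minimal self-contained sketch of that idea. This is plain Java with made-up tag data — not the real WPILib ``AprilTagFieldLayout``, which loads official field poses from a JSON resource bundled with WPILib.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Toy stand-in for AprilTagFieldLayout: maps tag IDs to 2D field positions
// (x, y in meters). Tag IDs and coordinates below are invented for illustration.
public class TagLayoutSketch {
    private final Map<Integer, double[]> tags = new HashMap<>();

    public void addTag(int id, double x, double y) {
        tags.put(id, new double[] {x, y});
    }

    // Mirrors the shape of AprilTagFieldLayout.getTagPose(id): unknown tags
    // come back empty rather than throwing.
    public Optional<double[]> getTagPose(int id) {
        return Optional.ofNullable(tags.get(id));
    }

    public static void main(String[] args) {
        TagLayoutSketch layout = new TagLayoutSketch();
        layout.addTag(1, 15.08, 0.25); // hypothetical tag location
        System.out.println(layout.getTagPose(1).isPresent());  // tag 1 exists
        System.out.println(layout.getTagPose(99).isPresent()); // unknown tag
    }
}
```

This is why the pose estimator needs the layout at construction time: seeing a tag only constrains the robot's pose once the tag's own field pose can be looked up.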