.. _developers-checksum:

Checksums on Tests
==================

When running an automated test, we often compare the data of the final time step of the test with expected values to catch accidental changes.
Instead of relying on reference files that we would have to store in their full size, we calculate an aggregate checksum.

For this purpose, the checksum Python module computes one aggregated number per field (e.g., the sum of the absolute values of the array elements) and compares it to a reference value (benchmark).
This should be sensitive enough to make the test fail if your PR causes a significant difference, print meaningful error messages, and give you a chance to fix a bug or reset the benchmark if needed.
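As a sketch of the idea (the field name and the exact reduction are illustrative; the actual module may differ in detail), aggregating a field into a single checksum and comparing it against a benchmark looks like this:

```python
import numpy as np

def field_checksum(field):
    """Aggregate a full field array into one number: the sum of absolute values."""
    return float(np.sum(np.abs(field)))

# Two slightly different "simulation outputs" for the same field (made-up data)
Ex_reference = np.array([1.0, -2.0, 3.0])
Ex_current = np.array([1.0, -2.0, 3.0 + 1e-3])

benchmark = field_checksum(Ex_reference)  # 6.0
checksum = field_checksum(Ex_current)     # 6.001

# The test fails when the checksum deviates from the benchmark beyond tolerance
print(np.isclose(checksum, benchmark, rtol=1e-9, atol=0.0))  # False for this change
```

Because the whole array is folded into one number, a tiny local change still shifts the aggregate, which is what makes the test sensitive to accidental modifications.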

The checksum module is located in ``Regression/Checksum/``, and the benchmarks are stored as human-readable `JSON <https://www.json.org/json-en.html>`__ files in ``Regression/Checksum/benchmarks_json/``, with one file per benchmark (for example, the test ``test_2d_langmuir_multi`` has a corresponding benchmark ``Regression/Checksum/benchmarks_json/test_2d_langmuir_multi.json``).

For more details, please refer to the Python implementation in ``Regression/Checksum/``.
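The layout below is a hypothetical illustration of such a benchmark file, not the exact WarpX schema: a nested mapping with one aggregated value per field, serialized as human-readable JSON.

```python
import json

# Hypothetical benchmark content for one test (the real files are generated
# by the checksum module; the field names and nesting here are illustrative)
benchmark = {
    "lev=0": {
        "Ex": 6.0,
        "Ey": 0.0,
    }
}

# Benchmarks are stored as human-readable, sorted JSON, one file per test
text = json.dumps(benchmark, indent=2, sort_keys=True)
print(text)

# Reading the file back yields the reference values to compare checksums against
reference = json.loads(text)
print(reference["lev=0"]["Ex"])  # 6.0
```

Keeping the files human-readable and one-per-test makes benchmark resets easy to review in a pull request diff.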

From a user point of view, you should only need to use ``checksumAPI.py``, which contains Python functions that can be imported and used from an analysis Python script, and which can also be executed directly as a Python script.

How to compare checksums in your analysis script
------------------------------------------------

This relies on the function ``evaluate_checksum``:

.. autofunction:: checksumAPI.evaluate_checksum

Here's an example:

.. literalinclude:: ../../../Examples/Tests/embedded_circle/analysis.py
   :language: python

This can also be included as part of an existing analysis script.

How to evaluate checksums from the command line
-----------------------------------------------

To do so, execute ``checksumAPI.py`` directly as a Python script, passing the plotfile that you want to evaluate as well as the test name (so that the script knows which benchmark to compare it to).

See additional options:

* ``--rtol`` relative tolerance for the comparison
* ``--atol`` absolute tolerance for the comparison (``numpy.isclose()`` combines the two as ``atol + rtol * abs(benchmark)``)
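The way the two tolerances combine in ``numpy.isclose()`` can be checked directly: a comparison passes when the absolute difference is within ``atol + rtol * abs(benchmark)`` (the values below are made up for illustration):

```python
import numpy as np

benchmark = 6.0
checksum = 6.001  # differs from the benchmark by 1e-3

# numpy.isclose() combines the two tolerances as: atol + rtol * |benchmark|
rtol, atol = 1e-3, 1e-4
print(atol + rtol * abs(benchmark))                           # combined tolerance, approx. 0.0061
print(np.isclose(checksum, benchmark, rtol=rtol, atol=atol))  # True (1e-3 <= 0.0061)

# Tightening rtol makes the same difference fail the comparison
print(np.isclose(checksum, benchmark, rtol=1e-6, atol=atol))  # False (1e-3 > 1.06e-4)
```

This is why loosening either ``--rtol`` or ``--atol`` can make a failing comparison pass: the two contributions are summed, not taken separately.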

How to create or reset checksums with local benchmark values
------------------------------------------------------------

This, again, uses ``checksumAPI.py`` as a Python script.

Since this will automatically change the JSON file stored on the repo, make a separate commit:

.. code-block:: bash

   git add <test name>.json
   git commit -m "reset benchmark for <test name> because ..." --author="Tools <[email protected]>"

How to reset checksums for a list of tests with local benchmark values
----------------------------------------------------------------------

If you set the environment variable ``CHECKSUM_RESET=ON`` (e.g., with ``export CHECKSUM_RESET=ON``) before running tests that are compared against existing benchmarks, the test analysis will reset the benchmarks to the new values, skipping the comparison.
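A sketch of how an analysis script can honor such a switch (the variable name comes from this page; the parsing logic and accepted values are illustrative assumptions, not the exact WarpX implementation):

```python
import os

def reset_requested():
    """Return True when the CHECKSUM_RESET environment variable asks for a reset.

    Accepted "on" spellings here are an assumption for illustration.
    """
    return os.environ.get("CHECKSUM_RESET", "OFF").upper() in ("ON", "1", "TRUE")

os.environ["CHECKSUM_RESET"] = "ON"
print(reset_requested())  # True: overwrite the benchmarks, skip the comparison

del os.environ["CHECKSUM_RESET"]
print(reset_requested())  # False: compare checksums against the stored benchmarks
```

Driving the behavior through an environment variable means a whole batch of tests can be switched into reset mode without editing each analysis script.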

How to reset checksums for a list of tests with benchmark values from the Azure pipeline output
-----------------------------------------------------------------------------------------------

Alternatively, the benchmarks can be reset using the output of the Azure continuous integration (CI) tests on GitHub. The output can be accessed by following the steps below: