
Add Time-Averaged Field Diagnostics #5285

Open · wants to merge 19 commits into development

Conversation

@n01r (Member) commented Sep 18, 2024

This PR adds time-averaged field diagnostics to the WarpX output:

  • code
  • docs
  • tests
  • example
  • meta-data $\Longrightarrow$ Follow-up PR
  • make compatible with adaptive time stepping $\Longrightarrow$ Follow-up PR

Currently, compiling for GPU produces a linking error. We would like to see the CI output to help track down the issue, so we are not flagging this as WIP right now.

This PR is based on work performed during the 2024 WarpX Refactoring Hackathon and was created together with @RevathiJambunathan. 🙂

Successfully merging this pull request may close #5165.

@n01r n01r added hackathon Let's address this topic during the GPU hackathon component: diagnostics all types of outputs labels Sep 18, 2024
@archermarx (Contributor)

Thanks for making this PR! One complication is that with #5176, adaptive timestepping will be possible. This means that the timestep length can change, so simply summing the multifabs and dividing by the period won't be enough for a proper time average. I don't think addressing this will be too hard, however.

@n01r
Copy link
Member Author

n01r commented Sep 18, 2024

Thanks, @archermarx! I wasn't aware of that yet. 🙂 We can do a follow-up and catch if dt_update_interval is set. In that case, we could create a cumulative sum weighted with the time step length and also add up the total time as we go. During the flush, we then divide by the total time.
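The scheme described above (a cumulative sum weighted by the time step length, divided by the accumulated time at flush) can be sketched in a few lines. This is a hypothetical NumPy illustration, not WarpX code; the class and method names are made up:

```python
import numpy as np

class TimeWeightedAverage:
    """Running dt-weighted average over one output period (illustrative sketch)."""

    def __init__(self, shape):
        self.sum = np.zeros(shape)  # cumulative dt-weighted sum of the field
        self.total_time = 0.0       # accumulated simulation time in the period

    def accumulate(self, field, dt):
        # weight each sample by its time step length
        self.sum += field * dt
        self.total_time += dt

    def flush(self):
        # divide by the total time at output, then reset for the next period
        avg = self.sum / self.total_time
        self.sum[:] = 0.0
        self.total_time = 0.0
        return avg
```

With a constant time step this reduces to the simple sum-divided-by-period average, so the fixed-dt case is a special case of the weighted one.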

@archermarx (Contributor)

> Thanks, @archermarx! I wasn't aware of that yet. 🙂 We can do a follow-up and catch if dt_update_interval is set. In that case, we could create a cumulative sum weighted with the time step length and also add up the total time as we go. During the flush, we then divide by the total time.

Yep, my thought exactly. The adaptive timestepping PR was just merged, too.

@EZoni (Member) commented Sep 19, 2024

I think #5296 should fix the clang-tidy errors that remain, which don't seem related to this PR.

@ax3l (Member) commented Sep 23, 2024

> We can do a follow-up and catch if dt_update_interval is set
> ...
> Yep, my thought exactly. The adaptive timestepping PR was just merged, too.

Thanks for the catch! Please add a runtime assert in this PR now.

@ax3l ax3l self-assigned this Sep 23, 2024
@n01r (Member, Author) commented Sep 24, 2024

Added a commit that will cause the simulation to abort when dt_update_interval is set.

The last thing for this PR that remains from our list is the CI tests.

@ax3l, should we also use the recently merged MultiFabRegister?

@n01r (Member, Author) commented Oct 2, 2024

Just empirically, I tried this in our Laser-Ion Acceleration example. At step 1000, I compared the electron density from the in-situ generated averaged output with a post-processed average that I created from 25 instantaneous outputs. The relative difference is on the order of 1e-16. 🎉
(figure: comparison of in-situ averaged and post-processed electron density)

Created with the following input script:
inputs_test_2d_laser_ion_acc.txt
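A post-processed check of this kind can be sketched as follows. This is a hypothetical illustration with stand-in arrays rather than actual openPMD reads; the function name is made up:

```python
import numpy as np

def relative_difference(avg_insitu, snapshots):
    """Compare an in-situ time average against the mean of instantaneous snapshots."""
    avg_post = np.mean(snapshots, axis=0)          # post-processed average
    scale = np.max(np.abs(avg_post))               # normalization scale
    return np.max(np.abs(avg_insitu - avg_post)) / scale
```

A result on the order of machine epsilon (~1e-16 in double precision), as reported above, indicates the in-situ average matches the post-processed one to round-off.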

@n01r (Member, Author) commented Oct 2, 2024

Comparing instantaneous output with averaged output for, e.g., Ez shows the much clearer structure one can see with this diagnostic.
(image: instantaneous vs. time-averaged Ez)

@RemiLehe RemiLehe assigned EZoni and unassigned ax3l Oct 16, 2024
@EZoni (Member) commented Oct 16, 2024

@n01r @RevathiJambunathan
After resolving the merge conflicts, do you think the feature added in this PR is complete and ready to go, pending review?

@n01r (Member, Author) commented Oct 16, 2024

@EZoni, the CI tests are still missing. I was thinking of adding these today since Perlmutter is under maintenance anyway.

@@ -20,3 +30,14 @@ add_warpx_test(
diags/diagInst/ # output
OFF # dependency
)

# We do not have an intervals parser for PICMI yet which would accept more than a single integer for the output period
@n01r (Member, Author) commented:

We should think about implementing an intervals parser for our PICMI diagnostics which can handle more than a single integer. It should be able to do the same as our app input: https://warpx.readthedocs.io/en/latest/usage/parameters.html#intervals-parser
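A minimal sketch of such a parser, assuming the app-input syntax of comma-separated start:stop:period slices where a bare integer is treated as a period; the function names and the exact step-membership rule are assumptions for illustration, not taken from the WarpX source:

```python
import math

def parse_intervals(spec):
    """Parse strings like "100,69:100" into (start, stop, period) slices."""
    slices = []
    for part in spec.split(","):
        fields = part.split(":")
        if len(fields) == 1:
            # a bare integer means "every n steps"
            start, stop, period = 0, math.inf, int(fields[0])
        else:
            start = int(fields[0]) if fields[0] else 0
            stop = int(fields[1]) if fields[1] else math.inf
            period = int(fields[2]) if len(fields) > 2 and fields[2] else 1
        slices.append((start, stop, period))
    return slices

def contains(slices, step):
    # a step is active if any slice covers it and hits its period
    return any(
        start <= step <= stop and step % period == 0
        for start, stop, period in slices
    )
```

For example, "100,69:100" would then select every 100th step plus every step from 69 through 100, matching the period=[100,69:100] use case mentioned in this PR.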

RevathiJambunathan and others added 7 commits October 18, 2024 15:10
- Implement warnings and errors for certain inputs concerning averaging periods
- added first documentation on time averaged diags
- put more operations on summation multifabs into if-environments
n01r and others added 9 commits October 18, 2024 15:17
- added time averaged diags to laser-ion acceleration example
- fix first issues that came up when testing this
Currently, we do not yet support the newly implemented adaptive time-stepping
mode of electrostatic solvers together with time-averaged diagnostics.
Even though the laser-ion test is named as a dependency, this runs the test again. That should ideally be avoided; it would be good to just run the analysis script as the test.
@n01r (Member, Author) commented Oct 18, 2024

@EZoni, feel free to review :)

For PICMI, the TimeAveragedDiag test is automatically disabled because we
cannot simply define the necessary output intervals.
We need to be able to define period=[100,69:100] like for the app input.
Comment on lines +14 to +30
def analyze_openpmd_regression(output_file):
    # Run checksum regression test
    if re.search("single_precision", output_file):
        evaluate_checksum(
            test_name=test_name,
            output_file=output_file,
            output_format="openpmd",
            rtol=2e-6,
        )
    else:
        evaluate_checksum(
            test_name=test_name,
            output_file=output_file,
            output_format="openpmd",
        )


(Member)

We want to change the way we handle single-precision tests anyways (see #5205), so if this test is not a single-precision test, I think we can remove this function and simply call evaluate_checksum below, consistently with the other analysis scripts in our test suite (since #5297):

Suggested change
def analyze_openpmd_regression(output_file):
    # Run checksum regression test
    if re.search("single_precision", output_file):
        evaluate_checksum(
            test_name=test_name,
            output_file=output_file,
            output_format="openpmd",
            rtol=2e-6,
        )
    else:
        evaluate_checksum(
            test_name=test_name,
            output_file=output_file,
            output_format="openpmd",
        )

Comment on lines +88 to +101
test_name = os.path.split(os.getcwd())[1]
inst_output_file = sys.argv[1]

# Regression checksum test
# NOTE: works only in the example directory due to relative path import
analyze_openpmd_regression(inst_output_file)

# TODO: implement intervals parser for PICMI that allows more complex output periods
if "picmi" not in test_name:
    # Functionality test for TimeAveragedDiagnostics
    compare_time_avg_with_instantaneous_diags(
        dir_inst=sys.argv[1],
        dir_avg="diags/diagTimeAvg/",
    )
@EZoni (Member) commented Oct 19, 2024

Given the comment above, I would rewrite it like this:

Suggested change
test_name = os.path.split(os.getcwd())[1]
inst_output_file = sys.argv[1]
# Regression checksum test
# NOTE: works only in the example directory due to relative path import
analyze_openpmd_regression(inst_output_file)
# TODO: implement intervals parser for PICMI that allows more complex output periods
if "picmi" not in test_name:
    # Functionality test for TimeAveragedDiagnostics
    compare_time_avg_with_instantaneous_diags(
        dir_inst=sys.argv[1],
        dir_avg="diags/diagTimeAvg/",
    )

# NOTE: works only in the example directory due to relative path import
# compare checksums
evaluate_checksum(
    test_name=os.path.split(os.getcwd())[1],
    output_file=sys.argv[1],
)
# TODO: implement intervals parser for PICMI that allows more complex output periods
test_name = os.path.split(os.getcwd())[1]
if "picmi" not in test_name:
    # Functionality test for TimeAveragedDiagnostics
    compare_time_avg_with_instantaneous_diags(
        dir_inst=sys.argv[1],
        dir_avg="diags/diagTimeAvg/",
    )

If possible, I would leave the evaluate_checksum call as in this suggestion (instead of passing, for example, test_name=test_name with a test_name variable defined before) because it matches the format of this "unit code block" across the test suite and will make it easier to implement some upcoming changes in a more automatic way.

Successfully merging this pull request may close: Diagnostics: Time-Averaged Fields (#5165).
5 participants