
WIP: Pass tests #38

Merged: 48 commits into kharma-next from fix/pass-tests, Oct 18, 2023
Conversation

@bprather (Contributor) commented Oct 9, 2023

This branch held all the regressions and messiness as I brought up all the tests. It reorganizes the parameter files as well.

Note it passes an EMHD modes/face CT test on AMD GPUs, so things are coming together!

Remaining TODO:

  • Pass bondi_viscous test (add/pass conducting_atmosphere?)
  • Pass or temporarily remove noh electrons test
  • Add/pass SMR MHD/EMHD modes tests by eliminating a weird artifact on outer edges of refined zones
  • Add an AMR test, pull Parthenon changes & ensure it still passes

Ben Prather and others added 30 commits September 28, 2023 14:22
Rename two very Athena-sounding callbacks to be more in line with the rest of the code; add 'PostExecute'
The idea is to have every problem capable of using either B-field transport scheme,
but this requires defining a function for A or B rather than setting it directly,
since you don't know where the field lives.
This solution is not ideal, but I think it'll work okay.
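As a rough illustration of the "define A rather than set B" idea: the problem supplies a function returning the vector potential, and the field initializer differences it wherever the chosen transport scheme stores B (faces for face-CT, cell centers for Flux-CT). Everything below is a hypothetical, self-contained sketch, not KHARMA's actual field-init interface.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>

struct Cell { double x1, x2, x3; };
// Problem-supplied callback: vector potential A_phi at a location
using VectorPotential = std::function<double(const Cell &)>;

// Example problem-side definition: a compact loop of vector potential (illustrative only)
double loop_A_phi(const Cell &X) {
    const double r = std::sqrt(X.x1 * X.x1 + X.x2 * X.x2);
    return std::max(0.0, 1.0 - r);
}

// Field-init side: B is a finite-difference curl of A, evaluated wherever the
// chosen transport scheme stores B; here a single face value for brevity.
double face_B1(const VectorPotential &A, const Cell &X, double dx2) {
    Cell Xp = X, Xm = X;
    Xp.x2 += 0.5 * dx2;
    Xm.x2 -= 0.5 * dx2;
    return (A(Xp) - A(Xm)) / dx2;  // B^1 ~ dA_phi/dx2 in flat coordinates
}

int main() {
    const Cell X{0.3, 0.4, 0.0};
    std::printf("face-centered B1: %g\n", face_B1(loop_A_phi, X, 1e-3));
}
```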
This finally organizes the different parameter files into folders.
Plenty of parfiles fit more than one category, feel free to move them.

Better parfile documentation is planned as we bring up the rest of
the tests.
* Try to make block and domain boundaries clear but flexible. Between blocks, this now supports:
  - sync cons
  - sync prims
  - sync prims (but under the hood, sync cons for AMR)
  and for domain boundaries:
  - prims are marked sync -> PtoU everything
  - cons + GRHD prims are marked sync -> UtoP except PtoU on MHD
  - cons + GRHD prims are marked sync -> UtoP everything
  (a sketch of the domain-boundary recovery modes follows this list)
* Dirichlet boundary fixes
* More options to try inadvisable things on boundaries,
    & to record exactly what was applied
* Drivers now have a type
* Put in structure for limiting MPI sync vars
* Clearly deprecate B_CD
* Rename B_Cleanup -> general StartupOnly flag
* Don't allocate in current calc
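A minimal sketch of the three domain-boundary recovery modes listed above, under the assumption that they boil down to choosing which direction of P<->U recovery runs after the ghost zones are filled. All names here are illustrative, not KHARMA's actual boundary API.

```cpp
#include <cstdio>

// Hypothetical names for the three domain-boundary options described above
enum class DomainBoundaryMode { PtoUAll, UtoPExceptMHD, UtoPAll };

void ApplyDomainBoundary(DomainBoundaryMode mode) {
    switch (mode) {
    case DomainBoundaryMode::PtoUAll:
        // prims were marked for sync: fill ghosts with prims, then P->U for every package
        std::puts("PtoU(all packages)");
        break;
    case DomainBoundaryMode::UtoPExceptMHD:
        // cons + GRHD prims synced: recover prims from cons for everything else,
        // but trust the exchanged GRHD prims and recompute the MHD cons from them
        std::puts("UtoP(non-MHD); PtoU(MHD)");
        break;
    case DomainBoundaryMode::UtoPAll:
        // cons + GRHD prims synced: recover every package's prims from cons
        std::puts("UtoP(all packages)");
        break;
    }
}

int main() { ApplyDomainBoundary(DomainBoundaryMode::UtoPExceptMHD); }
```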
This moves the rest of the problem initializations to the unified field init.
It also fixes some issues starting up problems when a magnetic field has been
added, which may have affected the starting internal energy of tori at times
in the past (will check).

It also moves (restores) a few very specific/brittle parameter files
into their specific test dirs, rather than the global 'pars' dir.
Fix run script, fix enabling flag_verbose in imex
Support running in bare KS coordinates via a new trivial transform
(the coordinate check needed to be a little looser; it turns out KS are not good coordinates)
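A sketch of what a "trivial transform" can look like, assuming it simply means the native coordinates are the embedding (KS) coordinates themselves, so both maps are the identity and the Jacobian is the Kronecker delta. Names are illustrative rather than KHARMA's CoordinateEmbedding API.

```cpp
#include <cstdio>

// Identity transform between native and embedding (KS) coordinates
struct NullTransform {
    // native -> embedding
    void coord_to_embed(const double Xn[4], double Xe[4]) const {
        for (int mu = 0; mu < 4; ++mu) Xe[mu] = Xn[mu];
    }
    // embedding -> native
    void coord_to_native(const double Xe[4], double Xn[4]) const {
        for (int mu = 0; mu < 4; ++mu) Xn[mu] = Xe[mu];
    }
    // Jacobian dXe^mu/dXn^nu is the identity
    void dxdX(const double /*Xn*/[4], double J[4][4]) const {
        for (int mu = 0; mu < 4; ++mu)
            for (int nu = 0; nu < 4; ++nu)
                J[mu][nu] = (mu == nu) ? 1.0 : 0.0;
    }
};

int main() {
    NullTransform t;
    double Xn[4] = {0.0, 10.0, 1.57, 0.0}, Xe[4];
    t.coord_to_embed(Xn, Xe);
    std::printf("native r = embedding r = %g\n", Xe[1]);
}
```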
* Restores EMHD terms to stress-energy tensor
    (no, I do not remember why they were removed in this branch...)
* Run script touch-ups for bondi_viscous but dP still not converging
Restarts now record cons by default, GRMHD prims only
Restarting is now binary-similar @5 steps
Also add back strict 'set -euo pipefail' to kill tests on nonzero
returns, and fix some more test scripts
As with the rest of KHARMA, B-field cleaning was ignoring the divergence
on corners which fell on the polar faces.  This is because BiCGStab
wasn't applying any physical boundary conditions on phi during solving.

The easy way to get correct boundary conditions was to declare
the scalar field and RHS to be explicitly defined at corners/nodes, and
let Parthenon apply the default bounds for that case (as well as
applying our own boundaries to the intermediate dB when calculating the
Laplacian).

With the new boundaries, the "physical" domain for phi is now larger,
so bicgstab_solver had to be heavily modified. Since it's no longer
going to be upstreamed, I just forked it.
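A sketch of the field declaration described above, assuming Parthenon's Metadata::Node topology flag and StateDescriptor::AddField; the exact flag set KHARMA's cleanup package uses may differ, so treat this as an illustration of the approach rather than the real registration code.

```cpp
#include <parthenon/package.hpp>
using namespace parthenon;

std::shared_ptr<StateDescriptor> InitializeCleanupSketch() {
    auto pkg = std::make_shared<StateDescriptor>("B_Cleanup_sketch");

    // Node-centered scalar potential and RHS, ghost-exchanged so that the
    // default physical boundary handling (including the polar faces) is
    // applied to corner/node values during the BiCGStab solve
    Metadata m_node({Metadata::Node, Metadata::Derived, Metadata::OneCopy,
                     Metadata::FillGhost});
    pkg->AddField("phi", m_node);
    pkg->AddField("rhs", m_node);
    return pkg;
}
```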
Not all flag names were updated "EMHD"->"EMHDVar"; I suspect a merge
somewhere regressed that.
Also fix some more scripts
All MHD variables in bondi_viscous now converge as expected, and
boundaries are applied to dP as expected. Source term seems to be
much, much too large for some reason.

Also bump Kokkos to fix a CUDA segfault (again?)
Ben Prather added 18 commits October 9, 2023 17:31
Supporting exchange of primitive vars only for the ImEx driver in non-AMR runs
had become a source of bugs, incl. the last commit.  Fix by simplifying.

ImEx driver needs to be able to sync conserved variables anyway for AMR,
so better to keep the same codepath even at the cost of the occasional
UtoP/PtoU call.
These bugs would have appeared more inscrutably when we ran w/AMR anyway
A few commits back I removed sync_prims, reasoning conserved variables
should be prolongated/restricted, and could always be recovered from
each other.  Both points were wrong: primitive vars can be prolongated
and restricted on boundaries fine, though the latter is not ideal. And,
in EMHD, it is not straightforward to recover P from U, as this is
normally done inline with the computation of the next step's state.

This commit switches to syncing primitive variables (sync_prims) anytime
they're fundamental (ImEx and simple drivers, "prims_are_fundamental").
Note the "conserved" B field is *always* what is sync'd, regardless
of the other primitive or conserved variables.

It also avoids loading the inverter package if GRMHD is implicitly
evolved, and expands some computed domains related to B to work around
the last prolongation-operator bug before AMR.
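A rough sketch, with hypothetical names, of the rule this commit settles on: sync the primitive variables whenever they are what the driver actually evolves (the "prims_are_fundamental" case, i.e. the ImEx and simple drivers), while the B field is always exchanged in its conserved form as noted above.

```cpp
#include <string>
#include <vector>

enum class DriverType { Kharma, Imex, Simple };

// Hypothetical list of variable groups to exchange over MPI boundaries
std::vector<std::string> SyncList(DriverType driver) {
    const bool prims_are_fundamental =
        (driver == DriverType::Imex || driver == DriverType::Simple);

    std::vector<std::string> sync;
    // Fluid variables: prims when the driver evolves prims, cons otherwise
    sync.push_back(prims_are_fundamental ? "prims.fluid" : "cons.fluid");
    // Per the note above, the B field is always exchanged in conserved form
    sync.push_back("cons.B");
    return sync;
}

int main() { return SyncList(DriverType::Imex).size() == 2 ? 0 : 1; }
```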
Viscous Bondi:
use outflow conditions, solve the ODE from the outer-edge value
(converges to the fixed condition, but not at 2nd order, probably due to
limited runtime)

Conducting atmosphere:
actually return the check result; converge by fixing detection of
higher-order terms.
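For reference, the convergence check amounts to measuring the observed order between two resolutions, log(err_coarse/err_fine) / log(N_fine/N_coarse), and asserting it is close to 2. A minimal sketch with made-up error values (not KHARMA output):

```cpp
#include <cmath>
#include <cstdio>

// Observed convergence order from errors at two resolutions
double observed_order(double err_coarse, double err_fine, int n_coarse, int n_fine) {
    return std::log(err_coarse / err_fine) / std::log(double(n_fine) / n_coarse);
}

int main() {
    const double order = observed_order(1.2e-3, 3.1e-4, 64, 128);
    std::printf("observed order: %.2f\n", order);
    return (order > 1.8) ? 0 : 1;  // pass only if roughly second order
}
```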
@bprather (Contributor, Author) commented:
Going to leave SMR/AMR tests for the future now that we have a baseline for compatibility with existing features

@bprather merged commit c928fe7 into kharma-next on Oct 18, 2023
2 checks passed
@bprather deleted the fix/pass-tests branch on December 8, 2023 19:22