WIP: ISMR #120

Open · wants to merge 72 commits into base: dev
Commits (72)
87a8041
Break the dreaded Implicit::Step kernel into 3, to reduce register/sh…
Feb 27, 2024
a7c5506
Add some options to run under ncu, default Darwin to no MPI
Feb 27, 2024
4c68d9d
Substantially globalize implicit solve, potentially faster
Feb 27, 2024
88c033e
Avoiding segfaults is hard
Feb 27, 2024
780a413
Merge branch 'dev' into fix/implicit-speed
Apr 3, 2024
f935d6a
Update dev changes for removal of emhd_params bools
bprather Apr 3, 2024
7195438
Backport ISMR as package, impl. credit Hyerin Cho
bprather Jun 30, 2024
d195dcb
Merge branch 'fix/clean-outflow' into feature/ismr
Jul 1, 2024
ba36c43
Merge branch 'fix/polar-reconnection' of github:afd-illinois/kharma i…
Jul 2, 2024
8a10b52
Merge branch 'fix/polar-reconnection' of github:afd-illinois/kharma i…
Jul 2, 2024
5e03202
Merge branch 'fix/polar-reconnection' into feature/ismr
bprather Jul 3, 2024
327442f
Merge branch 'feature/avg_T3' into feature/ismr
bprather Jul 5, 2024
bbe4c8e
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
Jul 5, 2024
8e31556
Merge branch 'dev' into feature/ismr
bprather Jul 5, 2024
3ab1fac
Apply floors when subtracting avg U3/T3 as increasing speed can cool …
bprather Jul 9, 2024
7a27441
Absorb second UtoP into ISMR calls. *May* want floors?
bprather Jul 9, 2024
6b8efee
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Jul 9, 2024
c9c5229
Merge branch 'dev' into feature/ismr
bprather Jul 9, 2024
90527dd
Fix ISMR compile & ordering bugs
bprather Jul 9, 2024
664071e
Merge branch 'feature/avg_T3' into feature/ismr
bprather Jul 9, 2024
958f87f
Oops, pull floors *outside* kernel
bprather Jul 9, 2024
0788778
Merge branch 'feature/avg_T3' of github:afd-illinois/kharma into feat…
bprather Jul 9, 2024
29b92b7
Merge remote-tracking branch 'origin/feature/avg_T3' into feature/ismr
bprather Jul 9, 2024
931ec11
Reflect ISMR first-order Courant condition in timestep calc
bprather Jul 10, 2024
31be2b0
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Jul 10, 2024
1c3ad33
Merge remote-tracking branch 'origin/feature/restart-with-input' into…
bprather Jul 17, 2024
ab49c17
Merge branch 'fix/restart-emhd' into feature/ismr
bprather Jul 17, 2024
4c74b10
Finish porting original ISMR: linear recon at poles for consistency
bprather Jul 19, 2024
b8ff376
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Jul 19, 2024
c1cf1c9
Merge remote-tracking branch 'origin/fix/cancel-imex' into feature/ismr
bprather Jul 23, 2024
5e7ce03
Merge branch 'dev' into feature/ismr
bprather Aug 2, 2024
e2397d7
Merge branch 'fix/new-restarts' into feature/ismr
bprather Aug 5, 2024
42f734b
Merge branch 'fix/cancel-imex' into feature/ismr
bprather Aug 5, 2024
9e65989
Fix a type error in ISMR
bprather Aug 5, 2024
21b116a
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Aug 5, 2024
70d9dca
First attempt at ISMR for EMHD, average primitive vars
bprather Aug 6, 2024
46c934d
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Aug 6, 2024
bba0a30
ISMR with EMHD round 2: just use ideal?
bprather Aug 6, 2024
74b2e36
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Aug 6, 2024
50ec2f9
Merge recent dev into split-implicit branch
bprather Aug 8, 2024
20e03c7
Merge branch 'fix/implicit-speed' into feature/ismr
bprather Aug 8, 2024
ebdbfbc
Try to free U3/T3 averaging from requiring dt_light
bprather Aug 8, 2024
b15d022
ISMR: clean up after disastrous experiment averaging prims
bprather Aug 8, 2024
cadde44
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Aug 8, 2024
0360e2d
U3/T3 av: call correct functions to fix build
bprather Aug 8, 2024
955ceb6
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Aug 8, 2024
4f66839
Merge branch 'dev' into feature/ismr
bprather Aug 13, 2024
60532be
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Aug 13, 2024
18ce1e6
Merge branch 'dev' into feature/ismr
bprather Aug 15, 2024
8ba3aec
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Aug 15, 2024
75fee1d
Merge branch 'feature/excise-transmit' into feature/ismr
bprather Aug 21, 2024
3ce0db1
Merge branch 'feature/excise-transmit' into feature/ismr
bprather Aug 22, 2024
5a713bd
Merge branch 'stable' into feature/ismr
bprather Aug 28, 2024
9b3d3fb
Merge branch 'dev' into feature/ismr
bprather Aug 28, 2024
924f398
Re-delete old ctop recalculation now it's done in EstimateTimestep
bprather Aug 28, 2024
b480791
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Sep 3, 2024
a628400
Merge branch 'dev' into feature/ismr
bprather Sep 3, 2024
f38c6d1
Fix a merge issue
bprather Sep 3, 2024
a8bfd68
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Sep 6, 2024
09c30de
Merge branch 'feature/split-emhd' into feature/ismr
bprather Sep 6, 2024
872ad01
Merge remote-tracking branch 'origin/fix/update-parameters' into feat…
bprather Sep 9, 2024
781bd51
Merge remote-tracking branch 'origin/fix/update-parameters' into feat…
bprather Sep 12, 2024
0efd8d3
Merge remote-tracking branch 'origin/fix/update-parameters' into feat…
bprather Sep 12, 2024
22a1be3
Merge remote-tracking branch 'origin/dev' into feature/ismr
bprather Sep 13, 2024
564f879
simplified indices for ISMR and reconstruction is only done linearly …
Oct 1, 2024
0146876
minor change to kharma_step.cpp
Oct 1, 2024
570a921
Merge branch 'stable' into feature/ismr
bprather Oct 1, 2024
bca3373
Merge pull request #125 from hyerincho/feature/ismr
bprather Oct 2, 2024
99b5b78
Merge branch 'feature/ismr' of github:afd-illinois/kharma into featur…
bprather Oct 2, 2024
a7892f0
Cleanup; only declare ISMR B field if enabled
bprather Oct 2, 2024
deb8008
bug fixed in ISMR
Oct 9, 2024
7d5c16e
Merge pull request #127 from hyerincho/feature/ismr
bprather Oct 9, 2024
2 changes: 2 additions & 0 deletions kharma/CMakeLists.txt
@@ -33,6 +33,7 @@ AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR}/floors EXE_NAME_SRC)
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR}/grmhd EXE_NAME_SRC)
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR}/implicit EXE_NAME_SRC)
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR}/inverter EXE_NAME_SRC)
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR}/ismr EXE_NAME_SRC)
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR}/reductions EXE_NAME_SRC)
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR}/emhd EXE_NAME_SRC)
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR}/wind EXE_NAME_SRC)
@@ -59,6 +60,7 @@ include_directories(${CMAKE_CURRENT_SOURCE_DIR}/floors)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/grmhd)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/implicit)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/inverter)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ismr)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/reductions)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/emhd)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/wind)
182 changes: 174 additions & 8 deletions kharma/b_ct/b_ct.cpp
@@ -1,25 +1,25 @@
/*
* File: b_ct.cpp
*
* BSD 3-Clause License
*
* Copyright (c) 2020, AFD Group at UIUC
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* 3. Neither the name of the copyright holder nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
@@ -469,6 +469,172 @@ TaskStatus B_CT::AddSource(MeshData<Real> *md, MeshData<Real> *mdudt, IndexDomain domain)
return TaskStatus::complete;
}

TaskStatus B_CT::DerefinePoles(MeshData<Real> *md)
{
// HYERIN (01/17/24) this routine is not general yet and only applies to polar boundaries for now.
auto pmesh = md->GetMeshPointer();
const uint nlevels = pmesh->packages.Get("ISMR")->Param<uint>("nlevels");

// Figure out indices
int ng = Globals::nghost;
for (auto &pmb : pmesh->block_list) {
const auto& G = pmb->coords;
auto& rc = pmb->meshblock_data.Get();
auto B_Uf = rc->PackVariables(std::vector<std::string>{"cons.fB"});
auto B_avg = rc->PackVariables(std::vector<std::string>{"ismr.fB_avg"});
for (int i = 0; i < BOUNDARY_NFACES; i++) {
BoundaryFace bface = (BoundaryFace) i;
auto bname = KBoundaries::BoundaryName(bface);
auto bdir = KBoundaries::BoundaryDirection(bface);
auto domain = KBoundaries::BoundaryDomain(bface);
auto binner = KBoundaries::BoundaryIsInner(bface);
if (bdir == X2DIR && pmb->boundary_flag[bface] == BoundaryFlag::user) {
// indices
// TODO also get ranges in cells from the beginning rather than using j_p & calculating j_c
IndexRange3 bCC = KDomain::GetRange(rc, IndexDomain::interior, CC);
IndexRange3 bF1 = KDomain::GetRange(rc, domain, F1, ng, -ng);
IndexRange3 bF3 = KDomain::GetRange(rc, domain, F3, ng, -ng);
const int j_f = (binner) ? bCC.js : bCC.je + 1; // last physical face
const int jps = (binner) ? j_f + (nlevels - 1) : j_f - (nlevels - 1); // start of the lowest level of derefinement
const IndexRange j_p = IndexRange{(binner) ? j_f : jps, (binner) ? jps : j_f}; // Range of x2 to be de-refined
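// e.g. with nlevels = 2 and 4 ghost zones, the inner pole has j_f = 4 and jps = 5, so rows j = 4 and 5 are derefined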
const int offset = (binner) ? 1 : -1; // offset to read the physical face values
const int point_out = offset; // +1 if positive F2 B at the j_f + offset face points out of the cell, -1 if it points in

// F1 average
pmb->par_for("B_CT_derefine_poles_avg_F1", bCC.ks, bCC.ke, j_p.s, j_p.e, bF1.is, bF1.ie,
KOKKOS_LAMBDA (const int &k, const int &j, const int &i) {
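// width of the merged (coarse) cell in X3: 2^nlevels zones for the row touching the pole, halving each row outward down to 2 zones at j = jps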
const int coarse_cell_len = m::pow(2, ((binner) ? jps - j : j - jps) + 1);
const int j_c = j + ((binner) ? 0 : -1); // cell center
const int k_fine = (k - ng) % coarse_cell_len; // this fine cell's k-index within the coarse cell
const int k_start = k - k_fine; // starting k-index of the coarse cell

// average over fine cells within the coarse cell we're in
Real avg = 0.;
for (int ktemp = 0; ktemp < coarse_cell_len; ++ktemp)
avg += B_Uf(F1, 0, k_start + ktemp, j_c, i) * G.Volume<F1>(k_start + ktemp, j_c, i);
avg /= coarse_cell_len;

B_avg(F1, 0, k, j_c, i) = avg;
}
);
// F2 average
pmb->par_for("B_CT_derefine_poles_avg_F2", bCC.ks, bCC.ke, j_p.s, j_p.e, bCC.is, bCC.ie,
KOKKOS_LAMBDA (const int &k, const int &j, const int &i) {
const int coarse_cell_len = m::pow(2, ((binner) ? jps - j : j - jps) + 1);
// fine cell's k index within the coarse cell
const int k_fine = (k - ng) % coarse_cell_len;
// starting k-index of the coarse cell
const int k_start = k - k_fine;

if (j == j_f) {
// The fine cells have 0 fluxes through the physical-ghost boundaries.
B_avg(F2, 0, k, j, i) = 0.;
} else { // average the fine cells
Real avg = 0.;
for (int ktemp = 0; ktemp < coarse_cell_len; ++ktemp)
avg += B_Uf(F2, 0, k_start + ktemp, j, i) * G.Volume<F2>(k_start + ktemp, j, i);
avg /= coarse_cell_len;

B_avg(F2, 0, k, j, i) = avg;
}
}
);
// F3 average
pmb->par_for("B_CT_derefine_poles_avg_F3", bF3.ks, bF3.ke, j_p.s, j_p.e, bCC.is, bCC.ie,
KOKKOS_LAMBDA (const int &k, const int &j, const int &i) {
// the current level of derefinement at given j
const int current_lv = ((binner) ? jps - j : j - jps);
// half of the coarse cell's length
const int c_half = m::pow(2, current_lv);
const int coarse_cell_len = 2 * c_half;
// cell center
const int j_c = j + ((binner) ? 0 : -1);
// this fine cell's k-index within the coarse cell
const int k_fine = (k - ng) % coarse_cell_len;
// starting k-index of the coarse cell
const int k_start = k - k_fine;
const int k_half = k_start + c_half;
// end k-index of the coarse cell
const int k_end = k_start + coarse_cell_len;

if ((k - ng) % coarse_cell_len == 0) {
// Don't modify faces of the coarse cells
B_avg(F3, 0, k, j_c, i) = B_Uf(F3, 0, k, j_c, i) * G.Volume<F3>(k, j_c, i);
} else {
// F3: the internal faces take care of keeping divB=0; the two end faces of the coarse cell remain unchanged.
// First calculate the central internal face, i.e. the highest-level internal face.
// Sum the F2 fluxes through the j + offset face over the left and right halves of the coarse cell.
Real c_left_v = 0., c_right_v = 0.;
for (int ktemp = 0; ktemp < c_half; ++ktemp) {
c_left_v += B_Uf(F2, 0, k_half - 1 - ktemp, j + offset, i) * G.Volume<F2>(k_half - 1 - ktemp, j + offset, i);
c_right_v += B_Uf(F2, 0, k_half + ktemp, j + offset, i) * G.Volume<F2>(k_half + ktemp, j + offset, i);
}
const Real B_start = B_Uf(F3, 0, k_start, j_c, i) * G.Volume<F3>(k_start, j_c, i);
const Real B_end = B_Uf(F3, 0, k_end, j_c, i) * G.Volume<F3>(k_end, j_c, i);
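// Central internal face: the mean of the two end-face fluxes, corrected by point_out times the imbalance of the F2 fluxes between the right and left halves of the merged cell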
const Real B_center = (B_start + B_end + point_out * (c_right_v - c_left_v)) / 2.;

if (k == k_half) { // if at the center, then store the calculated value.
B_avg(F3, 0, k, j_c, i) = B_center;
} else if (k < k_half) { // interpolate between B_start and B_center
B_avg(F3, 0, k, j_c, i) = ((c_half - k_fine) * B_start + k_fine * B_center) / (c_half);
} else if (k > k_half) { // interpolate between B_end and B_center
B_avg(F3, 0, k, j_c, i) = ((k_fine - c_half) * B_end + (coarse_cell_len - k_fine) * B_center) / (c_half);
}
}
}
);

// F1 write
pmb->par_for("B_CT_derefine_poles_F1", bCC.ks, bCC.ke, j_p.s, j_p.e, bF1.is, bF1.ie,
KOKKOS_LAMBDA (const int &k, const int &j, const int &i) {
int j_c = j + ((binner) ? 0 : -1); // cell center
B_Uf(F1, 0, k, j_c, i) = B_avg(F1, 0, k, j_c, i) / G.Volume<F1>(k, j_c, i);
}
);
// F2 write
pmb->par_for("B_CT_derefine_poles_F2", bCC.ks, bCC.ke, j_p.s, j_p.e, bCC.is, bCC.ie,
KOKKOS_LAMBDA (const int &k, const int &j, const int &i) {
B_Uf(F2, 0, k, j, i) = B_avg(F2, 0, k, j, i) / G.Volume<F2>(k, j, i);
}
);
// F3 write
pmb->par_for("B_CT_derefine_poles_F3", bF3.ks, bF3.ke, j_p.s, j_p.e, bCC.is, bCC.ie,
KOKKOS_LAMBDA (const int &k, const int &j, const int &i) {
int j_c = j + ((binner) ? 0 : -1); // cell center
B_Uf(F3, 0, k, j_c, i) = B_avg(F3, 0, k, j_c, i) / G.Volume<F3>(k, j_c, i);
}
);

// Recover primitive B at zone centers by averaging the updated face fields
const int ndim = rc->GetMeshPointer()->ndim;
auto B_U = rc->PackVariables(std::vector<std::string>{"cons.B"});
auto B_P = rc->PackVariables(std::vector<std::string>{"prims.B"});
pmb->par_for("UtoP_B_center", bCC.ks, bCC.ke, j_p.s, j_p.e, bCC.is, bCC.ie,
KOKKOS_LAMBDA (const int &k, const int &j, const int &i) {
int j_c = j + ((binner) ? 0 : -1); // cell center
B_P(V1, k, j_c, i) = (B_Uf(F1, 0, k, j_c, i) / G.gdet(Loci::face1, j_c, i)
+ B_Uf(F1, 0, k, j_c, i + 1) / G.gdet(Loci::face1, j_c, i + 1)) / 2;
B_P(V2, k, j_c, i) = (ndim > 1) ? (B_Uf(F2, 0, k, j_c, i) / G.gdet(Loci::face2, j_c, i)
+ B_Uf(F2, 0, k, j_c + 1, i) / G.gdet(Loci::face2, j_c + 1, i)) / 2
: B_Uf(F2, 0, k, j_c, i) / G.gdet(Loci::face2, j_c, i);
B_P(V3, k, j_c, i) = (ndim > 2) ? (B_Uf(F3, 0, k, j_c, i) / G.gdet(Loci::face3, j_c, i)
+ B_Uf(F3, 0, k + 1, j_c, i) / G.gdet(Loci::face3, j_c, i)) / 2
: B_Uf(F3, 0, k, j_c, i) / G.gdet(Loci::face3, j_c, i);
}
);
// Recover conserved B at centers
pmb->par_for("UtoP_B_centerPtoU", 0, NVEC-1, bCC.ks, bCC.ke, j_p.s, j_p.e, bCC.is, bCC.ie,
KOKKOS_LAMBDA (const int &v, const int &k, const int &j, const int &i) {
int j_c = j + ((binner) ? 0 : -1); // cell center
B_U(v, k, j_c, i) = B_P(v, k, j_c, i) * G.gdet(Loci::center, j_c, i);
}
);
}
}
}
return TaskStatus::complete;
}

double B_CT::MaxDivB(MeshData<Real> *md)
{
auto pmesh = md->GetMeshPointer();
5 changes: 5 additions & 0 deletions kharma/b_ct/b_ct.hpp
@@ -81,6 +81,11 @@ TaskStatus CalculateEMF(MeshData<Real> *md);
*/
TaskStatus AddSource(MeshData<Real> *md, MeshData<Real> *mdudt, IndexDomain domain);

/**
* Derefine face-centered B at poles without introducing divergence
*/
TaskStatus DerefinePoles(MeshData<Real> *md);

/**
* Calculate maximum corner-centered divergence of magnetic field,
* to check it is being preserved ~=0
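The declaration above is the full public surface of the new routine. As a minimal usage sketch (not part of this diff; md is assumed to be a filled MeshData<Real>* with the B_CT and ISMR packages enabled), one can bracket the call with the divergence diagnostic declared just below to confirm the averaging stays divergence-free:

// Sketch only: polar derefinement should leave the corner-centered divB at roundoff
double divb_before = B_CT::MaxDivB(md);   // baseline from the existing diagnostic
B_CT::DerefinePoles(md);                  // average face-centered B over the merged polar cells
double divb_after = B_CT::MaxDivB(md);    // should remain at ~machine epsilon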
7 changes: 7 additions & 0 deletions kharma/driver/imex_step.cpp
@@ -42,6 +42,7 @@
#include "b_flux_ct.hpp"
#include "electrons.hpp"
#include "inverter.hpp"
#include "ismr.hpp"
#include "grmhd.hpp"
#include "wind.hpp"
// Other headers
@@ -314,6 +315,12 @@ TaskCollection KHARMADriver::MakeImExTaskCollection(BlockList_t &blocks, int stage)
auto t_ptou = tl.AddTask(t_heat_electrons, Flux::MeshPtoU, md_sub_step_final.get(), IndexDomain::entire, false);

auto t_step_done = t_ptou;
if (pkgs.count("ISMR")) {
auto t_derefine_b = t_ptou;
if (pkgs.count("B_CT"))
t_derefine_b = tl.AddTask(t_ptou, B_CT::DerefinePoles, md_sub_step_final.get());
t_step_done = tl.AddTask(t_derefine_b, ISMR::DerefinePoles, md_sub_step_final.get());
}

// Estimate next time step based on ctop
if (stage == integrator->nstages) {
11 changes: 11 additions & 0 deletions kharma/driver/kharma_step.cpp
@@ -42,6 +42,7 @@
#include "electrons.hpp"
#include "grmhd.hpp"
#include "inverter.hpp"
#include "ismr.hpp"
#include "wind.hpp"
// Other headers
#include "boundaries.hpp"
@@ -273,6 +274,16 @@ TaskCollection KHARMADriver::MakeDefaultTaskCollection(BlockList_t &blocks, int stage)
auto t_ptou = tl.AddTask(t_heat_electrons, Flux::MeshPtoU, md_sub_step_final.get(), IndexDomain::entire, false);

auto t_step_done = t_ptou;
if (pkgs.count("ISMR")) {
if (pkgs.at("ISMR")->Param<uint>("nlevels") > 0) {
auto t_derefine_b = t_ptou;
if (pkgs.count("B_CT"))
t_derefine_b = tl.AddTask(t_ptou, B_CT::DerefinePoles, md_sub_step_final.get());
auto t_derefine_f = tl.AddTask(t_derefine_b, ISMR::DerefinePoles, md_sub_step_final.get());
auto t_floors_2 = tl.AddTask(t_derefine_f, Packages::MeshApplyFloors, md_sub_step_final.get(), IndexDomain::entire);
t_step_done = tl.AddTask(t_floors_2, Inverter::MeshFixUtoP, md_sub_step_final.get());
}
}

// Estimate next time step based on ctop
if (stage == integrator->nstages) {
9 changes: 8 additions & 1 deletion kharma/flux/get_flux.hpp
@@ -99,6 +99,9 @@ inline TaskStatus GetFlux(MeshData<Real> *md)
const bool use_b_cd = packages.AllPackages().count("B_CD");
const double ctop_max = (use_b_cd) ? packages.Get("B_CD")->Param<Real>("ctop_max_last") : 0.0;

const bool use_ismr = packages.AllPackages().count("ISMR");
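// zones within this many rows of an X2 boundary (ghosts + derefined rows) fall back to linear X3 reconstruction below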
const int ng_plus_nlevels = use_ismr ? packages.Get("ISMR")->Param<uint>("nlevels") + Globals::nghost : 0;

const EMHD::EMHD_parameters& emhd_params = EMHD::GetEMHDParameters(packages);

const Loci loc = loc_of(dir);
@@ -162,7 +165,11 @@
// We template on reconstruction type to avoid a big switch statement here.
// Instead, a version of GetFlux() is generated separately for each reconstruction/direction pair.
// See reconstruction.hpp for all the implementations.
KReconstruction::ReconstructRow<Recon, dir>(member, P_all(bl), k, j, b.is, b.ie, Pl_s, Pr_s);
if (use_ismr) {
KReconstruction::ReconstructRowIsmr<Recon, dir>(member, P_all(bl), k, j, b.is, b.ie, ng_plus_nlevels, Pl_s, Pr_s);
} else {
KReconstruction::ReconstructRow<Recon, dir>(member, P_all(bl), k, j, b.is, b.ie, Pl_s, Pr_s);
}

// Sync all threads in the team so that scratch memory is consistent
member.team_barrier();
20 changes: 20 additions & 0 deletions kharma/flux/reconstruction.hpp
@@ -616,6 +616,26 @@ KOKKOS_FORCEINLINE_FUNCTION void ReconstructRow(parthenon::team_mbr_t& member, c
}
}

// Reconstruct with ismr:
// Linear X3 reconstruction near X2 boundaries, but otherwise call through
// TODO higher-order with spacing of the coarse cells? Would need new ReconstructXN+no DC/VL support
template <Type recon_type, int dir>
KOKKOS_INLINE_FUNCTION void ReconstructRowIsmr(parthenon::team_mbr_t& member, const VariablePack<Real> &P,
const int& k, const int& j, const int& is_l, const int& ie_l, const int& ng_plus_nlevels,
ScratchPad2D<Real> ql, ScratchPad2D<Real> qr)
{
if constexpr (dir == X3DIR) {
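// Within nlevels physical zones of either X2 (polar) boundary, plus the ghost rows, drop to linear MC in X3 for consistency with the derefined cells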
if (j < ng_plus_nlevels || j > P.GetDim(2) - 1 - ng_plus_nlevels) {
KReconstruction::ReconstructX3l<Type::linear_mc>(member, k - 1, j, is_l, ie_l, P, ql);
KReconstruction::ReconstructX3r<Type::linear_mc>(member, k, j, is_l, ie_l, P, qr);
} else {
ReconstructRow<recon_type, dir>(member, P, k, j, is_l, ie_l, ql, qr);
}
} else {
ReconstructRow<recon_type, dir>(member, P, k, j, is_l, ie_l, ql, qr);
}
}

// Donor cell: Parthenon already implemented the row versions, so we call through.
template <>
KOKKOS_FORCEINLINE_FUNCTION void ReconstructRow<Type::donor_cell, X1DIR>(parthenon::team_mbr_t& member,