Packages
A brief intro to packages in KHARMA is provided here; if you're interested in the details, we expound on KHARMA's packages here.
Packages are Parthenon's way of introducing modularity into the code in the form of physics and features. Each package has its own namespace within which we define its local variables, global constants (via the `params` object, an instance of `parthenon::Params`), its functions, and its tasks.

Unless marked `private`, packages can access variables declared in other packages. The ability to share variables eliminates the need to save multiple copies of the same variable. Probably the best example of this is the fluid adiabatic index `gamma`, a constant declared in the `GRMHD` package, which can be pulled like,

```cpp
auto gam = pmb->packages.Get("GRMHD")->Param<Real>("gamma");
```
as is done all across the code.

Every package has an `Initialize` method declared in its namespace where its variables and constants are registered. The `ProcessPackages` function defined in the `KHARMA` namespace loads the required packages during startup. As explained in the section on the ImEx driver, the `Implicit` package is loaded by default if the driver is ImEx. Additionally, KHARMA initializes the `GRMHD`, `Globals` and `floors` packages by default. The remainder of the packages are initialized only if specified at runtime. If you've written a new package, you must add its `Initialize` method to the `packages` object here.
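As a hedged illustration (the package name `MyPackage`, the input block `my_package`, and the parameter `my_const` are made up, and the exact `Initialize` signature in KHARMA may differ), a minimal package built on Parthenon's `StateDescriptor` might look like this:

```cpp
#include <parthenon/package.hpp>

using namespace parthenon::package::prelude;

namespace MyPackage {  // hypothetical package

// Register the package's constants (and, if needed, fields) with Parthenon
std::shared_ptr<StateDescriptor> Initialize(ParameterInput *pin)
{
    auto pkg = std::make_shared<StateDescriptor>("MyPackage");

    // Read a runtime parameter and store it in the package's Params, so that
    // other packages can later pull it via
    //   pmb->packages.Get("MyPackage")->Param<Real>("my_const")
    const Real my_const = pin->GetOrAddReal("my_package", "my_const", 1.0);
    pkg->AddParam<Real>("my_const", my_const);

    return pkg;
}

} // namespace MyPackage

// In KHARMA::ProcessPackages, the new package would then be registered
// along the lines of:
//     packages.Add(MyPackage::Initialize(pin.get()));
```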
In addition to global constants that are available through the `params` object, packages can also include variables that are defined over the entire mesh (also called fields). These could be scalars, vectors, or even tensors. A variable contains two kinds of information: metadata and data. The metadata holds all the information about the variable, e.g. its name, size, datatype, and any other additional info that facilitates accessing and manipulating the data. The metadata is specified through a vector of metadata flags. For example, have a look at the following definition of a field in the `Implicit` package.

```cpp
Metadata m_real = Metadata({Metadata::Real, Metadata::Cell, Metadata::Derived, Metadata::OneCopy});
pkg->AddField("solve_norm", m_real);
```
Here we have defined an object of Parthenon's `Metadata` class called `m_real` by calling `Metadata`'s constructor. We've provided a `MetadataFlag` vector that specifies four flags: `Metadata::Real` says that the field defined by this metadata will consist of Reals, `Metadata::Cell` says that the array of Reals will reside at cell centers, `Metadata::Derived` says that the data in this array can be computed on-the-fly from a more fundamental set of variables, and finally `Metadata::OneCopy` tells Parthenon that any field defined with this metadata will be shared between stages (recall that we have a multistage driver). The Parthenon docs have a page that lists all the available metadata flags. The second line in the code excerpt above defines a field `solve_norm` in the `Implicit` package with `m_real` as its metadata.
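For instance, a vector-valued field can be declared by passing a shape alongside the flag vector. The example below is schematic (the field name and the flag choice are made up, not taken from KHARMA):

```cpp
// A cell-centered 3-vector field: the extra shape argument {3} gives each
// cell three components. The flags here are illustrative only.
std::vector<int> s_vector({3});
Metadata m_vector = Metadata({Metadata::Real, Metadata::Cell, Metadata::Derived,
                              Metadata::OneCopy, Metadata::Vector}, s_vector);
pkg->AddField("my_vector", m_vector);
```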
We now go through the packages in KHARMA, starting with the `Implicit` package.

The implicit solver is essentially a multidimensional Newton-Raphson scheme. In its current form it finds the root of the residual $\boldsymbol{R}$ of the system of balance laws. As is the case for any multidimensional Newton-Raphson scheme, the solver computes a Jacobian of the residual with respect to the primitive variables and uses it to obtain an update to the primitives.

NOTE: This is a local solve, i.e., it minimizes the residual and computes the update to the primitives on a zone-by-zone basis.

The details of the scheme are presented in Section 3 of this paper.
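Schematically, and using the parameter names introduced below rather than the paper's notation, each Newton iteration looks like

$$ J_{ij} = \frac{\partial R_{i}}{\partial P_{j}}, \qquad J\,\boldsymbol{\delta P} = -\boldsymbol{R}, \qquad \boldsymbol{P} \leftarrow \boldsymbol{P} + \lambda\,\boldsymbol{\delta P}, $$

where the Jacobian is evaluated numerically with a finite-difference step controlled by `jacobian_delta`, the linear system is solved with QR or LU decomposition (`use_qr`), and the step length $\lambda$ is chosen by the optional backtracking linesearch described below.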
The `Implicit` package defines a number of constants that are relevant to the implicit solve. These are,

- `min_nonlinear_iter` (default: `1`) - The minimum number of iterations the solver must perform.
- `max_nonlinear_iter` (default: `3`) - The maximum number of iterations the solver may perform.
- `rootfind_tol` (default: `1.e-12`) - The tolerance that must be met during the solve to consider it a success.
- `jacobian_delta` (default: `4.e-8`) - The small change considered for each primitive, $\Delta P$, to compute the Jacobian.
- `use_qr` (default: `true`) - A boolean that dictates whether QR decomposition should be used while solving the system of linear equations. If `false`, the solver uses LU decomposition. While QR has certain stability benefits, we've found that it can lead to potentially inaccurate results when performing a >= 8x8 solve on GPUs.
- `linesearch` (default: `true`) - A boolean that decides whether backtracking linesearch is performed.
The following arguments are relevant only if linesearch has been enabled (a schematic sketch of how they interact follows this list).

- `max_linesearch_iter` (default: `3`) - The maximum number of iterations the linesearch may perform.
- `linesearch_eps` (default: `1.e-4`) - A parameter that determines whether the rate of decrease of $\boldsymbol{R}$ is small enough to exit the linesearch.
- `linesearch_lambda` (default: `1.0`) - The initial value of the step length.
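A minimal sketch of how these parameters typically interact in a backtracking linesearch is shown below. This is schematic pseudocode in C++, not KHARMA's implementation: the residual-norm function `norm_at` and the halving factor are stand-ins.

```cpp
#include <functional>

// Schematic backtracking linesearch with an Armijo-style sufficient-decrease
// rule controlled by linesearch_eps. norm_at(lambda) is assumed to return
// ||R(P + lambda * dP)|| for a trial step length lambda.
double backtracking_linesearch(const std::function<double(double)> &norm_at,
                               const double norm0,
                               const int max_linesearch_iter = 3,
                               const double linesearch_eps = 1.e-4,
                               const double linesearch_lambda = 1.0)
{
    double lambda = linesearch_lambda;
    for (int iter = 0; iter < max_linesearch_iter; ++iter) {
        // Accept the step if the residual norm has decreased sufficiently.
        if (norm_at(lambda) <= (1.0 - linesearch_eps * lambda) * norm0) break;
        // Otherwise shrink the step length and try again (halving is illustrative).
        if (iter < max_linesearch_iter - 1) lambda *= 0.5;
    }
    return lambda; // last step length tried (accepted or not)
}
```

With the defaults above, at most three trial step lengths (1.0, 0.5, 0.25) would be examined before the last one is returned.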
Additionally, the `Implicit` package defines two scalar fields,

- `solve_norm` - The L2 norm of the residual.
- `solve_fail` - It is possible that the solver may not converge to a physically viable solution in zones where the loss landscape is difficult to navigate. In these scenarios, we manually backtrack by setting the step length to 0.1, recording the failure by setting `solve_fail` to `1`, and performing the linesearch. In the event the manual backtrack is not sufficient, we set `solve_fail` to `2` and exit the solver (for that particular zone). Later, we consider the average of the primitives of the neighboring zones that did converge.
The package contains a single task, `Step`, that performs the solve over the implicit variables. This task is called by the `ImexDriver` after the explicit variables are updated. Since we are only concerned with evolving the implicit variables, we need to reorder the variables in the relevant fluid states. We achieve this with the package's `get_ordered_names` function, which leverages metadata flags: it builds the ordered list by picking variables marked with the `ImplicitFlag` first and then those marked with the `ExplicitFlag`. We then pack the variables in this order in all fluid states via Parthenon's `PackVariables` function, and finally generate a `VarMap` to refer to the right variable index. VarMaps are explained in detail here.
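In outline, that packing step looks something like the sketch below. This is only schematic: the exact signatures of `get_ordered_names` and `VarMap`, and the flag name `isPrimitive`, are assumptions here, but the flow (ordered names, pack, index map, VarMap) is the one described above.

```cpp
// Schematic fragment from inside Implicit::Step (signatures approximate).
using parthenon::PackIndexMap;

// Ordered list of variable names: implicitly-evolved variables first,
// then the explicitly-evolved ones (via the package's helper).
auto ordered_prims = Implicit::get_ordered_names(md.get(), isPrimitive);

// Pack the primitives over the mesh in that order, recording where each
// variable landed in prims_map.
PackIndexMap prims_map;
auto P = md->PackVariables(ordered_prims, prims_map);

// VarMap (a KHARMA helper) converts the index map into named indices,
// so kernels can write e.g. P(b, m_p.RHO, k, j, i).
const VarMap m_p(prims_map, false);
```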
The syntax for the rest of the task is much like KHARMA's `GetFlux`. We first estimate the scratch memory required and launch a Kokkos kernel (aka parallel dispatch). Inside the outer for loop (across blocks and along X3 and X2) we allocate temporaries using Parthenon's ScratchPads and initialize them using the corresponding `MeshBlockPack`. In the inner for loop (along X1) we define Kokkos subviews - slices over the scratchpads - that are lightweight and fast (a structural sketch follows the list below). The remainder of the computational body carries out the operations needed to update the primitives,
- Compute and save implicit source terms. Currently, we only include source terms for the EMHD variables, but one can just as easily include function calls for other implicit sources.
- Compute the Jacobian.
- Solve the system of linear equations (refer to this) for the update $\boldsymbol{\delta P}$.
- Perform a linesearch to estimate the right step length.
- Update the primitives, calculate the residual and its L2 norm.
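A heavily condensed sketch of that kernel structure is given below. Bounds and sizes are passed in purely for illustration; the real `Implicit::Step` allocates many scratch arrays and performs the solve steps listed above inside the inner loop.

```cpp
#include <Kokkos_Core.hpp>
#include <parthenon/parthenon.hpp>

using parthenon::IndexRange;
using parthenon::Real;
using parthenon::ScratchPad2D;

// Outer loop over blocks, X3 and X2 (team parallelism with scratch memory);
// inner loop over X1 (per-zone work on subviews of the scratchpads).
void schematic_solve_loop(const int nblocks, const IndexRange ib, const IndexRange jb,
                          const IndexRange kb, const int nvar, const int n1)
{
    const int scratch_level = 0;
    // One nvar x n1 scratchpad per team (illustrative sizing; the real task sums several)
    const size_t scratch_bytes = ScratchPad2D<Real>::shmem_size(nvar, n1);

    parthenon::par_for_outer(DEFAULT_OUTER_LOOP_PATTERN, "implicit_solve", parthenon::DevExecSpace(),
        scratch_bytes, scratch_level, 0, nblocks - 1, kb.s, kb.e, jb.s, jb.e,
        KOKKOS_LAMBDA(parthenon::team_mbr_t member, const int b, const int k, const int j) {
            // Per-team temporary, to be filled from the corresponding MeshBlockPack
            ScratchPad2D<Real> P_s(member.team_scratch(scratch_level), nvar, n1);

            parthenon::par_for_inner(member, ib.s, ib.e, [&](const int i) {
                // Lightweight slice of the scratchpad for this zone
                auto P_zone = Kokkos::subview(P_s, Kokkos::ALL(), i);
                (void)P_zone; // ... residual, Jacobian, linear solve, linesearch, update ...
            });
        });
}
```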
Additionally, the package has two device-side functions, `calc_residual` and `calc_jacobian`, that are defined in the package's header file and, as the names imply, compute the residual and the Jacobian respectively.
DISCLAIMER: This package is in a developmental phase.
The `EMHD` package includes anisotropic heat conduction and momentum transport. The theory is detailed in this work; in the theory subsection below we present just the governing equations and variable definitions, for brevity.
The evolution equations for the extended magnetohydrodynamics model are,

$$ \nabla_{\mu}(\rho u^{\mu}) = 0, $$

$$ \nabla_{\mu}T^{\mu}{}_{\nu} = 0, $$

$$ \nabla_{\mu}({}^{*}F^{\mu\nu}) = 0, $$

$$ \nabla_{\mu}(\tilde{q}u^{\mu}) = -\frac{\tilde{q} - \tilde{q}_{0}}{\tau_R} + \frac{\tilde{q}}{2}\nabla_{\mu}u^{\mu}, $$

$$ \nabla_{\mu}(\Delta\tilde{P}u^{\mu}) = -\frac{\Delta\tilde{P} - \Delta\tilde{P}_{0}}{\tau_R} + \frac{\Delta\tilde{P}}{2}\nabla_{\mu}u^{\mu}, $$

where the first three equations are similar to those for ideal magnetohydrodynamics (see here), and the latter two dictate the evolution of the new non-ideal variables: the conductive heat flux scalar $\tilde{q}$ and the pressure anisotropy $\Delta\tilde{P}$, which are evolved in rescaled form rather than as $q$ and $\Delta P$ directly. This is done to increase the stability of the numerical implementation of these equations in regions of low density. Note that the stress-energy tensor picks up additional contributions from $q$ and $\Delta P$, so the non-ideal variables feed back on the evolution of the ideal MHD sector.
The magnetic field variables are updated explicitly prior to the implicit solve. Therefore, a full non-ideal run has 7 variables to be updated implicitly, and the solve utilizes the newly updated B-field.
We now elucidate how the various source terms are computed, i.e., explicitly or implicitly. The source terms for the internal energy and velocity field are evaluated explicitly in `Flux::AddSource` prior to the solve. Since the evolution equations for the rest-mass density and magnetic fields contain no source terms, we are done calculating all the source terms for the ideal MHD sector. For the two remaining equations, corresponding to the non-ideal variables, we selectively evaluate some terms explicitly and some implicitly. Consider the source terms for $\tilde{q}$,

$$ S = -\frac{\tilde{q} - \tilde{q}_{0}}{\tau_R} + \frac{\tilde{q}}{2}\nabla_{\mu}u^{\mu}.$$

What is worth noting is how the various pieces of $S$ behave once expanded,

$$ S \sim \textcolor{blue}{\frac{\tilde{q}}{\tau_R}} + \bigg(\textcolor{blue}{\frac{\partial_{t}\Theta}{\tau_R}} + \frac{\partial_{i}\Theta}{\tau_R}\bigg) + \bigg(\textcolor{blue}{u^{t}\partial_{t} u_{\mu}} + u^{i}\partial_{i}u_{\mu} + \Gamma_{\mu\nu\kappa}u^{\nu}u^{\kappa}\bigg) + \tilde{q}\bigg(\textcolor{blue}{\partial_{t}u^{t}} + \partial_{i}u^{i} + g^{\mu\nu}\Gamma^{\kappa}_{\mu\nu}u_{\kappa}\bigg).$$

The terms in blue are treated implicitly and the rest are computed explicitly prior to the solve. The first term on the RHS contains a factor of $\tilde{q}$, the very variable we are solving for, and is therefore evaluated inside the solver as the guess is updated (`EMHD::implicit_sources`). The remainder of the implicit terms contain a time derivative and NEED to be calculated within the solver as the fluid state is iteratively updated (`EMHD::time_derivative_sources`). The terms in black are computationally expensive and do not warrant an implicit update; they are therefore evaluated prior to the solve via `EMHD::AddSource` and passed as arguments to `Implicit::Step`.
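To summarize the split schematically (this is an illustration of the structure only, not the exact discretization used in the paper or the code), the residual handed to the solver has the form

$$ \boldsymbol{R}(\boldsymbol{P}) \sim \frac{\boldsymbol{U}(\boldsymbol{P}) - \boldsymbol{U}^{n}}{\Delta t} + \nabla_{i}\boldsymbol{F}^{i} - \boldsymbol{S}_{\rm explicit} - \boldsymbol{S}_{\rm implicit}(\boldsymbol{P}), $$

where $\boldsymbol{S}_{\rm explicit}$ (together with the flux divergence) is computed once before the solve, while $\boldsymbol{S}_{\rm implicit}$ - the blue terms above, i.e. `EMHD::implicit_sources` plus `EMHD::time_derivative_sources` - is re-evaluated at every Newton iteration as $\boldsymbol{P}$ is updated.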
Many of the constants in the `EMHD` package have to do with the closure scheme used to compute the transport coefficients. We express these as,

$$ \chi = \alpha_{1}\, c_{s}^{2}\, \tau_R, \qquad \nu = \alpha_{2}\, c_{s}^{2}\, \tau_R, $$

where the thermal diffusivity $\chi$ and the kinematic viscosity $\nu$ are set by the dimensionless parameters $\alpha_{1}$ and $\alpha_{2}$, i.e. `conduction_alpha` and `viscosity_alpha` respectively. The full set of parameters is listed below (a retrieval example follows the list).
- `conduction_alpha` (default: `1.`) - The closure parameter for the thermal diffusivity.
- `viscosity_alpha` (default: `1.`) - The closure parameter for the kinematic viscosity.
- `tau` (default: `1.`) - Relaxation timescale. Must be provided for test problems. It is computed for the torus problems based on instability conditions.
- `closure_type` (default: `torus`) - The closure scheme to be used. This is a misnomer of sorts, since ultimately we use the scheme mentioned above, but a few additional operations may need to be performed, either to compute $\tau_{R}$ or if $\kappa$ or $\eta$ are provided instead (e.g. in the conducting atmosphere and viscous Bondi problems).
- `kappa` (default: `1.`) - Thermal conductivity.
- `eta` (default: `1.`) - Dynamic viscosity.
- `higher_order_terms` (default: `false`) - Should second order terms be considered?
- `feedback` (default: `true`) - Should the EMHD sector feed back on to the ideal MHD variables? We generally want this enabled to get the physics right, but there may be cases where one might want to disable it (e.g. the viscous Bondi problem, or for debugging purposes).
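Any of these can be pulled elsewhere in the code with the same `Params` pattern shown for `gamma` earlier, for example (assuming the parameters are stored as `Real`s and `bool`s):

```cpp
// Retrieve EMHD closure parameters from the package's Params
// (same pattern as the GRMHD "gamma" example near the top of this page).
const Real conduction_alpha   = pmb->packages.Get("EMHD")->Param<Real>("conduction_alpha");
const Real viscosity_alpha    = pmb->packages.Get("EMHD")->Param<Real>("viscosity_alpha");
const bool higher_order_terms = pmb->packages.Get("EMHD")->Param<bool>("higher_order_terms");
```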
With the introduction of two new variables, the `EMHD` package has four new scalar fields,

- `prims.q` - $\tilde{q}$
- `prims.dP` - $\Delta\tilde{P}$
- `cons.q` - $\sqrt{-g}\tilde{q}u^{t}$
- `cons.dP` - $\sqrt{-g}\Delta\tilde{P}u^{t}$
The `EMHD::AddSource` task is called by the ImEx driver prior to an implicit update; it computes all the explicit source terms associated with the evolution equations of the non-ideal variables. These source terms contain temperature and velocity gradients (cf. the decomposition of source terms into implicit/explicit above), which means we need to reconstruct these variables at the zone faces to obtain second-order accurate slopes at the zone centers. To facilitate this, the EMHD package has a number of auxiliary functions in `emhd_utils.hpp`.
KHARMA does not cache scalar fields such as the transport coefficients; the `set_parameters` function in the package's header file computes the transport coefficients based on the closure scheme, while the `convert_prims_to_q_dP` function calculates $q$ and $\Delta P$ from the evolved $\tilde{q}$ and $\Delta\tilde{P}$. The header also contains a `calc_tensor` function that is called whenever source terms or fluxes need to be estimated.