
Self Gravity with Multigrid

Kengo TOMIDA edited this page Mar 2, 2024 · 22 revisions

This page explains the practical usage of the Multigrid self-gravity module. For details of the Multigrid algorithm, please see Multigrid. For the other gravity solver, based on FFT, see Self-Gravity with FFT.

Configuration and compilation

To enable the Multigrid self-gravity solver, simply configure the code with the --grav mg option. In the pgen directory, two sample codes are provided: poisson.cpp is a simple benchmark test using sinusoidal waves, and jeans.cpp is a linear wave with self-gravity. These can be run from the regression test script. In addition, collapse.cpp is a more practical example of protostellar collapse. Corresponding input files are provided in inputs/hydro or inputs/mhd.

To configure and compile the code with poisson.cpp, use the following command:

> python configure.py --grav mg --prob poisson
> make clean ; make

Optionally, you can enable MPI parallelization with the -mpi option. OpenMP parallelization is also implemented, but it is not well tested at this moment (and I would appreciate it if someone could help me test it).

Setting and input parameters

For simulations with self-gravity, the gravitational constant G or 4πG must be set in the problem generator file (e.g. in Mesh::InitUserMeshData) using the Mesh::SetGravityConstant or Mesh::SetFourPiG function. The value must be correctly scaled with the units of the other physical quantities. This design enables a user to change the constant later during the simulation, for example, to set up the initial condition without gravity and then turn it on later. In poisson.cpp, this value is read from <problem> four_pi_G in the input file.
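As a minimal sketch, the constant can be set in the problem generator like this (the parameter name four_pi_G follows poisson.cpp; adjust it to your own input file):

```c++
// Hypothetical sketch: read 4*pi*G from the input file and pass it to the Mesh.
// The <problem> parameter name "four_pi_G" follows poisson.cpp.
void Mesh::InitUserMeshData(ParameterInput *pin) {
  Real four_pi_G = pin->GetReal("problem", "four_pi_G");
  SetFourPiG(four_pi_G);  // alternatively, SetGravityConstant(G)
}
```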

The parameters for the gravity solver are set in the <gravity> block.

<gravity>
mgmode          = FMG       # set the solver mode: FMG or MGI
threshold       = 0.0       # set the convergence threshold
niteration      = 10        # set the number of iterations
fas             = false     # Full Approximation Scheme
omega           = 1.15      # Acceleration parameter in GS smoother
npresmooth      = 1         # Number of pre-smoothing (0,1,2)
npostsmooth     = 1         # Number of post-smoothing (0,1,2)
output_defect   = true      # output the defect
show_defect     = false     # show the defect at every timestep
fill_ghost      = true      # fill all ghost cells
ix1_bc          = periodic  # Inner X1 boundary condition
ox1_bc          = periodic  # Outer X1 boundary condition
ix2_bc          = periodic  # Inner X2 boundary condition
ox2_bc          = periodic  # Outer X2 boundary condition
ix3_bc          = periodic  # Inner X3 boundary condition
ox3_bc          = periodic  # Outer X3 boundary condition
mporder         = 4         # order of multipole expansion, 2 or 4
auto_mporigin   = true      # automatically set the origin of multipole expansion
mporigin_x1     = 0.0       # X1 position of the origin of multipole expansion
mporigin_x2     = 0.0       # X2 position of the origin of multipole expansion
mporigin_x3     = 0.0       # X3 position of the origin of multipole expansion
nodipole        = false     # set true to set dipole to zero

Solver parameters

The mgmode parameter specifies the solver mode (see Multigrid): FMG is the full multigrid algorithm, and MGI is standard V-cycle iteration. FMG is generally recommended for both robustness and efficiency. threshold controls the accuracy of the solution and determines when the iterations stop; this value must be chosen carefully for each problem. threshold = 0.0 makes the code iterate until the defect saturates, but this is not always necessary. Conversely, if a large threshold is set, the code stops after the first FMG sweep. This may work for some problems, but high-frequency noise arising from the red-black iteration pattern may become visible, particularly in long-term simulations. It is therefore recommended to set a reasonable value for your simulation setup. Alternatively, you can set niteration to apply a fixed number of V-cycle iterations. You may need to determine experimentally the appropriate threshold or number of iterations to achieve the required accuracy for your simulations. When both threshold and niteration are specified, only threshold is used.

By default, the code uses the V(1,1) cycle (a V-cycle with one pre-smoothing and one post-smoothing step) and omega = 1.15. You may change these parameters, but for self-gravity the defaults are usually satisfactory. npresmooth and npostsmooth can be 0, 1, or 2. For other physical processes, the optimal values vary (see Cosmic‐ray diffusion with Multigrid for an example).

As explained in Multigrid, the MeshBlock size must be a power of 2, i.e. the <meshblock> nx? parameters must be 2^n. There are no other restrictions on the Mesh structure or parallelization, and the code works with mesh refinement without any additional settings.

Output

The code outputs the gravitational potential as phi when prim, cons, or phi is specified in an <output> block. In addition, when output_defect = true is set, the defect is also dumped to the file as defect-phi. When show_defect = true, the total defect is printed at every timestep.
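For example, an <output> block like the following writes phi alongside the primitive variables (the file type and output interval here are placeholders; adjust them to your setup):

```
<output1>
file_type = hdf5   # output format
variable  = prim   # phi is included when prim, cons, or phi is specified
dt        = 0.1    # output interval
```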

The current implementation of the Multigrid solver uses one ghost cell on each side of a MeshBlock. When ghost cell data are written to output files, only the first ghost cells have physical values, and the "extra" ghost cells beyond them are filled with default values (i.e. zero). However, it is sometimes necessary to fill all the ghost cells for analysis or for additional physics. The extra ghost cells can be filled by setting fill_ghost = true. It should be noted, however, that the extra ghost cells are not fully consistent with the first ghost cells when mesh refinement is in use, because the mass-conservation formula applies only to the first ghost cells, and simple linear interpolation is used to fill the extra cells. This process incurs a small additional cost because it requires extra communication between MeshBlocks, but only on the finest level at the end of the iterations.

Boundary conditions

The ix?_bc and ox?_bc parameters specify the boundary conditions, similar to the boundary conditions for hydrodynamics. Currently, periodic, zerograd, zerofixed, multipole and user are supported. Note that, although the zerograd and zerofixed conditions are implemented, they are very naive boundary conditions and do not necessarily behave as expected.

The multipole boundary condition uses multipole expansion to realize isolated (also called "free" or "vacuum") boundary conditions, i.e. there is no other mass outside the box and the potential approaches zero at infinity. To use this, the mporder parameter must be set. Currently, mporder = 2 (up to the quadrupole) and mporder = 4 (up to the hexadecapole) are implemented. To achieve better accuracy of the expansion, it is recommended to use a large box and place the boundaries as far from the region of interest as possible. An appropriate source mask function can also be helpful (see below).

With auto_mporigin = true, which is now the default, the center of mass is computed and used as the origin of the multipole expansion. Alternatively, one can specify the origin of the multipole expansion using the mporigin_x? parameters with auto_mporigin = false. Unless the system is highly symmetric and the center of mass stays at the same position, auto_mporigin = true is recommended.

If the system is truly symmetric and there is no dipole moment, the center of mass should not move. However, the multipole calculation numerically produces a tiny but nonzero dipole moment, and in long-term, high-resolution simulations this effect can become visible. When the nodipole parameter is true (false by default), the code ignores the dipole moment in the multipole boundary conditions, which helps keep the center of mass fixed. Obviously, one should not use this feature when the system has a physically nonzero dipole moment. Also, one has to explicitly set auto_mporigin = false and the mporigin_x? parameters to use the nodipole option, because the center of mass is calculated from the dipole moment.

Users can define their own boundary conditions, but this requires some care (see also Multigrid). When none of the boundary conditions provides a fixed point, the subtract-average option must be enabled. Also, consistent boundary conditions must be defined for all multigrid levels. When the basic Multigrid scheme is in use, zero-fixed boundary conditions for the residual equation should be used on all coarser levels, but currently the code does not handle this automatically. Instead, one can use the Full Approximation Scheme (FAS) to apply the physical boundary conditions on all levels. To enable FAS, set fas = true.

Source mask function

In some simulations, particularly with the multipole boundary condition, mass near the boundaries can affect the solution. For example, when simulating a spherical cloud in a Cartesian box, gas near the corners of the box can produce spurious higher-order multipole moments. To suppress this, one can manipulate the source (density) distribution using a source mask function. To use this feature, define the mask function and enroll it in the problem generator file as follows:

// Example source mask: zero out the source (density) outside radius maskr so
// that gas far from the center does not contribute to the potential.
void srcmask(AthenaArray<Real> &src, int is, int ie, int js, int je,
             int ks, int ke, const MGCoordinates &coord) {
  constexpr Real maskr = 1.0;  // masking radius (problem dependent)
  for (int k=ks; k<=ke; ++k) {
    Real z = coord.x3v(k);
    for (int j=js; j<=je; ++j) {
      Real y = coord.x2v(j);
      for (int i=is; i<=ie; ++i) {
        Real x = coord.x1v(i);
        Real r = std::sqrt(x*x + y*y + z*z);
        if (r > maskr)
          src(k, j, i) = 0.0;
      }
    }
  }
  return;
}

void Mesh::InitUserMeshData(ParameterInput *pin) {
  EnrollUserMGGravitySourceMaskFunction(srcmask);
}

The mask is only applied for the gravity calculation, and it does not affect the hydrodynamics and other parts. Note that the MGCoordinates class is a subset of the Coordinates class containing only cell-centered (x?v) and face-centered positions (x?f).

Accessing the potential

The gravitational potential is stored in the Gravity class, which is a member of MeshBlock (accessible through the pgrav pointer). For example, it can be accessed from MeshBlock::UserWorkInLoop as follows:

void MeshBlock::UserWorkInLoop() {
  for (int k = ks; k <= ke; ++k) {
    for (int j = js; j <= je; ++j) {
      for (int i = is; i <= ie; ++i) {
        Real potc = pgrav->phi(k,j,i);
        Real potl = pgrav->phi(k,j,i-1);
        Real potr = pgrav->phi(k,j,i+1);
        ...
      }
    }
  }
  return;
}

Performance and scalability

Multigrid scaling

This figure is a bit outdated, as it was measured on the Perseus cluster at Princeton equipped with Broadwell CPUs, but it illustrates the scaling behavior of the Multigrid solver quite well. It shows the results of a weak scaling test of the Multigrid Poisson solver on a uniform grid with periodic boundaries using different MeshBlock sizes. The FMG mode is used, with only one FMG sweep and no additional V-cycle iterations (i.e. the fastest but least accurate configuration). Different lines indicate different MeshBlock sizes assigned to each process. The red, nearly horizontal line is the weak scaling of the MHD part (VL2 + PLM + HLLD, 64^3 cells per process). With 64^3 cells per MeshBlock, the Multigrid solver (cyan) is about 10 times faster than MHD, even with more than a few thousand processes. Although it takes two Multigrid sweeps per step, and a few additional V-cycle iterations would probably be needed to improve the accuracy, the Multigrid solver still costs less than the MHD solver. Note that the initial decline below 28 processes is due to the memory-bandwidth limit and frequency throttling within one node; it is a hardware effect rather than a software one.

Note that the black line in the figure is FFT. FFT does not scale as well with many processes, but there is nothing wrong with the FFT solver; this is simply because the computational cost of FFT scales as O(N log N).

Source term approach

Updates to the momenta and total energy due to self-gravity are added via source terms, not via the divergence of a gravitational stress tensor or a gravitational energy "flux", respectively. The source terms follow the prescription detailed in MHG2020, except that the energy source term does not use the time-averaged gravitational potential. With this simplification, total energy is not conserved to round-off error. Momentum conservation is determined by the accuracy of the gravitational potential.
