Hello,
After I was able to run the coupled solver from RheoEFoam's examples, I used the sparse matrix solvers library to solve the coupled equations in Roghair's solver (https://github.com/iroghair/openFoamEHD), which uses interFoam as its base and solves the leaky dielectric model. I ported the solver to OpenFOAM 9 and verified it against the validation cases: all of them converged with the coupled equations to errors similar to the segregated solver's, and the coupled solvers were even faster than Roghair's original segregated approach.
Next, I tried to solve my full-scale case with the coupled equations. This has moderately complex geometry and a mesh of about 2.5 million cells, decomposed across 108 processors. I selected GMRES as the iterative method and Hypre's BoomerAMG as the preconditioner. I also tried a variety of other solvers, including a direct LU solve with MUMPS and BiCGStab+LU as you described in your paper: https://www.sciencedirect.com/science/article/pii/S0045793019302427. So far GMRES+BoomerAMG has worked best for me, but it still requires 600-5000 iterations to converge. I tested GMRES+BoomerAMG on the validation cases in serial and on up to 64 processors; it converges within 5 iterations in both cases.
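For reference, this is roughly what that solver selection corresponds to at the PETSc level. It is only a minimal sketch (the function name `solveCoupled` and the pre-assembled `A`, `b`, `x` are my own placeholders, and it uses the `PetscCall`/`PETSC_SUCCESS` macros from recent PETSc releases); rheoTool itself configures all of this through its dictionaries rather than in code like this:

```c
#include <petscksp.h>

/* Minimal sketch, not rheoTool's actual code: solve the coupled system
 * A x = b with GMRES preconditioned by Hypre's BoomerAMG, which is the
 * combination that has worked best for me so far.                      */
static PetscErrorCode solveCoupled(Mat A, Vec b, Vec x)
{
  KSP ksp;
  PC  pc;

  PetscFunctionBeginUser;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));

  PetscCall(KSPSetType(ksp, KSPGMRES));          /* iterative method     */
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCHYPRE));             /* Hypre preconditioner */
  PetscCall(PCHYPRESetType(pc, "boomeramg"));    /* BoomerAMG            */

  /* The direct alternative I also tried would instead be:
   *   KSPSetType(ksp, KSPPREONLY); PCSetType(pc, PCLU);
   *   PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);                     */

  PetscCall(KSPSetFromOptions(ksp)); /* allow -ksp_type/-pc_type overrides */
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```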
I sent the matrix and RHS to a PETSc developer, and he said the following about the matrix:
"The system does not seem to follow a parallel numbering, with a data decomposition in 108 processes.
Instead, it seems you are using the natural numbering.
Prior to the solve, if I renumber the system, e.g., with ParMETIS, then I get a really fast convergence."
I looked through the code in https://github.com/fppimenta/rheoTool/tree/master/of90/src/libs/sparseMatrixSolvers/coupled and didn't notice anything obviously wrong, but I am not super familiar with PETSc. Do you know anything about the numbering and why it would not be following a parallel scheme?
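In case it helps to pin down what "parallel numbering" means here, my understanding (hedged, since I'm not a PETSc expert either) is that each rank should own the contiguous block of global rows corresponding to its own cells, so the global row index of a local unknown is just its local index plus the total number of unknowns on the lower ranks. With a natural numbering, the global index is instead the cell's label in the undecomposed mesh, so the rows belonging to one rank's cells end up scattered across other ranks' ownership ranges. A small sketch of the processor-contiguous offset (the helper name and `nLocalUnknowns` are my own placeholders; `nLocalUnknowns` would be the number of cells owned by this rank times the number of coupled fields):

```c
#include <petscsys.h>

/* Sketch: with a processor-contiguous ("parallel") numbering, the global
 * row index of local unknown i on this rank is rowOffset + i, where
 * rowOffset is the total number of unknowns on all lower ranks.         */
static PetscErrorCode computeRowOffset(PetscInt nLocalUnknowns,
                                       PetscInt *rowOffset)
{
  PetscInt cumulative = 0;

  PetscFunctionBeginUser;
  /* Inclusive prefix sum over ranks, then subtract our own contribution */
  PetscCallMPI(MPI_Scan(&nLocalUnknowns, &cumulative, 1, MPIU_INT, MPI_SUM,
                        PETSC_COMM_WORLD));
  *rowOffset = cumulative - nLocalUnknowns;
  /* Global row of local unknown i is then rowOffset + i; with a natural
   * numbering it would instead be derived from the serial mesh labels.  */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```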
I set the `-info` option and I can see PETSc is aware of 108 processes, and I run the solver with `mpirun -np 108 interFoamEHD -parallel`. For the validation case on 64 processors, all the processor directories are created (e.g. processor63) and `-info` reports that PETSc started with 64 processes.
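As an extra check beyond `-info`, one thing I could do (a sketch with an assumed helper name and the same recent-PETSc macro assumptions as above; `A` would be the assembled coupled matrix) is print each rank's row ownership range and compare the local row count against the cell count in the corresponding processor directory:

```c
#include <petscmat.h>

/* Sketch: print which global rows each rank owns. With a decomposition-
 * matched numbering, the local row count should equal (cells in this
 * rank's processor directory) x (number of coupled fields).             */
static PetscErrorCode printOwnership(Mat A)
{
  PetscInt    rstart, rend;
  PetscMPIInt rank;

  PetscFunctionBeginUser;
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
  PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD,
            "[rank %d] rows %d..%d (%d local rows)\n",
            (int)rank, (int)rstart, (int)(rend - 1), (int)(rend - rstart)));
  PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

Even if those counts line up, I realize that alone doesn't prove each rank's rows refer to its own cells, which seems to be the crux of the natural-vs-parallel numbering question.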
Thank you!