
Eigenmode Simulation using Gmsh Mesh File is not Converging #285

Open
DavidSomm opened this issue Oct 12, 2024 · 4 comments

Labels
bug Something isn't working

Comments

@DavidSomm

@hughcars

First off, just wanted to say thanks for the great work you guys are doing with PALACE!

I am running an eigenmode simulation of a coplanar waveguide (cpw) resonator capacitively coupled to a feed line. I've posted an image of the design generated in gmsh below.

I have used gmsh to create the mesh but am unable to accurately find the resonant frequency of the cpw resonator by increasing the solver order or the mesh refinement level. Currently, I can run the simulation with a solver order of 1 and zero mesh refinement, and Palace returns a resonant frequency (7.26 GHz), but it is quite a bit off from what it should be (~8.06 GHz). However, when I try to improve the accuracy of the simulation by increasing the solver order or the level of mesh refinement, the solver no longer converges to the set tolerance (1e-8).
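For reference, the two settings I'm changing live in the Palace config JSON. A minimal sketch of the relevant fields (file name and numeric values are illustrative, not my exact config; field names follow the Palace configuration schema):

```json
{
  "Problem": {
    "Type": "Eigenmode"
  },
  "Model": {
    "Mesh": "mesh.msh",
    "Refinement": {
      "UniformLevels": 0
    }
  },
  "Solver": {
    "Order": 1,
    "Eigenmode": {
      "N": 2,
      "Tol": 1.0e-8,
      "Target": 7.0
    }
  }
}
```

Bumping "Order" to 2 or "UniformLevels" to 1 is where the convergence failure appears.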

I have used COMSOL to create the exact same geometry and then created a .mphbin mesh file to use with PALACE and did not have any issues getting accurate eigenmode simulations with solver order 2 (using the same config file as for the gmsh mesh).

I believe there may be an issue with the gmsh mesh file, but I cannot figure out why the simulation won't converge for solver orders greater than 1. I did notice in the simulation output that when I increase the solver order, the initial residual norm is quite large (around 9e+13, compared to 1e+0 for a solver order of 1). Consequently, the residual norm never approaches the tolerance level (1e-8).

For your reference, I have attached the gmsh file in a txt file (could not upload the .msh file), the config json file and the output file for the eigenmode simulation. I am running PALACE v0.11.1-27-g1631367.

Any help/guidance you could provide would be really appreciated.

Mesh File
GMSH_eigen_test_NEW.txt

Config File
GMSH_eigen_test_NEW.json

Output File
solver_order_2.txt

@DavidSomm DavidSomm added the bug Something isn't working label Oct 12, 2024
@hughcars (Collaborator)

Hi @DavidSomm,

I just attempted to recreate this issue on the current tip of main and am not seeing any problems when running with "Order": 2 on 64 processors; the returned modes are:

m     Re{ω}/2π (GHz)     Im{ω}/2π (GHz)        Bkwd. Error         Abs. Error
=============================================================================
1      +7.952405e+00      +1.865476e-03      +4.741139e-11      +2.526620e-05
 Wrote mode 1 to disk
2      +9.984936e+00      +5.644482e-01      +2.356790e-10      +1.255981e-04
 Wrote mode 2 to disk

and the linear solver starts from a reasonable value:

Assembling multigrid hierarchy:
 Level 0 (p = 1): 422418 unknowns, 7054226 NNZ
 Level 1 (p = 2): 2285722 unknowns
 Level 0 (auxiliary) (p = 1): 60204 unknowns, 982200 NNZ
 Level 1 (auxiliary) (p = 2): 482622 unknowns

#PETSc Option Table entries:
-eps_monitor # (source: code)
#End of PETSc Option Table entries

  Residual norms for GMRES solve
  0 (restart 0) KSP residual norm 1.605541e+00
  1 (restart 0) KSP residual norm 1.087343e+00
  2 (restart 0) KSP residual norm 8.154069e-01

The version of Palace you're using (0.11) is pretty old at this stage, and there have been many bug fixes since then, both within Palace and its dependencies; in particular, MFEM is a fast-moving target for fixes. It is plausible that you are running into a bug that has already been fixed. I would suggest updating to a newer Palace build and then seeing if your issue remains.

@DavidSomm (Author)

@hughcars

Your suggestion worked. We were able to build Palace v0.12 on one of our PCs and were able to produce accurate results by increasing the solver order. Thanks a lot!

However, I am trying to build Palace v0.12 on my university's HPC and am running into issues (I was previously able to build an older version of Palace there). When running an interactive session on the HPC and following the build-from-source instructions, I got the following error, which appears to have resulted from running the SLEPc test:

(screenshot: slepc_petsc_fail2)

Since there seemed to be an issue with mpi/pmix_v4, I decided to run the interactive session with four threads and run make with those threads (make -j 4). The error didn't appear again; however, the build did not complete and was left hanging until my session timed out (1 hour). See the screenshot below.

(screenshot: build_hanging)

I'm running the build on an AMD EPYC3 node; the HPC runs Rocky Linux 8 with Slurm as the scheduler.
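For context, the build steps I'm following look roughly like this (module names, allocation parameters, and the clone path are specific to my cluster and are illustrative only):

```shell
# Request an interactive allocation via Slurm (times/cores are examples).
salloc --nodes=1 --ntasks=4 --time=02:00:00

# Load a compiler/MPI toolchain -- module names vary per cluster.
module load gcc openmpi cmake

# Standard CMake build-from-source flow from the Palace docs.
git clone https://github.com/awslabs/palace.git
cd palace && mkdir build && cd build
cmake ..
make -j 4   # limit parallel jobs to the allocated tasks
```

It is during the make step (after the SLEPc/PETSc dependency builds) that the process hangs.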

Any help to resolve this issue would be great.

@DavidSomm (Author)

@hughcars I managed to figure out the issue with the build. I could post what I did to solve it, but I think the problem may be specific to my university's HPC and may not be helpful for others. Happy to write it up in any case if you think it's a good idea.

@hughcars (Collaborator)

hughcars commented Nov 6, 2024

Glad to hear it! Feel free to post; you never know when someone might be googling for something that matches up with it.
