Energy density t-moves related crash in DMC #5245
Comments
Quick follow-up: interestingly, switching non-local moves from 'v3' (used in all the reported crashes) to 'no' makes the crash go away, and 'v0' restores it, so there is an issue when T-moves are used with the energy density estimator.
If this is not known to work in legacy and/or is not immediately needed, having the energy density work only with the locality approximation could be listed as a "known limitation", i.e. the current issue is only a bug in the sense that we claim support for a combination that does not work. @PDoakORNL @jtkrogel What do we know about the status of the energy density in legacy with the different locality schemes, if anything, and what is needed in the immediate future?
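For reference, a minimal sketch of how the non-local-move scheme can be toggled in a Nexus-generated DMC block. This is illustrative, not taken from the issue; it assumes the Nexus `dmc` input helper and the standard QMCPACK `nonlocalmoves` parameter, and all other values are placeholders.

```python
# Hedged sketch: switching the T-move scheme in a Nexus-generated DMC block.
# Assumes the Nexus dmc() input helper and QMCPACK's 'nonlocalmoves'
# parameter ('no', 'v0', 'v1', 'v3'); the other parameters are placeholders.
from nexus import dmc

dmc_block = dmc(
    warmupsteps   = 20,     # placeholder values
    blocks        = 200,
    steps         = 10,
    timestep      = 0.005,
    nonlocalmoves = 'no',   # 'v3' and 'v0' reportedly crash when the energy density estimator is enabled
    )
```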
Via a hand-built VMC-DMC input with offload on NVIDIA I don't see this crash; I'm still looking at it.
I suggest a regular CPU build. |
I can reproduce it with a CPU build. Looking at it in the debugger now.
Working on the fix now; updates to many QMCHamiltonian potentials, such as LocalECPotential and CoulombPotential, will be necessary.
Describe the bug
Attempts to run the example in #5214 are unsuccessful. In a pure CPU build (GCC14, real, MPI), I get a reliable SEGV after a few blocks of DMC when the energy density estimator is enabled. I could not get runs without the energy density estimator to crash, including runs with no estimators at all. I also tried putting many small VMC sections ahead of the DMC section, but could not get a crash in VMC, only in DMC. Crashes were obtained with 16 MPI ranks x 1 thread each, 4 MPI x 4 threads each, 1 MPI x 16 threads, and 1 MPI x 1 thread.
To Reproduce
Modify qmcpack/nexus/examples/qmcpack/rsqmc_misc/estimators/iron_ldaU_dmc.py to run the calculations by setting generate_only = 0 and running the script. This needs both QE and QMCPACK. Starting from scratch, the first crash takes O(1h). The actual DMC crash can be rigged to occur within minutes.
I can provide just the generated inputs, including the Jastrow and orbital files, if preferred.
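For illustration, a hedged sketch of the relevant Nexus change described above, assuming the standard `settings()` / `run_project()` workflow; the paths, machine, and other values are placeholders, not taken from iron_ldaU_dmc.py.

```python
# Hedged sketch of the Nexus settings() call that controls whether jobs are
# only generated or actually executed; values are placeholders, not copied
# from iron_ldaU_dmc.py.
from nexus import settings, run_project

settings(
    pseudo_dir    = './pseudopotentials',  # placeholder path
    results       = '',
    sleep         = 3,
    machine       = 'ws16',                # placeholder machine setting
    generate_only = 0,                     # 0 -> run QE and QMCPACK instead of only writing inputs
    )

# ... workflow definition as in the example script ...

run_project()  # executes the QE -> QMCPACK chain
```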
Unhelpful error:
Typical output:
Expected behavior
No crash
System:
nitrogen2, nightly "gcc new mpi" configuration with GCC 14.2.0, OpenMPI etc.