Surface forces #488
base: master
Conversation
Hi @robertodr @ilfreddy
I previously tested GGA calculations only for H2, where the forces were correct. In more complicated systems this is not the case (LDA is always correct). I will look into it and let you know once I have solved the problem.
The GGA bug is fixed now: I forgot to add the divergence terms in the XC potential for the GGA case. They have been added now.
n1 = nablaPhi[iOrb][0].real().evalf(pos);
n2 = nablaPhi[iOrb][1].real().evalf(pos);
n3 = nablaPhi[iOrb][2].real().evalf(pos);
@gitpeterwind These are the lines that cause the segfault
In an OrbitalVector, not all orbitals are directly available to all MPI processes. You should do something like:
if (mrcpp::mpi::my_orb(phi_i)) {
    pos[0] = gridPos(i, 0);
    pos[1] = gridPos(i, 1);
    pos[2] = gridPos(i, 2);
    n1 = nablaPhi[iOrb][0].real().evalf(pos);
    n2 = nablaPhi[iOrb][1].real().evalf(pos);
    n3 = nablaPhi[iOrb][2].real().evalf(pos);
    stress[i](0, 0) -= occ * n1 * n1;
    stress[i](1, 1) -= occ * n2 * n2;
    stress[i](2, 2) -= occ * n3 * n3;
    stress[i](0, 1) -= occ * n1 * n2;
    stress[i](0, 2) -= occ * n1 * n3;
    stress[i](1, 2) -= occ * n2 * n3;
}
} // closing braces of the surrounding loops, as in the original snippet
}
And then use mrcpp::mpi::allreduce_vector to collect all the results. It is a bit difficult with the way you have defined the stress vector. But if you cannot easily define it as a DoubleVector or DoubleMatrix, I can try to give a detailed way to do it (one way is simply to copy all the values into a DoubleVector, then do the allreduce operation, and copy back).
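For illustration, a minimal sketch of that copy/allreduce/copy-back pattern could look like this (assuming stress is the vector of 3x3 matrices filled in the loop above, DoubleVector is the usual Eigen vector typedef, and mrcpp::mpi::allreduce_vector takes the vector plus a communicator; the communicator name comm_wrk is only a guess):
// Sketch only: flatten each 3x3 stress block into one long DoubleVector,
// sum it over all MPI processes, then copy the values back.
DoubleVector flat = DoubleVector::Zero(9 * stress.size());
for (int i = 0; i < stress.size(); i++) {
    for (int a = 0; a < 3; a++) {
        for (int b = 0; b < 3; b++) flat(9 * i + 3 * a + b) = stress[i](a, b);
    }
}
mrcpp::mpi::allreduce_vector(flat, mrcpp::mpi::comm_wrk); // communicator name is an assumption
for (int i = 0; i < stress.size(); i++) {
    for (int a = 0; a < 3; a++) {
        for (int b = 0; b < 3; b++) stress[i](a, b) = flat(9 * i + 3 * a + b);
    }
}
Since each rank only adds the contributions of the orbitals it owns (and the other entries stay zero), the reduced vector holds the complete result on every rank after the operation.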
That makes sense, thanks for your quick answer. I will use arrays for the stress tensor.
@gitpeterwind I think it might be a bit more complicated: I need
nablaPhi[iOrb][0]
nablaPhi[iOrb][1]
nablaPhi[iOrb][2]
all on the same rank. How can I make sure that this is the case?
I think it is correct: the last [0], [1], [2] picks out one OrbitalVector, and iOrb only determines the rank. Or did I misunderstand?
No, nablaPhi[0][0] contains the x component of the gradient of the zeroth orbital. nablaPhi[0] is an orbital vector that contains the x, y and z components of the gradient of orbital 0.
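Just to restate the layout with the names used here (only an illustration, not code from the branch):
// nablaPhi[i]    : OrbitalVector holding the gradient of orbital i
// nablaPhi[i][0] : x component of the gradient of orbital i
// nablaPhi[i][1] : y component
// nablaPhi[i][2] : z component
mrchem::Orbital &dxPhi0 = nablaPhi[0][0]; // x derivative of orbital 0
mrchem::Orbital &dyPhi0 = nablaPhi[0][1]; // y derivative of orbital 0
mrchem::Orbital &dzPhi0 = nablaPhi[0][2]; // z derivative of orbital 0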
Adding the following print statement to the loop:
for (int i = 0; i < Phi.size(); i++) {
    nablaPhi.push_back(nabla(Phi[i]));
    nablaPhi[i].distribute();
    if (mrcpp::wrk_rank == 0) {
        std::cout << "gradient of orbital " << i << " stored on rank: x: " << nablaPhi[i][0].getRank()
                  << " y: " << nablaPhi[i][1].getRank() << " z: " << nablaPhi[i][2].getRank() << std::endl;
    }
}
results in:
gradient of orbital 0 stored on rank: x: 0 y: 1 z: 2
gradient of orbital 1 stored on rank: x: 0 y: 1 z: 2
gradient of orbital 2 stored on rank: x: 0 y: 1 z: 2
gradient of orbital 3 stored on rank: x: 0 y: 1 z: 2
but I need all components of the gradient on the same rank
The OrbitalVector class expects some properties, like that the size is the "usual" number of orbitals, and that each MPI process only accesses its own indices. If a vector has only 3 orbitals, you should not define it as an OrbitalVector (and "distributing" them is meaningless). Can you simply define std::vector<std::vector<Orbital>> nablaPhi(3); ? Or do you need some OrbitalVector-specific properties? Still, Phi[i] will only be defined on the MPI process with the right rank.
Anyway, if you use the if (mrcpp::mpi::my_orb(phi_i)) guard, do you still get the segfault?
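A rough sketch of how that could look (only a guess at how the pieces fit together: the outer vector is sized by the number of orbitals rather than by the three components, and nabla(Phi[i]) is reused as in the current branch):
// Sketch: keep the gradient components in a plain nested std::vector, so the
// OrbitalVector/MPI ownership rules do not apply to the outer container.
std::vector<std::vector<mrchem::Orbital>> nablaPhi(Phi.size());
for (int i = 0; i < Phi.size(); i++) {
    if (!mrcpp::mpi::my_orb(Phi[i])) continue;   // only the owner of orbital i computes its gradient
    mrchem::OrbitalVector grad = nabla(Phi[i]);  // x, y, z components of the gradient
    for (int d = 0; d < 3; d++) nablaPhi[i].push_back(grad[d]);
}
That way all three components of the gradient of orbital i stay on the rank that owns Phi[i].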
This is a good idea. I thought that this wouldn't work because nabla(Phi) returns an OrbitalVector. I tried your suggestion and it compiled. I will try to solve the problem like that, thanks for your help.
There was a bug in the master code. A PR has been sent
src/surface_forces/SurfaceForce.cpp (outdated)
std::vector<mrchem::OrbitalVector> nablaPhi;
mrchem::OrbitalVector hessRho = hess(rho);
for (int i = 0; i < Phi.size(); i++) {
    nablaPhi.push_back(nabla(Phi[i]));
}
Here I create the vector that contains the gradient of all orbitals
I think it is necessary (at least it won't harm) to add:
std::vector<mrchem::OrbitalVector> nablaPhi;
mrchem::OrbitalVector hessRho = hess(rho);
for (int i = 0; i < Phi.size(); i++) {
    nablaPhi.push_back(nabla(Phi[i]));
    nablaPhi[i].distribute();
}
(distribute will tell each element in the vector which rank it has)
Ok, I will add this.
I can recreate the error with this input:
I start the simulation with:
@moritzgubler would you mind also adding some tests to your code? According to the code coverage report, your patch is only marginally covered. Tests are essential to make sure the code does not get broken by others later on.
Could you add some class documentation?
@ilfreddy I added some documentation
Just a general comment from my side: could you add some extra testing:
- There is no unit-testing as far as I can see
- There is only one regression test
Some classes could also benefit from some additional documentation.
As for the details of the code, I guess it is best if Stig or Peter comment on those.
I added a test. Is there a way to look at the codecov report?
This looks good to me now. Just a couple of minor questions.
@stigrj this branch includes a merge a few commits ago. Would you let it pass or should it be changed to a rebase?
Thanks for the reference! Indeed we only have GGA at present, but we have some plans to make a libXC interface and thus also include meta-GGA functionals. But I guess at this point it might be more relevant to focus on HF exchange.
This is my implementation of the idea I had of calculating forces with surface integrals.
It works quite well and the forces are more accurate. Also, MRChem struggles to compute forces when the world precision is 1e-7 or smaller: it then requires a lot of memory (more than 32 GB for an H2 molecule). This is not the case with my approach. Geometry optimizations seem to be reliable if the stopping criterion is chosen as 10 * world_prec.
I have already put all the parameters into the parser, so it is quite simple to test and run my code (look for the section "- name: Forces" in the template.yaml input parser file).
It works with LDA and GGA functionals and for both closed- and open-shell systems.
At the moment, there is one thing that might be improved in the future: