SelfConsistentHubbardWorkChain: Missing relabeling when skipping relaxation #61

Closed
t-reents opened this issue Feb 12, 2024 · 5 comments · Fixed by #63
Comments

t-reents commented Feb 12, 2024

When the `skip_relax_iterations` input is used, the convergence check is skipped for those iterations as well. In the current logic, `self.ctx.current_hubbard_structure` is already updated to the new `HubbardStructure` in advance, without relabeling the initial `HubbardStructure`:

```python
if not self.should_check_convergence():
    self.ctx.current_hubbard_structure = workchain.outputs.hubbard_structure
```

I'm not sure why this is done prior to this if-statement:

```python
if not self.inputs.meta_convergence:
    self.report('meta convergence is switched off, so not checking convergence of Hubbard parameters.')
    self.ctx.is_converged = True
```

@bastonero would be great to briefly explain this.

Since the current `HubbardStructure` is already updated, the relabeling step won't be triggered:

```python
if not is_intersite_hubbard(workchain.outputs.hubbard_structure.hubbard):
    for site in workchain.outputs.hubbard.dict.sites:
        if not site['type'] == site['new_type']:
            self.report('new types have been detected: relabeling the structure and starting new iteration.')
            result = structure_relabel_kinds(
                self.ctx.current_hubbard_structure, workchain.outputs.hubbard, self.ctx.current_magnetic_moments
            )
```

I think this is the source of an error that I'm currently facing: the HUBBARD card in QE contains four different U values, but there is still only one kind in the `HubbardStructure`. Please correct me if this assumption is wrong.

Edit: Therefore, the relabeling should also be performed in these cases.

Depending on the reasoning behind not including the update of the `HubbardStructure` in the second if-statement, I would suggest including it there, as this should resolve the issue. If it is needed in the current logic, we should add something so that the relabeling is always performed.
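As a standalone sketch of the condition at stake, the relabeling is needed whenever any site's new type differs from its old one. The names below are illustrative stand-ins, not the actual aiida-hubbard API:

```python
# Hypothetical sketch: `sites` mimics the shape of
# `workchain.outputs.hubbard.dict.sites` as plain dicts.

def needs_relabeling(sites):
    """Return True if hp.x assigned a new kind to any site."""
    return any(site['type'] != site['new_type'] for site in sites)

# One Mn site keeps its kind, another is assigned a new one, so the
# structure must be relabeled before the next PwRelaxWorkChain runs.
sites = [
    {'type': 'Mn', 'new_type': 'Mn'},
    {'type': 'Mn', 'new_type': 'Mn1'},
]
print(needs_relabeling(sites))  # True
```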

@bastonero

I don't really understand the problem you're facing. When `meta_convergence` is `False`, it means "do not check convergence, and stop the cycle". But you're saying you would still need the relabeling before outputting the `hubbard_structure`?

t-reents commented Feb 12, 2024

`meta_convergence` is still `True`; I'm only skipping the first relaxation. But since I'm skipping the relaxation, `self.should_check_convergence()` returns `False` after the first iteration, and the relabeling is skipped as well. So the new `HubbardStructure` with the calculated U values is passed to the `PwRelaxWorkChain` without relabeling, which causes problems.

So my point was that I would only update `self.ctx.current_hubbard_structure` in advance if `meta_convergence` is `False`.

@bastonero

I see the issue, but I don't think the problem is `meta_convergence`; that's not something that can change over the cycles.
I think one just needs to add the relabelling after the first if-statement, and move the relabelling logic into a utility function or something similar, so as not to duplicate the code.


t-reents commented Feb 12, 2024

We were actually talking about the same logic. I just misread the outline and thought that `check_convergence` was not wrapped in `should_check_convergence` (which already confused me :D). So I assumed `check_convergence` was called in any case, and that the relabeling wasn't performed because the structure had already been updated in `inspect_hp`, which wouldn't trigger a relabeling since the kinds wouldn't differ.

This being said, I totally agree with moving this logic into a function and calling it in the two scenarios accordingly. If you don't mind, I can open a PR to do so.

Sorry for the confusion!

P.S.: I updated my initial comment accordingly.

@bastonero

Sure, please go ahead, thanks!

bastonero added a commit that referenced this issue Apr 11, 2024
Fixes #61 

The `ctx.current_hubbard_structure` is not relabeled in `SelfConsistentHubbardWorkChain`
if `skip_relax_iterations != 0`. This can cause problems in the subsequent `PwRelaxWorkChains`
if new types are introduced. Therefore, the relabeling step is moved to a separate method that
is called in any case.
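The refactoring described above can be sketched as follows. This is an illustrative stand-in, not the actual aiida-hubbard implementation: the class, attribute, and method names are hypothetical, and plain dicts replace the AiiDA `HubbardStructure`:

```python
# Hypothetical sketch: one relabeling method that every iteration calls,
# including iterations where the convergence check is skipped via
# `skip_relax_iterations`, so the structure handed to the next
# PwRelaxWorkChain always carries distinct kinds.

class SelfConsistentSketch:
    def __init__(self):
        # stand-in for self.ctx.current_hubbard_structure
        self.current_hubbard_structure = {'kinds': ['Mn']}

    def relabel_hubbard_structure(self, hubbard_sites):
        """Relabel kinds whenever hp.x introduced new types."""
        if any(site['type'] != site['new_type'] for site in hubbard_sites):
            # the real workchain would call structure_relabel_kinds here
            self.current_hubbard_structure['kinds'] = sorted(
                {site['new_type'] for site in hubbard_sites}
            )


wc = SelfConsistentSketch()
wc.relabel_hubbard_structure([
    {'type': 'Mn', 'new_type': 'Mn'},
    {'type': 'Mn', 'new_type': 'Mn1'},
])
print(wc.current_hubbard_structure['kinds'])  # ['Mn', 'Mn1']
```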

Co-authored-by: Lorenzo <[email protected]>