Commit 8e16170

Merge branch 'master' into new/skip_relax_iterations

bastonero authored Dec 6, 2023
2 parents 5b20e18 + 0629c8e commit 8e16170
Showing 17 changed files with 403 additions and 49 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -1,6 +1,6 @@
# aiida-quantumespresso-hp
AiiDA plugin for the Hubbard module of Quantum ESPRESSO.
The plugin requires HP v6.5 or above and is not compatible with older versions.
The plugin requires HP v7.2 or above and is not compatible with older versions.

# Requirements
This package depends directly on `aiida-core>=2.0.0` and the `aiida-quantumespresso>=4.0.0` package.
40 changes: 27 additions & 13 deletions docs/source/1_computing_hubbard.ipynb
@@ -19,7 +19,7 @@
"* __DFPT calculation__: use the {py:class}`~aiida_quantumespresso_hp.workflow.hp.base.HpBaseWorkChain` to do a self-consistent perturbation calculation to predict the Hubbard parameters.\n",
"\n",
"In this tutorial we will make use of the silicon structure to give you an overall understanding of the usage of the package.\n",
"If you are interested in more advanced features, please have a look at the [next tutorial](./2_parallel_hubbard.ipynb) or to the [how tos](../howto/index.rst).\n",
"If you are interested in more advanced features, please have a look at the [next tutorial](./2_parallel_hubbard.ipynb) or to the [how tos](howto).\n",
"\n",
"Let's get started!"
]
@@ -141,10 +141,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the desired interactions has been initialized correctly. This is important because ``hp.x`` needs to know which atoms need to be perturbed. As you will see later, the ``hp.x`` will take care of adding the remaining interactions with neighbouring atoms.\n",
"As you can see, the desired interactions have been initialized correctly. \n",
"This is important because `hp.x` needs to know which atoms need to be perturbed. \n",
"As you will see later, `hp.x` will take care of adding the remaining interactions with neighbouring atoms.\n",
"\n",
":::{important}\n",
"When you will use your own structures, make sure to have your 'Hubbard atoms' first in the list of atoms. This is due to the way the ``hp.x`` routine works internally, requiring those to be first. You can simply do this with the following snippet (IF THE NODE IS YET NOT STORED!):\n",
"When you use your own structures, make sure to have your 'Hubbard atoms' first in the list of atoms.\n",
"This is due to the way the `hp.x` routine works internally, requiring those to be first.\n",
"You can simply do this with the following snippet (IF THE NODE IS YET NOT STORED!):\n",
"\n",
"```python\n",
"from aiida_quantumespresso.utils.hubbard import HubbardUtils\n",
@@ -161,7 +165,7 @@
"## Calculating the SCF ground-state\n",
"\n",
"Now that we have defined the structure, we can calculate its ground-state via an SCF using the `PwBaseWorkChain`.\n",
"We can fill the inputs of the builder of the PwBaseWorkChain through the `get_builder_from_protocol`."
"We can fill the inputs of the builder of the `PwBaseWorkChain` through the `get_builder_from_protocol()` method."
]
},
{
@@ -197,7 +201,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can notice from the results, the workchain (actually, the `PwCalculation`!) has a `remote_folder` output namespace. This is what we need in order to run the `HpBaseWorkChain`. "
"As you can notice from the results, the workchain (actually, the `PwCalculation`!) has a `remote_folder` output.\n",
"This is what we need in order to run the `HpBaseWorkChain`. "
]
},
{
@@ -208,7 +213,7 @@
"## DFPT calculation of Hubbard parameters\n",
"\n",
"We can perturb the ground-state previously found to compute the Hubbard parameters.\n",
"Here we will need to use the `HpBaseWorkChain`, and link the `parent folder` previously produced."
"Here we will need to use the `HpBaseWorkChain`, and link the `remote_folder` previously produced via the `parent_scf` input."
]
},
{
@@ -286,12 +291,15 @@
"source": [
"## Final considerations\n",
"\n",
"We managed to compute the Hubbard parameters __fully__ ___ab initio___! :tada:\n",
"Although, as you could have noticed, there were some quite few passages to do by end. Moreover, there are the following considerations:\n",
"\n",
"1. For larger and more complex structures you will need to perturb many more atoms. Moreover, to get converged results you will need more the one q points. Clieck [here](./2_parallel_hubbard.ipynb). to learn how to parallelize over atoms and q points\n",
"2. To do a _full_ self-consistent calculation of these parameters, you should _relax_ your structure with the Hubbard parameters from the ``hp.x`` run, repeat the steps of this tutorial, relax _again_, and do this procedure over and over till convergence. Learn the automated way [here](./3_self_consistent.ipynb)!\n",
"We managed to compute the Hubbard parameters of LiCoO2 __fully__ ___ab initio___! :tada:\n",
"However, we had to execute quite a few steps manually, which can be tedious and error prone.\n",
"Moreover, there are the following considerations:\n",
"\n",
"1. For larger and more complex structures you will need to perturb many more atoms.\n",
" Moreover, to get converged results you will need more than one q point.\n",
" Click [here](./2_parallel_hubbard.ipynb) to learn how to parallelize over atoms and q points.\n",
"2. To do a _full_ self-consistent calculation of these parameters, you should _relax_ your structure with the Hubbard parameters from the `hp.x` run, repeat the steps of this tutorial, relax _again_, and do this procedure over and over till convergence.\n",
" Learn the automated way [here](./3_self_consistent.ipynb)!\n",
"\n",
":::{admonition} Learn more and in detail\n",
":class: hint\n",
@@ -302,9 +310,15 @@
":::\n",
"\n",
":::{note}\n",
"We suggest to proceed first with the tutorial for point (1) and then the one for point (2). Nevertheless, tutorial (1) is not strictly necessary for (1).\n",
"We suggest to proceed first with the tutorial for point (1) and then the one for point (2). \n",
"Nevertheless, tutorial (1) is not strictly necessary for (2).\n",
":::"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
@@ -323,7 +337,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
"version": "3.10.13"
},
"orig_nbformat": 4,
"vscode": {
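
Pieced together, the notebook changes above describe a three-step flow: put the Hubbard atoms first, compute the SCF ground state through `PwBaseWorkChain.get_builder_from_protocol()`, and feed the resulting `remote_folder` to the `HpBaseWorkChain` via `parent_scf`. Below is a condensed sketch of that flow, assuming a configured AiiDA profile, installed `pw.x`/`hp.x` codes and a pseudopotential family; the code labels, structure PK, and Hubbard manifold are placeholders:

```python
from aiida import load_profile, orm
from aiida.engine import run_get_node
from aiida_quantumespresso.data.hubbard_structure import HubbardStructureData
from aiida_quantumespresso.utils.hubbard import HubbardUtils
from aiida_quantumespresso.workflows.pw.base import PwBaseWorkChain
from aiida_quantumespresso_hp.workflows.hp.base import HpBaseWorkChain

load_profile()

# Promote a plain structure to a HubbardStructureData, declare the on-site
# interaction, and reorder so the Hubbard atoms come first, while the node
# is still unstored (placeholder PK and manifold).
structure = orm.load_node(1234)
hubbard_structure = HubbardStructureData.from_structure(structure)
hubbard_structure.initialize_onsites_hubbard('Co', '3d')
HubbardUtils(hubbard_structure).reorder_atoms()

# SCF ground state, with the builder filled through the protocol defaults.
builder = PwBaseWorkChain.get_builder_from_protocol(
    code=orm.load_code('pw@localhost'),  # placeholder code label
    structure=hubbard_structure,
)
scf_results, _ = run_get_node(builder)

# DFPT calculation of the Hubbard parameters, linking the SCF `remote_folder`
# through the `parent_scf` input.
qpoints = orm.KpointsData()
qpoints.set_kpoints_mesh([2, 2, 2])

hp_builder = HpBaseWorkChain.get_builder()
hp_builder.hp.code = orm.load_code('hp@localhost')  # placeholder code label
hp_builder.hp.parent_scf = scf_results['remote_folder']
hp_builder.hp.qpoints = qpoints
hp_results, _ = run_get_node(hp_builder)
print(hp_results['hubbard_structure'])  # structure carrying the computed parameters
```

The `hubbard_structure` output then feeds the relabelling and convergence checks implemented in `workflows/hubbard.py` further down in this diff.
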
2 changes: 1 addition & 1 deletion docs/source/index.md
@@ -155,7 +155,7 @@ To the reference guides

If you use this plugin for your research, please cite the following work:

> Lorenzo Bastonero, Cristiano Malica, Marnik Bercx, Eric Macke, Iurii Timrov, Nicola Marzari, and Sebastiaan P. Huber, [*Automated self-consistent prediction of extended Hubbard parameters for Li-ion batteries*](), npj Comp. Mat., **?**, ? (2023)
> Lorenzo Bastonero, Cristiano Malica, Marnik Bercx, Eric Macke, Iurii Timrov, Nicola Marzari, and Sebastiaan P. Huber, [*Automated self-consistent prediction of extended Hubbard parameters for Li-ion batteries*](https://media.giphy.com/media/zyclIRxMwlY40/giphy.gif), npj Comp. Mat., **?**, ? (2023)
> Sebastiaan. P. Huber, Spyros Zoupanos, Martin Uhrin, Leopold Talirz, Leonid Kahle, Rico Häuselmann, Dominik Gresch, Tiziano Müller, Aliaksandr V. Yakutovich, Casper W. Andersen, Francisco F. Ramirez, Carl S. Adorf, Fernando Gargiulo, Snehal Kumbhar, Elsa Passaro, Conrad Johnston, Andrius Merkys, Andrea Cepellotti, Nicolas Mounet, Nicola Marzari, Boris Kozinsky, and Giovanni Pizzi, [*AiiDA 1.0, a scalable computational infrastructure for automated reproducible workflows and data provenance*](https://doi.org/10.1038/s41597-020-00638-4), Scientific Data **7**, 300 (2020)
6 changes: 3 additions & 3 deletions pyproject.toml
@@ -37,15 +37,15 @@ Documentation = 'https://aiida-quantumespresso-hp.readthedocs.io'

[project.optional-dependencies]
docs = [
'myst-nb~=0.17',
'myst-nb~=1.0',
'jupytext>=1.11.2,<1.15.0',
'sphinx-togglebutton',
'sphinx~=5.2',
'sphinx~=6.2',
'sphinx-copybutton~=0.5.2',
'sphinx-book-theme~=1.0.1',
'sphinx-design~=0.4.1',
'sphinxcontrib-details-directive~=0.1.0',
'sphinx-autoapi~=2.0.1',
'sphinx-autoapi~=3.0',
]
pre-commit = [
'pre-commit~=2.17',
4 changes: 4 additions & 0 deletions src/aiida_quantumespresso_hp/calculations/hp.py
@@ -191,6 +191,10 @@ def define(cls, spec):
message='The electronic minimization cycle did not reach self-consistency.')
spec.exit_code(462, 'ERROR_COMPUTING_CHOLESKY',
message='The code failed during the cholesky factorization.')
spec.exit_code(490, 'ERROR_MISSING_CHI_MATRICES',
message='The code failed to reconstruct the full chi matrix as some chi matrices were missing.')
spec.exit_code(495, 'ERROR_INCOMPATIBLE_FFT_GRID',
message='The code failed due to incompatibility between the FFT grid and the parallelization options.')

@classproperty
def filename_output_hubbard_chi(cls): # pylint: disable=no-self-argument
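
Once a calculation fails with one of these new codes, the status and message are queryable directly from the process node; a short sketch, with a placeholder PK:

```python
from aiida import load_profile, orm

load_profile()

calculation = orm.load_node(1234)  # placeholder PK of a finished HpCalculation
if calculation.exit_status in (490, 495):
    # e.g. 495 -> 'The code failed due to incompatibility between the FFT grid
    # and the parallelization options.'
    print(calculation.exit_status, calculation.exit_message)
```
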
26 changes: 22 additions & 4 deletions src/aiida_quantumespresso_hp/parsers/hp.py
@@ -15,16 +15,24 @@ class HpParser(Parser):

def parse(self, **kwargs):
"""Parse the contents of the output files retrieved in the `FolderData`."""
self.exit_code_stdout = None # pylint: disable=attribute-defined-outside-init

try:
self.retrieved
except exceptions.NotExistent:
return self.exit_codes.ERROR_NO_RETRIEVED_FOLDER

# The stdout is always parsed by default.
exit_code = self.parse_stdout()
logs = self.parse_stdout()

# Check for specific known problems that can cause a premature termination of the calculation
exit_code = self.validate_premature_exit(logs)
if exit_code:
return exit_code

if self.exit_code_stdout:
return self.exit_code_stdout

# If it only initialized, then we do NOT parse the `{prefix}.Hubbard_parameters.dat`
# and the `{prefix}.chi.dat` files.
# This check is needed since the `hp.x` routine will print the `{prefix}.Hubbard_parameters.dat`
@@ -88,7 +96,7 @@ def parse_stdout(self):
Parse the output parameters from the output of an Hp calculation written to standard out.
:return: optional exit code in case of an error
:return: log messages
"""
from .parse_raw.hp import parse_raw_output

@@ -109,14 +117,24 @@
else:
self.out('parameters', orm.Dict(parsed_data))

# If the stdout was incomplete, most likely the job was interrupted before it could cleanly finish, so the
# output files are most likely corrupt and cannot be restarted from
if 'ERROR_OUTPUT_STDOUT_INCOMPLETE' in logs['error']:
self.exit_code_stdout = self.exit_codes.ERROR_OUTPUT_STDOUT_INCOMPLETE # pylint: disable=attribute-defined-outside-init

return logs

def validate_premature_exit(self, logs):
"""Analyze problems that will cause a pre-mature termination of the calculation, controlled or not."""
for exit_status in [
'ERROR_OUT_OF_WALLTIME',
'ERROR_INVALID_NAMELIST',
'ERROR_INCORRECT_ORDER_ATOMIC_POSITIONS',
'ERROR_MISSING_PERTURBATION_FILE',
'ERROR_CONVERGENCE_NOT_REACHED',
'ERROR_OUT_OF_WALLTIME',
'ERROR_COMPUTING_CHOLESKY',
'ERROR_OUTPUT_STDOUT_INCOMPLETE',
'ERROR_MISSING_CHI_MATRICES',
'ERROR_INCOMPATIBLE_FFT_GRID',
]:
if exit_status in logs['error']:
return self.exit_codes.get(exit_status)
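
The refactored `parse` method above separates log collection (`parse_stdout`) from exit-code selection (`validate_premature_exit`), so a specific known failure always takes precedence over the generic incomplete-stdout code. A standalone sketch of that priority logic, with a plain dict standing in for AiiDA's logging container:

```python
KNOWN_ERRORS = [
    'ERROR_OUT_OF_WALLTIME',
    'ERROR_INVALID_NAMELIST',
    'ERROR_INCORRECT_ORDER_ATOMIC_POSITIONS',
    'ERROR_MISSING_PERTURBATION_FILE',
    'ERROR_CONVERGENCE_NOT_REACHED',
    'ERROR_COMPUTING_CHOLESKY',
    'ERROR_MISSING_CHI_MATRICES',
    'ERROR_INCOMPATIBLE_FFT_GRID',
]

def pick_exit_code(logs: dict) -> str | None:
    """Return the first known specific error, falling back to the generic one."""
    for error in KNOWN_ERRORS:
        if error in logs['error']:
            return error
    if 'ERROR_OUTPUT_STDOUT_INCOMPLETE' in logs['error']:
        return 'ERROR_OUTPUT_STDOUT_INCOMPLETE'
    return None

# The specific Cholesky error wins over the generic incomplete-stdout one.
print(pick_exit_code({'error': ['ERROR_OUTPUT_STDOUT_INCOMPLETE', 'ERROR_COMPUTING_CHOLESKY']}))
```
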
4 changes: 3 additions & 1 deletion src/aiida_quantumespresso_hp/parsers/parse_raw/hp.py
@@ -31,7 +31,7 @@ def parse_raw_output(stdout):
detect_important_message(logs, line)

# A calculation that will only perturb a single atom will only print one line
match = re.search(r'.*The grid of q-points.*\s+([0-9])+\s+q-points.*', line)
match = re.search(r'.*The grid of q-points.*\s+([0-9]+)+\s+q-points.*', line)
if match:
parsed_data['number_of_qpoints'] = int(match.group(1))

@@ -84,6 +84,8 @@
'Maximum CPU time exceeded': 'ERROR_OUT_OF_WALLTIME',
'reading inputhp namelist': 'ERROR_INVALID_NAMELIST',
'problems computing cholesky': 'ERROR_COMPUTING_CHOLESKY',
'Reconstruction problem: some chi were not found': 'ERROR_MISSING_CHI_MATRICES',
'incompatible FFT grid': 'ERROR_INCOMPATIBLE_FFT_GRID',
REG_ERROR_CONVERGENCE_NOT_REACHED: 'ERROR_CONVERGENCE_NOT_REACHED',
ERROR_POSITIONS: 'ERROR_INCORRECT_ORDER_ATOMIC_POSITIONS'
},
2 changes: 1 addition & 1 deletion src/aiida_quantumespresso_hp/utils/general.py
@@ -28,7 +28,7 @@ def is_perturb_only_atom(parameters: dict) -> int | None:
match = None # making sure that if the dictionary is empty we don't raise an `UnboundLocalError`

for key in parameters.keys():
match = re.search(r'perturb_only_atom.*([0-9]).*', key)
match = re.search(r'perturb_only_atom.*?(\d+).*', key)
if match:
if not parameters[key]: # also the key must be `True`
match = None # making sure to have `None`
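
The two regex fixes above (in `parse_raw/hp.py` and `utils/general.py`) address the same class of bug: a quantified single-character group such as `([0-9])+` keeps only its last repetition, so any multi-digit q-point count or atom index was silently truncated. A quick demonstration on illustrative inputs:

```python
import re

# q-point count: the old pattern kept only the last digit of '64'.
stdout_line = 'The grid of q-points ( 4, 4, 4)  64  q-points'
old = re.search(r'.*The grid of q-points.*\s+([0-9])+\s+q-points.*', stdout_line)
new = re.search(r'.*The grid of q-points.*\s+([0-9]+)+\s+q-points.*', stdout_line)
print(old.group(1), new.group(1))  # -> 4 64

# perturbed atom index: the old pattern read atom 13 as atom 3.
key = 'perturb_only_atom(13)'
old = re.search(r'perturb_only_atom.*([0-9]).*', key)
new = re.search(r'perturb_only_atom.*?(\d+).*', key)
print(old.group(1), new.group(1))  # -> 3 13
```
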
39 changes: 23 additions & 16 deletions src/aiida_quantumespresso_hp/workflows/hubbard.py
@@ -140,6 +140,7 @@ def define(cls, spec):
spec.inputs.validator = validate_inputs
spec.inputs['hubbard']['hp'].validator = None

# yapf: disable
spec.outline(
cls.setup,
while_(cls.should_run_iteration)(
@@ -159,9 +160,13 @@
if_(cls.should_check_convergence)(
cls.check_convergence,
),
if_(cls.should_clean_workdir)(
cls.clean_iteration,
),
),
cls.run_results,
)
# yapf: enable

spec.output('hubbard_structure', valid_type=HubbardStructureData, required=False,
help='The Hubbard structure containing the structure and associated Hubbard parameters.')
@@ -578,6 +583,8 @@ def inspect_hp(self):

def check_convergence(self):
"""Check the convergence of the Hubbard parameters."""
from aiida_quantumespresso.utils.hubbard import is_intersite_hubbard

workchain = self.ctx.workchains_hp[-1]

# We store in memory the parameters before relabelling to make the comparison easier.
@@ -594,15 +601,17 @@
# We check if new types were created, in which case we relabel the `HubbardStructureData`
self.ctx.current_hubbard_structure = workchain.outputs.hubbard_structure

for site in workchain.outputs.hubbard.dict.sites:
if not site['type'] == site['new_type']:
self.report('new types have been detected: relabeling the structure and starting new iteration.')
result = structure_relabel_kinds(
self.ctx.current_hubbard_structure, workchain.outputs.hubbard, self.ctx.current_magnetic_moments
)
self.ctx.current_hubbard_structure = result['hubbard_structure']
if self.ctx.current_magnetic_moments is not None:
self.ctx.current_magnetic_moments = result['starting_magnetization']
if not is_intersite_hubbard(workchain.outputs.hubbard_structure.hubbard):
for site in workchain.outputs.hubbard.dict.sites:
if not site['type'] == site['new_type']:
self.report('new types have been detected: relabeling the structure and starting new iteration.')
result = structure_relabel_kinds(
self.ctx.current_hubbard_structure, workchain.outputs.hubbard, self.ctx.current_magnetic_moments
)
self.ctx.current_hubbard_structure = result['hubbard_structure']
if self.ctx.current_magnetic_moments is not None:
self.ctx.current_magnetic_moments = result['starting_magnetization']
break

if not len(ref_params) == len(new_params):
self.report('The new and old Hubbard parameters have different lengths. Assuming this is the first cycle.')
@@ -642,14 +651,12 @@ def run_results(self):
self.report(f'Hubbard parameters self-consistently converged in {self.ctx.iteration} iterations')
self.out('hubbard_structure', self.ctx.current_hubbard_structure)

def on_terminated(self):
"""Clean the working directories of all child calculations if `clean_workdir=True` in the inputs."""
super().on_terminated()

if self.inputs.clean_workdir.value is False:
self.report('remote folders will not be cleaned')
return
def should_clean_workdir(self):
"""Whether to clean the work directories at each iteration."""
return self.inputs.clean_workdir.value

def clean_iteration(self):
"""Clean all work directiories of the current iteration."""
cleaned_calcs = []

for called_descendant in self.node.called_descendants:
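
The truncated `clean_iteration` body above follows the standard AiiDA cleanup idiom; a sketch of how it plausibly continues (not necessarily the verbatim implementation):

```python
from aiida import orm

def clean_iteration(self):
    """Clean all work directories of the current iteration."""
    cleaned_calcs = []

    for called_descendant in self.node.called_descendants:
        if isinstance(called_descendant, orm.CalcJobNode):
            try:
                # `_clean` removes the content of the remote working directory.
                called_descendant.outputs.remote_folder._clean()  # pylint: disable=protected-access
                cleaned_calcs.append(called_descendant.pk)
            except (IOError, OSError, KeyError):
                pass

    if cleaned_calcs:
        self.report(f"cleaned remote folders of calculations: {' '.join(map(str, cleaned_calcs))}")
```
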
@@ -0,0 +1,7 @@

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine scale_sym_ops (3):
incompatible FFT grid
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

stopping ...
@@ -0,0 +1,7 @@

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine reconstruct_full_chi (1):
Reconstruction problem: some chi were not found
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

stopping ...
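
These two fixtures exercise the new entries in the `detect_important_message` table from `parse_raw/hp.py` above; a minimal standalone sketch of how such stdout is classified:

```python
MESSAGE_MAP = {
    'problems computing cholesky': 'ERROR_COMPUTING_CHOLESKY',
    'Reconstruction problem: some chi were not found': 'ERROR_MISSING_CHI_MATRICES',
    'incompatible FFT grid': 'ERROR_INCOMPATIBLE_FFT_GRID',
}

def detect_errors(stdout: str) -> list:
    """Return the error labels triggered anywhere in the given stdout."""
    return [label for message, label in MESSAGE_MAP.items() if message in stdout]

chi_fixture = """
Error in routine reconstruct_full_chi (1):
Reconstruction problem: some chi were not found
"""
print(detect_errors(chi_fixture))  # -> ['ERROR_MISSING_CHI_MATRICES']
```
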
@@ -0,0 +1,13 @@
&INPUTHP
conv_thr_chi = 1.0000000000d-06
determine_q_mesh_only = .true.
find_atpert = 3
iverbosity = 2
max_seconds = 3.4200000000d+03
nq1 = 4
nq2 = 4
nq3 = 4
outdir = 'out'
perturb_only_atom(13) = .true.
prefix = 'aiida'
/
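
The `perturb_only_atom(13)` key in this namelist is exactly the kind of input the fixed regex in `utils/general.py` must handle: with the old pattern, atom 13 would have been read as atom 3. A usage sketch, assuming `is_perturb_only_atom` returns the matched index as an integer, per its `int | None` signature:

```python
from aiida_quantumespresso_hp.utils.general import is_perturb_only_atom

print(is_perturb_only_atom({'perturb_only_atom(13)': True}))   # -> 13
print(is_perturb_only_atom({'perturb_only_atom(13)': False}))  # -> None (the flag must be true)
print(is_perturb_only_atom({'nq1': 4}))                        # -> None (no perturbation key)
```
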