diff --git a/README.md b/README.md index 1c6e2f9..8253251 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ # aiida-quantumespresso-hp AiiDA plugin for the Hubbard module of Quantum ESPRESSO. -The plugin requires HP v6.5 or above and is not compatible with older versions. +The plugin requires HP v7.2 or above and is not compatible with older versions. # Requirements This package depends directly on `aiida-core>=2.0.0` and the `aiida-quantumespresso>=4.0.0` package. diff --git a/docs/source/1_computing_hubbard.ipynb b/docs/source/1_computing_hubbard.ipynb index cba2ae4..431664f 100644 --- a/docs/source/1_computing_hubbard.ipynb +++ b/docs/source/1_computing_hubbard.ipynb @@ -19,7 +19,7 @@ "* __DFPT calculation__: use the {py:class}`~aiida_quantumespresso_hp.workflow.hp.base.HpBaseWorkChain` to do a self-consistent perturbation calculation to predict the Hubbard parameters.\n", "\n", "In this tutorial we will make use of the silicon structure to give you an overall understanding of the usage of the package.\n", - "If you are interested in more advanced features, please have a look at the [next tutorial](./2_parallel_hubbard.ipynb) or to the [how tos](../howto/index.rst).\n", + "If you are interested in more advanced features, please have a look at the [next tutorial](./2_parallel_hubbard.ipynb) or to the [how tos](howto).\n", "\n", "Let's get started!" ] @@ -141,10 +141,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "As you can see, the desired interactions has been initialized correctly. This is important because ``hp.x`` needs to know which atoms need to be perturbed. As you will see later, the ``hp.x`` will take care of adding the remaining interactions with neighbouring atoms.\n", + "As you can see, the desired interactions have been initialized correctly. \n", + "This is important because `hp.x` needs to know which atoms need to be perturbed. 
\n", + "As you will see later, `hp.x` will take care of adding the remaining interactions with neighbouring atoms.\n", "\n", ":::{important}\n", - "When you will use your own structures, make sure to have your 'Hubbard atoms' first in the list of atoms. This is due to the way the ``hp.x`` routine works internally, requiring those to be first. You can simply do this with the following snippet (IF THE NODE IS YET NOT STORED!):\n", + "When you use your own structures, make sure to have your 'Hubbard atoms' first in the list of atoms.\n", + "This is due to the way the `hp.x` routine works internally, requiring those to be first.\n", + "You can simply do this with the following snippet (IF THE NODE IS YET NOT STORED!):\n", "\n", "```python\n", "from aiida_quantumespresso.utils.hubbard import HubbardUtils\n", @@ -161,7 +165,7 @@ "## Calculating the SCF ground-state\n", "\n", "Now that we have defined the structure, we can calculate its ground-state via an SCF using the `PwBaseWorkChain`.\n", - "We can fill the inputs of the builder of the PwBaseWorkChain through the `get_builder_from_protocol`." + "We can fill the inputs of the builder of the `PwBaseWorkChain` through the `get_builder_from_protocol()` method." ] }, { @@ -197,7 +201,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "As you can notice from the results, the workchain (actually, the `PwCalculation`!) has a `remote_folder` output namespace. This is what we need in order to run the `HpBaseWorkChain`. " + "As you can notice from the results, the workchain (actually, the `PwCalculation`!) has a `remote_folder` output.\n", + "This is what we need in order to run the `HpBaseWorkChain`. " ] }, { @@ -208,7 +213,7 @@ "## DFPT calculation of Hubbard parameters\n", "\n", "We can perturb the ground-state previously found to compute the Hubbard parameters.\n", - "Here we will need to use the `HpBaseWorkChain`, and link the `parent folder` previously produced." 
+ "Here we will need to use the `HpBaseWorkChain`, and link the `remote_folder` previously produced via the `parent_scf` input." ] }, { @@ -286,12 +291,15 @@ "source": [ "## Final considerations\n", "\n", - "We managed to compute the Hubbard parameters __fully__ ___ab initio___! :tada:\n", - "Although, as you could have noticed, there were some quite few passages to do by end. Moreover, there are the following considerations:\n", - "\n", - "1. For larger and more complex structures you will need to perturb many more atoms. Moreover, to get converged results you will need more the one q points. Clieck [here](./2_parallel_hubbard.ipynb). to learn how to parallelize over atoms and q points\n", - "2. To do a _full_ self-consistent calculation of these parameters, you should _relax_ your structure with the Hubbard parameters from the ``hp.x`` run, repeat the steps of this tutorial, relax _again_, and do this procedure over and over till convergence. Learn the automated way [here](./3_self_consistent.ipynb)!\n", + "We managed to compute the Hubbard parameters of LiCoO2 __fully__ ___ab initio___! :tada:\n", + "However, we had to execute quite a few steps manually, which can be tedious and error prone.\n", + "Moreover, there are the following considerations:\n", "\n", + "1. For larger and more complex structures you will need to perturb many more atoms.\n", + " Moreover, to get converged results you will need more than one q point.\n", + " Click [here](./2_parallel_hubbard.ipynb) to learn how to parallelize over atoms and q points.\n", + "2. 
To do a _full_ self-consistent calculation of these parameters, you should _relax_ your structure with the Hubbard parameters from the `hp.x` run, repeat the steps of this tutorial, relax _again_, and do this procedure over and over till convergence.\n", + " Learn the automated way [here](./3_self_consistent.ipynb)!\n", "\n", ":::{admonition} Learn more and in details\n", ":class: hint\n", @@ -302,9 +310,15 @@ ":::\n", "\n", ":::{note}\n", - "We suggest to proceed first with the tutorial for point (1) and then the one for point (2). Nevertheless, tutorial (1) is not strictly necessary for (1).\n", + "We suggest to proceed first with the tutorial for point (1) and then the one for point (2). \n", + "Nevertheless, tutorial (1) is not strictly necessary for (2).\n", ":::" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [] } ], "metadata": { @@ -323,7 +337,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.10" + "version": "3.10.13" }, "orig_nbformat": 4, "vscode": { diff --git a/docs/source/index.md b/docs/source/index.md index a578ca6..8cd20c9 100644 --- a/docs/source/index.md +++ b/docs/source/index.md @@ -155,7 +155,7 @@ To the reference guides If you use this plugin for your research, please cite the following work: -> Lorenzo Bastonero, Cristiano Malica, Marnik Bercx, Eric Macke, Iurii Timrov, Nicola Marzari, and Sebastiaan P. Huber, [*Automated self-consistent prediction of extended Hubbard parameters for Li-ion batteries*](), npj Comp. Mat., **?**, ? (2023) +> Lorenzo Bastonero, Cristiano Malica, Marnik Bercx, Eric Macke, Iurii Timrov, Nicola Marzari, and Sebastiaan P. Huber, [*Automated self-consistent prediction of extended Hubbard parameters for Li-ion batteries*](https://media.giphy.com/media/zyclIRxMwlY40/giphy.gif), npj Comp. Mat., **?**, ? (2023) > Sebastiaan. P. 
Huber, Spyros Zoupanos, Martin Uhrin, Leopold Talirz, Leonid Kahle, Rico Häuselmann, Dominik Gresch, Tiziano Müller, Aliaksandr V. Yakutovich, Casper W. Andersen, Francisco F. Ramirez, Carl S. Adorf, Fernando Gargiulo, Snehal Kumbhar, Elsa Passaro, Conrad Johnston, Andrius Merkys, Andrea Cepellotti, Nicolas Mounet, Nicola Marzari, Boris Kozinsky, and Giovanni Pizzi, [*AiiDA 1.0, a scalable computational infrastructure for automated reproducible workflows and data provenance*](https://doi.org/10.1038/s41597-020-00638-4), Scientific Data **7**, 300 (2020) diff --git a/pyproject.toml b/pyproject.toml index be3d813..47ad3fc 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -37,15 +37,15 @@ Documentation = 'https://aiida-quantumespresso-hp.readthedocs.io' [project.optional-dependencies] docs = [ - 'myst-nb~=0.17', + 'myst-nb~=1.0', 'jupytext>=1.11.2,<1.15.0', 'sphinx-togglebutton', - 'sphinx~=5.2', + 'sphinx~=6.2', 'sphinx-copybutton~=0.5.2', 'sphinx-book-theme~=1.0.1', 'sphinx-design~=0.4.1', 'sphinxcontrib-details-directive~=0.1.0', - 'sphinx-autoapi~=2.0.1', + 'sphinx-autoapi~=3.0', ] pre-commit = [ 'pre-commit~=2.17', diff --git a/src/aiida_quantumespresso_hp/calculations/hp.py b/src/aiida_quantumespresso_hp/calculations/hp.py index f4b49aa..b2fd397 100644 --- a/src/aiida_quantumespresso_hp/calculations/hp.py +++ b/src/aiida_quantumespresso_hp/calculations/hp.py @@ -191,6 +191,10 @@ def define(cls, spec): message='The electronic minimization cycle did not reach self-consistency.') spec.exit_code(462, 'ERROR_COMPUTING_CHOLESKY', message='The code failed during the cholesky factorization.') + spec.exit_code(490, 'ERROR_MISSING_CHI_MATRICES', + message='The code failed to reconstruct the full chi matrix as some chi matrices were missing') + spec.exit_code(495, 'ERROR_INCOMPATIBLE_FFT_GRID', + message='The code failed due to incompatibility between the FFT grid and the parallelization options.') @classproperty def filename_output_hubbard_chi(cls): # pylint: 
disable=no-self-argument diff --git a/src/aiida_quantumespresso_hp/parsers/hp.py b/src/aiida_quantumespresso_hp/parsers/hp.py index 35ed8db..1c039f2 100644 --- a/src/aiida_quantumespresso_hp/parsers/hp.py +++ b/src/aiida_quantumespresso_hp/parsers/hp.py @@ -15,16 +15,24 @@ class HpParser(Parser): def parse(self, **kwargs): """Parse the contents of the output files retrieved in the `FolderData`.""" + self.exit_code_stdout = None # pylint: disable=attribute-defined-outside-init + try: self.retrieved except exceptions.NotExistent: return self.exit_codes.ERROR_NO_RETRIEVED_FOLDER # The stdout is always parsed by default. - exit_code = self.parse_stdout() + logs = self.parse_stdout() + + # Check for specific known problems that can cause a pre-mature termination of the calculation + exit_code = self.validate_premature_exit(logs) if exit_code: return exit_code + if self.exit_code_stdout: + return self.exit_code_stdout + # If it only initialized, then we do NOT parse the `{prefix}.Hubbard_parameters.dat`` # and the {prefix}.chi.dat files. # This check is needed since the `hp.x` routine will print the `{prefix}.Hubbard_parameters.dat` @@ -88,7 +96,7 @@ def parse_stdout(self): Parse the output parameters from the output of a Hp calculation written to standard out. 
- :return: optional exit code in case of an error + :return: log messages """ from .parse_raw.hp import parse_raw_output @@ -109,14 +117,24 @@ def parse_stdout(self): else: self.out('parameters', orm.Dict(parsed_data)) + # If the stdout was incomplete, most likely the job was interrupted before it could cleanly finish, so the + # output files are most likely corrupt and cannot be restarted from + if 'ERROR_OUTPUT_STDOUT_INCOMPLETE' in logs['error']: + self.exit_code_stdout = self.exit_codes.ERROR_OUTPUT_STDOUT_INCOMPLETE # pylint: disable=attribute-defined-outside-init + + return logs + + def validate_premature_exit(self, logs): + """Analyze problems that will cause a pre-mature termination of the calculation, controlled or not.""" for exit_status in [ + 'ERROR_OUT_OF_WALLTIME', 'ERROR_INVALID_NAMELIST', 'ERROR_INCORRECT_ORDER_ATOMIC_POSITIONS', 'ERROR_MISSING_PERTURBATION_FILE', 'ERROR_CONVERGENCE_NOT_REACHED', - 'ERROR_OUT_OF_WALLTIME', 'ERROR_COMPUTING_CHOLESKY', - 'ERROR_OUTPUT_STDOUT_INCOMPLETE', + 'ERROR_MISSING_CHI_MATRICES', + 'ERROR_INCOMPATIBLE_FFT_GRID', ]: if exit_status in logs['error']: return self.exit_codes.get(exit_status) diff --git a/src/aiida_quantumespresso_hp/parsers/parse_raw/hp.py b/src/aiida_quantumespresso_hp/parsers/parse_raw/hp.py index 6b4f101..d26ddad 100644 --- a/src/aiida_quantumespresso_hp/parsers/parse_raw/hp.py +++ b/src/aiida_quantumespresso_hp/parsers/parse_raw/hp.py @@ -31,7 +31,7 @@ def parse_raw_output(stdout): detect_important_message(logs, line) # A calculation that will only perturb a single atom will only print one line - match = re.search(r'.*The grid of q-points.*\s+([0-9])+\s+q-points.*', line) + match = re.search(r'.*The grid of q-points.*\s+([0-9]+)+\s+q-points.*', line) if match: parsed_data['number_of_qpoints'] = int(match.group(1)) @@ -84,6 +84,8 @@ def detect_important_message(logs, line): 'Maximum CPU time exceeded': 'ERROR_OUT_OF_WALLTIME', 'reading inputhp namelist': 'ERROR_INVALID_NAMELIST', 'problems 
computing cholesky': 'ERROR_COMPUTING_CHOLESKY', + 'Reconstruction problem: some chi were not found': 'ERROR_MISSING_CHI_MATRICES', + 'incompatible FFT grid': 'ERROR_INCOMPATIBLE_FFT_GRID', REG_ERROR_CONVERGENCE_NOT_REACHED: 'ERROR_CONVERGENCE_NOT_REACHED', ERROR_POSITIONS: 'ERROR_INCORRECT_ORDER_ATOMIC_POSITIONS' }, diff --git a/src/aiida_quantumespresso_hp/utils/general.py b/src/aiida_quantumespresso_hp/utils/general.py index 3566bac..99c9fd0 100644 --- a/src/aiida_quantumespresso_hp/utils/general.py +++ b/src/aiida_quantumespresso_hp/utils/general.py @@ -28,7 +28,7 @@ def is_perturb_only_atom(parameters: dict) -> int | None: match = None # making sure that if the dictionary is empty we don't raise an `UnboundLocalError` for key in parameters.keys(): - match = re.search(r'perturb_only_atom.*([0-9]).*', key) + match = re.search(r'perturb_only_atom.*?(\d+).*', key) if match: if not parameters[key]: # also the key must be `True` match = None # making sure to have `None` diff --git a/src/aiida_quantumespresso_hp/workflows/hubbard.py b/src/aiida_quantumespresso_hp/workflows/hubbard.py index c4ab9ff..739e528 100644 --- a/src/aiida_quantumespresso_hp/workflows/hubbard.py +++ b/src/aiida_quantumespresso_hp/workflows/hubbard.py @@ -140,6 +140,7 @@ def define(cls, spec): spec.inputs.validator = validate_inputs spec.inputs['hubbard']['hp'].validator = None + # yapf: disable spec.outline( cls.setup, while_(cls.should_run_iteration)( @@ -159,9 +160,13 @@ def define(cls, spec): if_(cls.should_check_convergence)( cls.check_convergence, ), + if_(cls.should_clean_workdir)( + cls.clean_iteration, + ), ), cls.run_results, ) + # yapf: enable spec.output('hubbard_structure', valid_type=HubbardStructureData, required=False, help='The Hubbard structure containing the structure and associated Hubbard parameters.') @@ -578,6 +583,8 @@ def inspect_hp(self): def check_convergence(self): """Check the convergence of the Hubbard parameters.""" + from aiida_quantumespresso.utils.hubbard import 
is_intersite_hubbard + workchain = self.ctx.workchains_hp[-1] # We store in memory the parameters before relabelling to make the comparison easier. @@ -594,15 +601,17 @@ def check_convergence(self): # We check if new types were created, in which case we relabel the `HubbardStructureData` self.ctx.current_hubbard_structure = workchain.outputs.hubbard_structure - for site in workchain.outputs.hubbard.dict.sites: - if not site['type'] == site['new_type']: - self.report('new types have been detected: relabeling the structure and starting new iteration.') - result = structure_relabel_kinds( - self.ctx.current_hubbard_structure, workchain.outputs.hubbard, self.ctx.current_magnetic_moments - ) - self.ctx.current_hubbard_structure = result['hubbard_structure'] - if self.ctx.current_magnetic_moments is not None: - self.ctx.current_magnetic_moments = result['starting_magnetization'] + if not is_intersite_hubbard(workchain.outputs.hubbard_structure.hubbard): + for site in workchain.outputs.hubbard.dict.sites: + if not site['type'] == site['new_type']: + self.report('new types have been detected: relabeling the structure and starting new iteration.') + result = structure_relabel_kinds( + self.ctx.current_hubbard_structure, workchain.outputs.hubbard, self.ctx.current_magnetic_moments + ) + self.ctx.current_hubbard_structure = result['hubbard_structure'] + if self.ctx.current_magnetic_moments is not None: + self.ctx.current_magnetic_moments = result['starting_magnetization'] + break if not len(ref_params) == len(new_params): self.report('The new and old Hubbard parameters have different lenghts. 
Assuming to be at the first cycle.') @@ -642,14 +651,12 @@ def run_results(self): self.report(f'Hubbard parameters self-consistently converged in {self.ctx.iteration} iterations') self.out('hubbard_structure', self.ctx.current_hubbard_structure) - def on_terminated(self): - """Clean the working directories of all child calculations if `clean_workdir=True` in the inputs.""" - super().on_terminated() - - if self.inputs.clean_workdir.value is False: - self.report('remote folders will not be cleaned') - return + def should_clean_workdir(self): + """Whether to clean the work directories at each iteration.""" + return self.inputs.clean_workdir.value + def clean_iteration(self): + """Clean all work directiories of the current iteration.""" cleaned_calcs = [] for called_descendant in self.node.called_descendants: diff --git a/tests/parsers/fixtures/hp/failed_incompatible_fft_grid/aiida.out b/tests/parsers/fixtures/hp/failed_incompatible_fft_grid/aiida.out new file mode 100644 index 0000000..b8baff4 --- /dev/null +++ b/tests/parsers/fixtures/hp/failed_incompatible_fft_grid/aiida.out @@ -0,0 +1,7 @@ + + %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + Error in routine scale_sym_ops (3): + incompatible FFT grid + %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + + stopping ... diff --git a/tests/parsers/fixtures/hp/failed_missing_chi_matrices/aiida.out b/tests/parsers/fixtures/hp/failed_missing_chi_matrices/aiida.out new file mode 100644 index 0000000..12ae124 --- /dev/null +++ b/tests/parsers/fixtures/hp/failed_missing_chi_matrices/aiida.out @@ -0,0 +1,7 @@ + + %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + Error in routine reconstruct_full_chi (1): + Reconstruction problem: some chi were not found + %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + + stopping ... 
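The q-point regex change in `parse_raw/hp.py` above (`([0-9])+` → `([0-9]+)+`) matters for meshes with ten or more q-points: when the capture group sits inside the repetition, each iteration overwrites the group, so only the last digit survives. A minimal standalone sketch of the difference, using a sample output line paraphrased from an `hp.x` log (not taken verbatim from the fixtures):

```python
import re

# Example of the line hp.x prints for an 18-point q-mesh (paraphrased).
line = "     The grid of q-points ( 4, 4, 4)  (  18 q-points ) :"

# Old pattern: the capture group holds a single digit, so repeated
# matching leaves only the LAST digit in group(1).
old = re.search(r'.*The grid of q-points.*\s+([0-9])+\s+q-points.*', line)

# New pattern: the whole run of digits is inside the capture group.
new = re.search(r'.*The grid of q-points.*\s+([0-9]+)+\s+q-points.*', line)

print(int(old.group(1)))  # 8  (wrong: truncated to the last digit)
print(int(new.group(1)))  # 18 (correct)
```

This is exactly the case the new `initialization_only_mesh_more_points` fixture below exercises, where the 4x4x4 mesh reduces to 18 symmetry-inequivalent q-points.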
diff --git a/tests/parsers/fixtures/hp/initialization_only_mesh_more_points/aiida.in b/tests/parsers/fixtures/hp/initialization_only_mesh_more_points/aiida.in new file mode 100644 index 0000000..bd64f89 --- /dev/null +++ b/tests/parsers/fixtures/hp/initialization_only_mesh_more_points/aiida.in @@ -0,0 +1,13 @@ +&INPUTHP + conv_thr_chi = 1.0000000000d-06 + determine_q_mesh_only = .true. + find_atpert = 3 + iverbosity = 2 + max_seconds = 3.4200000000d+03 + nq1 = 4 + nq2 = 4 + nq3 = 4 + outdir = 'out' + perturb_only_atom(13) = .true. + prefix = 'aiida' +/ diff --git a/tests/parsers/fixtures/hp/initialization_only_mesh_more_points/aiida.out b/tests/parsers/fixtures/hp/initialization_only_mesh_more_points/aiida.out new file mode 100644 index 0000000..a5b4b02 --- /dev/null +++ b/tests/parsers/fixtures/hp/initialization_only_mesh_more_points/aiida.out @@ -0,0 +1,234 @@ + Program HP v.7.2 starts on 5Jun2023 at 13:54:28 + + This program is part of the open-source Quantum ESPRESSO suite + for quantum simulation of materials; please cite + "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009); + "P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017); + "P. Giannozzi et al., J. Chem. Phys. 152 154105 (2020); + URL http://www.quantum-espresso.org", + in publications or presentations arising from this work. More details at + http://www.quantum-espresso.org/quote + + Parallel version (MPI), running on 48 processors + + MPI processes distributed on 1 nodes + K-points division: npool = 16 + R & G space division: proc/nbgrp/npool/nimage = 3 + 24962 MiB available memory on the printing compute node when the environment starts + + + =---------------------------------------------------------------------------= + + Calculation of Hubbard parameters using the HP code based on DFPT + + Please cite the following papers when using this program: + + - HP code : Comput. Phys. Commun. 279, 108455 (2022). + + - Theory : Phys. Rev. B 98, 085127 (2018) and + + Phys. Rev. 
B 103, 045141 (2021). + + =-----------------------------------------------------------------------------= + + Reading xml data from directory: + + out/aiida.save/ + + IMPORTANT: XC functional enforced from input : + Exchange-correlation= PBESOL + ( 1 4 10 8 0 0 0) + Any further DFT definition will be discarded + Please, verify this is what you really want + + + Parallelization info + -------------------- + sticks: dense smooth PW G-vecs: dense smooth PW + Min 905 905 271 30463 30463 5034 + Max 906 906 271 30465 30465 5035 + Sum 2717 2717 813 91393 91393 15103 + + Using Slab Decomposition + + + Check: negative core charge= -0.000036 + Reading collected, re-writing distributed wavefunctions + + + bravais-lattice index = 0 + lattice parameter (alat) = 12.3672 (a.u.) + unit-cell volume = 1453.9300 (a.u.)^3 + number of atoms/cell = 16 + number of atomic types = 2 + kinetic-energy cut-off = 60.00 (Ry) + charge density cut-off = 240.00 (Ry) + conv. thresh. for NSCF = 1.0E-11 + conv. thresh. for chi = 1.0E-06 + Input Hubbard parameters (in eV): + V ( 1, 1) = 5.0000 + V ( 1, 229) = 0.0000 + V ( 1, 230) = 0.0000 + V ( 2, 2) = 5.0000 + V ( 2, 6) = 0.0000 + V ( 2, 405) = 0.0000 + V ( 3, 3) = 5.0000 + V ( 4, 4) = 5.0000 + V ( 5, 5) = 0.0000 + V ( 5, 34) = 0.0000 + V ( 5, 209) = 0.0000 + V ( 6, 2) = 0.0000 + V ( 6, 6) = 0.0000 + V ( 6, 209) = 0.0000 + V ( 7, 7) = 0.0000 + V ( 8, 8) = 0.0000 + V ( 9, 9) = 0.0000 + V ( 10, 10) = 0.0000 + V ( 11, 11) = 0.0000 + V ( 12, 12) = 0.0000 + V ( 13, 13) = 0.0000 + V ( 14, 14) = 0.0000 + V ( 15, 15) = 0.0000 + V ( 16, 16) = 0.0000 + + celldm(1) = 12.36723 celldm(2) = 0.00000 celldm(3) = 0.00000 + celldm(4) = 0.00000 celldm(5) = 0.00000 celldm(6) = 0.00000 + + crystal axes: (cart. coord. in units of alat) + a(1) = ( 0.5609 0.5784 0.5924 ) + a(2) = ( 0.5609 -0.5784 -0.5924 ) + a(3) = ( -0.5609 0.5784 -0.5924 ) + + reciprocal axes: (cart. coord. 
in units 2 pi/alat) + b(1) = ( 0.8915 0.8645 0.0000 ) + b(2) = ( 0.8915 0.0000 -0.8440 ) + b(3) = ( 0.0000 0.8645 -0.8440 ) + + Atoms inside the unit cell (Cartesian axes): + site n. atom mass positions (alat units) + 1 W 183.8400 tau( 1) = ( -0.28043 0.57835 -0.55568 ) + 2 W 183.8400 tau( 2) = ( 0.84128 0.00000 -0.03673 ) + 3 W 183.8400 tau( 3) = ( 0.28043 0.00000 -0.55450 ) + 4 W 183.8400 tau( 4) = ( 0.28043 0.57835 -0.03791 ) + 5 O 15.9994 tau( 5) = ( 0.00000 0.00000 0.00000 ) + 6 O 15.9994 tau( 6) = ( 0.56086 0.00000 0.00000 ) + 7 O 15.9994 tau( 7) = ( 0.56086 0.57835 0.00000 ) + 8 O 15.9994 tau( 8) = ( 0.00000 0.57835 0.00000 ) + 9 O 15.9994 tau( 9) = ( 0.28043 0.28397 0.03317 ) + 10 O 15.9994 tau( 10) = ( 0.28043 0.29438 -0.62558 ) + 11 O 15.9994 tau( 11) = ( -0.28043 0.29438 -0.55923 ) + 12 O 15.9994 tau( 12) = ( 0.84128 0.28397 -0.03317 ) + 13 O 15.9994 tau( 13) = ( 0.28043 0.00000 -0.87440 ) + 14 O 15.9994 tau( 14) = ( 0.28043 0.57835 0.28199 ) + 15 O 15.9994 tau( 15) = ( 0.28043 0.00000 -0.26540 ) + 16 O 15.9994 tau( 16) = ( 0.28043 0.57835 -0.32701 ) + + Atom which will be perturbed: + + 13 O 15.9994 tau(13) = ( 0.28043 0.00000 -0.87440 ) + + Reading xml data from directory: + + out/aiida.save/ + + IMPORTANT: XC functional enforced from input : + Exchange-correlation= PBESOL + ( 1 4 10 8 0 0 0) + Any further DFT definition will be discarded + Please, verify this is what you really want + + + Parallelization info + -------------------- + sticks: dense smooth PW G-vecs: dense smooth PW + Min 905 905 271 30463 30463 5034 + Max 906 906 271 30465 30465 5035 + Sum 2717 2717 813 91393 91393 15103 + + Using Slab Decomposition + + + Check: negative core charge= -0.000036 + Reading collected, re-writing distributed wavefunctions + + ===================================================================== + + PERTURBED ATOM # 13 + + site n. 
atom mass positions (alat units) + 13 O 15.9994 tau(13) = ( 0.28043 0.00000 -0.87440 ) + + ===================================================================== + + The perturbed atom has a type which is not unique! + Changing the type of the perturbed atom and recomputing the symmetries... + The number of symmetries is reduced : + nsym = 4 nsym_PWscf = 8 + Changing the type of the perturbed atom back to its original type... + + + The grid of q-points ( 4, 4, 4) ( 18 q-points ) : + N xq(1) xq(2) xq(3) wq + 1 0.000000000 0.000000000 0.000000000 0.015625000 + 2 0.000000000 0.216131358 -0.211002628 0.062500000 + 3 0.000000000 -0.432262716 0.422005257 0.031250000 + 4 0.222873502 0.000000000 -0.211002628 0.062500000 + 5 0.222873502 0.216131358 -0.422005257 0.125000000 + 6 0.222873502 -0.432262716 0.211002628 0.125000000 + 7 0.222873502 -0.216131358 0.000000000 0.062500000 + 8 -0.445747005 0.000000000 0.422005257 0.031250000 + 9 -0.445747005 0.216131358 0.211002628 0.125000000 + 10 -0.445747005 -0.432262716 0.844010513 0.031250000 + 11 0.445747005 0.432262716 -0.422005257 0.031250000 + 12 0.445747005 0.000000000 0.000000000 0.031250000 + 13 -0.222873502 -0.216131358 0.844010513 0.062500000 + 14 -0.222873502 0.000000000 0.633007885 0.062500000 + 15 0.000000000 0.432262716 0.000000000 0.031250000 + 16 0.000000000 -0.216131358 0.633007885 0.062500000 + 17 0.000000000 0.000000000 0.422005257 0.031250000 + 18 -0.891494009 -0.864525432 0.844010513 0.015625000 + + PRINTING TIMING FROM PWSCF ROUTINES: + + + Called by init_run: + + Called by electrons: + v_of_rho : 0.17s CPU 0.21s WALL ( 2 calls) + v_h : 0.01s CPU 0.01s WALL ( 2 calls) + v_xc : 0.17s CPU 0.20s WALL ( 2 calls) + + Called by c_bands: + + Called by sum_band: + + Called by *egterg: + + Called by h_psi: + + General routines + fft : 0.09s CPU 0.16s WALL ( 22 calls) + davcio : 0.00s CPU 0.02s WALL ( 6 calls) + + Parallel routines + + Hubbard U routines + alloc_neigh : 0.33s CPU 0.34s WALL ( 2 calls) + + init_vloc : 
0.37s CPU 0.37s WALL ( 2 calls) + init_us_1 : 0.02s CPU 0.03s WALL ( 2 calls) + + PRINTING TIMING FROM HP ROUTINES: + + + PRINTING TIMING FROM LR MODULE: + + + HP : 2.09s CPU 3.79s WALL + + + This run was terminated on: 13:54:32 5Jun2023 + +=------------------------------------------------------------------------------= + JOB DONE. +=------------------------------------------------------------------------------= diff --git a/tests/parsers/test_hp.py b/tests/parsers/test_hp.py index 14d174b..55648e6 100644 --- a/tests/parsers/test_hp.py +++ b/tests/parsers/test_hp.py @@ -204,6 +204,40 @@ def test_hp_initialization_only_mesh( }) +def test_hp_initialization_only_mesh_more_points( + aiida_localhost, generate_calc_job_node, generate_parser, generate_inputs_mesh_only, data_regression, tmpdir +): + """Test an initialization only `hp.x` calculation with intersites with more points.""" + name = 'initialization_only_mesh_more_points' + entry_point_calc_job = 'quantumespresso.hp' + entry_point_parser = 'quantumespresso.hp' + + # QE generates the ``HUBBARD.dat``, but with empty values, thus we make sure the parser + # does not recognize it as a final calculation and it does not crash as a consequence. 
+ attributes = {'retrieve_temporary_list': ['HUBBARD.dat']} + node = generate_calc_job_node( + entry_point_calc_job, + aiida_localhost, + test_name=name, + inputs=generate_inputs_mesh_only, + attributes=attributes, + retrieve_temporary=(tmpdir, ['HUBBARD.dat']) + ) + parser = generate_parser(entry_point_parser) + results, calcfunction = parser.parse_from_node(node, store_provenance=False, retrieved_temporary_folder=tmpdir) + + assert calcfunction.is_finished, calcfunction.exception + assert calcfunction.is_finished_ok, calcfunction.exit_message + assert 'parameters' in results + assert 'hubbard' not in results + assert 'hubbard_chi' not in results + assert 'hubbard_matrices' not in results + assert 'hubbard_structure' not in results + data_regression.check({ + 'parameters': results['parameters'].get_dict(), + }) + + def test_hp_failed_invalid_namelist(aiida_localhost, generate_calc_job_node, generate_parser, generate_inputs_default): """Test an `hp.x` calculation that fails because of an invalid namelist.""" name = 'failed_invalid_namelist' @@ -225,6 +259,8 @@ def test_hp_failed_invalid_namelist(aiida_localhost, generate_calc_job_node, gen ('failed_out_of_walltime', HpCalculation.exit_codes.ERROR_OUT_OF_WALLTIME.status), ('failed_stdout_incomplete', HpCalculation.exit_codes.ERROR_OUTPUT_STDOUT_INCOMPLETE.status), ('failed_computing_cholesky', HpCalculation.exit_codes.ERROR_COMPUTING_CHOLESKY.status), + ('failed_missing_chi_matrices', HpCalculation.exit_codes.ERROR_MISSING_CHI_MATRICES.status), + ('failed_incompatible_fft_grid', HpCalculation.exit_codes.ERROR_INCOMPATIBLE_FFT_GRID.status), )) def test_failed_calculation( generate_calc_job_node, diff --git a/tests/parsers/test_hp/test_hp_initialization_only_mesh_more_points.yml b/tests/parsers/test_hp/test_hp_initialization_only_mesh_more_points.yml new file mode 100644 index 0000000..0194b68 --- /dev/null +++ b/tests/parsers/test_hp/test_hp_initialization_only_mesh_more_points.yml @@ -0,0 +1,4 @@ +parameters: + 
hubbard_sites: + '13': O + number_of_qpoints: 18 diff --git a/tests/utils/test_general.py b/tests/utils/test_general.py index 031947f..c7e3567 100644 --- a/tests/utils/test_general.py +++ b/tests/utils/test_general.py @@ -26,5 +26,8 @@ def test_is_perturb_only_atom(): parameters = {'perturb_only_atom(1)': True} assert is_perturb_only_atom(parameters) == 1 + parameters = {'perturb_only_atom(20)': True} + assert is_perturb_only_atom(parameters) == 20 + parameters = {'perturb_only_atom(1)': False} assert is_perturb_only_atom(parameters) is None diff --git a/tests/workflows/test_hubbard.py b/tests/workflows/test_hubbard.py index e235613..11b5152 100644 --- a/tests/workflows/test_hubbard.py +++ b/tests/workflows/test_hubbard.py @@ -379,8 +379,6 @@ def test_not_converged_check_convergence( process.setup() - # Mocking current (i.e. "old") and "new" HubbardStructureData, - # containing different Hubbard parameters process.ctx.current_hubbard_structure = generate_hubbard_structure() process.ctx.workchains_hp = [generate_hp_workchain_node(u_value=5.0)] @@ -404,19 +402,26 @@ def test_relabel_check_convergence( process.setup() - # Mocking current (i.e. 
"old") and "new" HubbardStructureData, - # containing different Hubbard parameters - process.ctx.current_hubbard_structure = generate_hubbard_structure() - process.ctx.workchains_hp = [generate_hp_workchain_node(relabel=True, u_value=100)] - + current_hubbard_structure = generate_hubbard_structure(u_value=1, only_u=True) + process.ctx.current_hubbard_structure = current_hubbard_structure + process.ctx.workchains_hp = [generate_hp_workchain_node(relabel=True, u_value=100, only_u=True)] process.check_convergence() assert not process.ctx.is_converged + assert process.ctx.current_hubbard_structure.get_kind_names() != current_hubbard_structure.get_kind_names() - process.ctx.current_hubbard_structure = generate_hubbard_structure(u_value=99.99) - process.ctx.workchains_hp = [generate_hp_workchain_node(relabel=True, u_value=100)] + current_hubbard_structure = generate_hubbard_structure(u_value=99.99, only_u=True) + process.ctx.current_hubbard_structure = current_hubbard_structure + process.ctx.workchains_hp = [generate_hp_workchain_node(relabel=True, u_value=100, only_u=True)] + process.check_convergence() + assert process.ctx.is_converged + assert process.ctx.current_hubbard_structure.get_kind_names() != current_hubbard_structure.get_kind_names() + current_hubbard_structure = generate_hubbard_structure(u_value=99.99) + process.ctx.current_hubbard_structure = current_hubbard_structure + process.ctx.workchains_hp = [generate_hp_workchain_node(relabel=True, u_value=100)] process.check_convergence() assert process.ctx.is_converged + assert process.ctx.current_hubbard_structure.get_kind_names() == current_hubbard_structure.get_kind_names() @pytest.mark.usefixtures('aiida_profile')
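The tightened regex in `is_perturb_only_atom` (`perturb_only_atom.*([0-9]).*` → `perturb_only_atom.*?(\d+).*`) is what the new `perturb_only_atom(20)` test above exercises: with the old greedy pattern and a one-digit capture group, atom indices above 9 were truncated to their final digit. A quick standalone sketch of the two behaviours (plain `re`, not importing the plugin):

```python
import re

key = 'perturb_only_atom(20)'

# Old pattern: greedy '.*' backtracks just far enough to let a single
# digit match, so for atom 20 the group captures only '0'.
old = re.search(r'perturb_only_atom.*([0-9]).*', key)

# New pattern: lazy '.*?' stops at the first digit and '(\d+)' then
# captures the full multi-digit index.
new = re.search(r'perturb_only_atom.*?(\d+).*', key)

print(int(old.group(1)))  # 0  (wrong: last digit only)
print(int(new.group(1)))  # 20 (correct)
```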