PyTorch implementation of DFT-D2 [1] & DFT-D3 [2, 3]
```bash
# Install from pypi
pip install torch-dftd

# Install from source (for developers)
git clone https://github.com/pfnet-research/torch-dftd
cd torch-dftd
pip install -e .
```
```python
from ase.build import molecule
from torch_dftd.torch_dftd3_calculator import TorchDFTD3Calculator

atoms = molecule("CH3CH2OCH3")
# device="cuda:0" for fast GPU computation.
calc = TorchDFTD3Calculator(atoms=atoms, device="cpu", damping="bj")

energy = atoms.get_potential_energy()
forces = atoms.get_forces()

print(f"energy {energy} eV")
print(f"forces {forces}")
```
Recomputing the neighbor list at every step can be slow, particularly when the cutoff for the D3 correction is large. To address this, a naive implementation of a neighbor list with a skin layer is provided, featuring hyperparameters similar to those in LAMMPS. You can simply replace your calculator initialization with the following:
```python
calc = TorchDFTD3Calculator(
    atoms=atoms, device="cpu", damping="bj",
    every=2, delay=10, check=True, skin=2.0,  # skin-type nlist parameters
)
```
Here, the additional parameters `every`, `delay`, `check`, and `skin` allow reusing the neighbor list as specified in the LAMMPS documentation. The `skin` parameter defines the buffer region around the cutoff distance and has the same unit as the `atoms` object, which is typically angstroms.
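The skin-type neighbor list is mainly useful when the same calculator is evaluated many times with slowly changing geometries, e.g. during molecular dynamics or geometry optimization. The sketch below uses the D3 term alone purely to illustrate the API (normally you would combine it with a base calculator) and runs a short ASE MD trajectory so that the neighbor list can be reused across steps:

```python
from ase import units
from ase.build import molecule
from ase.md.verlet import VelocityVerlet
from torch_dftd.torch_dftd3_calculator import TorchDFTD3Calculator

atoms = molecule("CH3CH2OCH3")
# LAMMPS-style rebuild schedule: consider rebuilding at most every 2 steps,
# never before 10 steps have passed, and only when the check against the
# 2.0 angstrom skin buffer says atoms have moved far enough.
atoms.calc = TorchDFTD3Calculator(
    atoms=atoms, device="cpu", damping="bj",
    every=2, delay=10, check=True, skin=2.0,
)

dyn = VelocityVerlet(atoms, timestep=0.5 * units.fs)
dyn.run(50)  # the neighbor list is rebuilt only when the schedule allows
```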
The library is tested in the following environment:
- python: 3.10
- CUDA: 12.2
```
torch==2.0.1
ase==3.22.1
# Below is only for 3-body term
cupy-cuda12x==12.2.0
pytorch-pfn-extras==0.7.3
```
`pysen` is used to format the Python code of this repository.
You can simply run the commands below to get your code formatted :)
```bash
# Format the code
$ pysen run format

# Check the code format
$ pysen run lint
```
CuPy lets users implement custom CUDA kernels directly in Python code, and these kernels can be easily combined with PyTorch tensor calculations. An elementwise kernel is implemented and used in some PyTorch functions to accelerate computation on the GPU. See the CuPy documentation for details about user-defined kernels.
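As a rough illustration of that pattern (a generic sketch, not the actual kernels used in this repository), a user-defined CuPy `ElementwiseKernel` can operate on PyTorch CUDA tensors without copying via the DLPack interface:

```python
import cupy as cp
import torch

# A toy user-defined elementwise kernel: z = (x - y)^2, computed on the GPU.
squared_diff = cp.ElementwiseKernel(
    "float32 x, float32 y", "float32 z",
    "z = (x - y) * (x - y)",
    "squared_diff",
)

# PyTorch CUDA tensors can be handed to CuPy zero-copy via DLPack...
x = torch.rand(1000, device="cuda")
y = torch.rand(1000, device="cuda")
z_cp = squared_diff(cp.from_dlpack(x), cp.from_dlpack(y))

# ...and the CuPy result can be viewed again as a PyTorch tensor.
z = torch.from_dlpack(z_cp)
print(z.shape, z.device)
```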
Please always cite the original DFT-D2 [1] or DFT-D3 [2, 3] papers.
Also, please cite the paper [4] if you use this software in your publication.
DFT-D2:
[1] S. Grimme, J. Comput. Chem., 27 (2006), 1787-1799.
DOI: 10.1002/jcc.20495

DFT-D3:
[2] S. Grimme, J. Antony, S. Ehrlich and H. Krieg, J. Chem. Phys., 132 (2010), 154104.
DOI: 10.1063/1.3382344

If BJ-damping is used in DFT-D3:
[3] S. Grimme, S. Ehrlich and L. Goerigk, J. Comput. Chem., 32 (2011), 1456-1465.
DOI: 10.1002/jcc.21759
[4] PFP: Universal Neural Network Potential for Material Discovery
```
@misc{takamoto2021pfp,
      title={PFP: Universal Neural Network Potential for Material Discovery},
      author={So Takamoto and Chikashi Shinagawa and Daisuke Motoki and Kosuke Nakago and Wenwen Li and Iori Kurata and Taku Watanabe and Yoshihiro Yayama and Hiroki Iriguchi and Yusuke Asano and Tasuku Onodera and Takafumi Ishii and Takao Kudo and Hideki Ono and Ryohto Sawada and Ryuichiro Ishitani and Marc Ong and Taiki Yamaguchi and Toshiki Kataoka and Akihide Hayashi and Takeshi Ibuka},
      year={2021},
      eprint={2106.14583},
      archivePrefix={arXiv},
      primaryClass={cond-mat.mtrl-sci}
}
```