Merge pull request #24 from sunqm/master
Add documents and examples for: DMRG-NEVPT2, DMRG-CASSCF via PySCF
shengg committed Feb 14, 2016
2 parents 708308a + f63dde7 commit 592205c
Showing 10 changed files with 346 additions and 7 deletions.
27 changes: 27 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,27 @@
Change Log
**********

## [1.1 alpha] - 2016-02-12

* New NEVPT2 implementation based on an MPS perturber


## [1.0.1] - 2015-07-19

* fixed MPI runtime error; processors now write intermediate files into different directories under scratch;
* now printing summed CPU timings from all processors;
* fixed error of orbital reordering in restart_npdm;
* removed bugs in PT calculations;
* test reference data now cleaned and consistent.


## [1.0.0] - 2015-03-27

* faster warm-up step
* npdm up to fourth order, as well as transition pdm up to second order


## [0.9.7] - 2013-02-17


## [0.9] - 2012-09-21
1 change: 1 addition & 0 deletions docs/source/CHANGELOG.rst
33 changes: 33 additions & 0 deletions docs/source/benchmark.rst
@@ -0,0 +1,33 @@
.. _benchmark:


Benchmark
*********

========= ==============================
Platform
========= ==============================
CPU       4 Intel E5-2670 @ 2.6 GHz
Memory    128 GB DDR3
OS        Custom Redhat 6.6
BLAS      MKL 11.0
Compiler  GCC 4.8.2
========= ==============================


Computation cost
================

================================= ============= ======================== =============
Problem size                      Program       CPU                      Memory
================================= ============= ======================== =============
439 AOs, CAS(16e,20o), M = 1000   DMRG-CASCI    ~ 1 h (16 core)          < 1 GB/core
\                                 DMRG-CASSCF   ~ 1.5 h/iter (16 core)   < 1 GB/core
\                                 NEVPT2        48 h (8 core)            ~12 GB/core
\                                 MPS-NEVPT2    5.5 h (16 core)          < 4 GB/core
439 AOs, CAS(22e,27o), M = 1000   DMRG-CASCI    6 h (16 core)            < 2 GB/core
\                                 DMRG-CASSCF   9 h/iter (16 core)       < 2 GB/core
\                                 MPS-NEVPT2    29 h (16 core)           ~10 GB/core
760 AOs, CAS(30e,36o), M = 1500   DMRG-CASCI    24 h (16 core)           < 2 GB/core
================================= ============= ======================== =============

50 changes: 50 additions & 0 deletions docs/source/build.rst
@@ -47,6 +47,55 @@ When the makefile is configured, run in the directory ``./Block``::

The successful compilation generates the executable ``block.spin_adapted``, and the static and shared DMRG libraries ``libqcdmrg.a`` and ``libqcdmrg.so``.


.. _pyscf-itrf:

Interface to PySCF package
--------------------------

The electronic structure Python module `PySCF <http://chemists.princeton.edu/chan/software/pyscf/>`_
provides an interface to run the `BLOCK` code. If you would like to run
DMRG-SCF, DMRG-NEVPT2 etc. with the PySCF package, you need to create a PySCF
config file ``/path/to/pyscf/future/dmrgscf/settings.py`` and add the
following settings to it::

BLOCKEXE = "/path/to/Block/block.spin_adapted"
BLOCKSCRATCHDIR = "/path/to/scratch"
MPIPREFIX = "mpirun"

Note that the parameter ``MPIPREFIX`` should be adjusted according to your
job scheduler, e.g.::

# For OpenPBS/Torque
MPIPREFIX = "mpirun"
# For SLURM
MPIPREFIX = "srun"

If the calculation is carried out on an interactive node, e.g. with 4
processors, the setting looks like::

MPIPREFIX = "mpirun -n 4"

If ``BLOCK`` and ``PySCF`` are installed successfully, a simple DMRG-SCF
calculation can be run in the Python interpreter::

>>> from pyscf import gto, scf, dmrgscf
>>> mf = gto.M(atom='C 0 0 0; C 0 0 1', basis='ccpvdz').apply(scf.RHF).run()
>>> mc = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6)
>>> mc.run()

A DMRG-NEVPT2 calculation can then be applied::

>>> from pyscf import mrpt
>>> mrpt.nevpt2.sc_nevpt(mc)

Optionally, if `MPI4Py <http://mpi4py.scipy.org>`_ is installed, the efficient
DMRG-NEVPT2 implementation can be used, e.g.::

>>> from pyscf import mrpt
>>> mrpt.nevpt2.sc_nevpt(dmrgscf.compress_perturb(mc))


How to run `BLOCK`
==================

@@ -70,3 +119,4 @@ Testjobs

The tests require Python to be installed on the system.


4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -61,9 +61,9 @@
# built documents.
#
# The short X.Y version.
- version = '1.0.0'
+ version = '1.1'
# The full version, including alpha/beta/rc tags.
- release = '1.0.0'
+ release = '1.1.0'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
221 changes: 221 additions & 0 deletions docs/source/examples-with-pyscf.rst
@@ -0,0 +1,221 @@
DMRG for Electronic Structure Calculations
******************************************

Using PySCF
===========

The `PySCF <http://chemists.princeton.edu/chan/software/pyscf/>`_ package provides
an interface to call the ``BLOCK`` code for DMRG-CASSCF and DMRG-NEVPT2 calculations.
See the :ref:`installation <pyscf-itrf>` section to set up the PySCF/BLOCK interface.
In this section, we demonstrate how to use the ``BLOCK`` and ``PySCF`` packages
to study static and dynamic correlation with DMRG-CASSCF/DMRG-CASCI and
DMRG-MRPT solvers for large active space problems.

DMRG-CASSCF
-----------

We start from a simple example::

$ cat example1.py
from pyscf import gto, scf, dmrgscf
mf = gto.M(atom="C 0 0 0; C 0 0 1", basis="ccpvdz").apply(scf.RHF).run()
mc = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6)
mc.run()

Executing this script from the command line::

$ python example1.py

will start the BLOCK program with 4 processors, assuming that you have the
configuration ``dmrgscf.settings.MPIPREFIX = "mpirun -n 4"``.
The number of parallel processors can be adjusted dynamically using
``sys.argv``, e.g.::

$ cat example2.py
import sys
from pyscf import gto, scf, dmrgscf
dmrgscf.dmrgci.settings.MPIPREFIX = "mpirun -n %s" % sys.argv[1]
mf = gto.M(atom="C 0 0 0; C 0 0 1", basis="ccpvdz").apply(scf.RHF).run()
mc = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6)
mc.run()

$ python example2.py 4

In the above examples, ``gto`` and ``scf`` are standard modules provided by the
PySCF package. For the use of the PySCF package, we refer the reader to the
`PySCF documentation <http://www.pyscf.org>`_. The ``dmrgscf`` module is the code
where the Block interface is implemented. It is designed to control all Block
input parameters and to access the results from Block, including but not limited
to regular DMRG calculations, N-particle density matrices (up to 4-PDM),
transition density matrices (up to 2-PDM), and DMRG-NEVPT2 calculations.
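
For instance, once a DMRG-CASSCF calculation has converged, the active-space
density matrices can be requested directly from the solver. The following is a
minimal sketch, assuming ``DMRGCI`` follows PySCF's generic FCI-solver
convention ``make_rdm1(state, norb, nelec)``; check the ``dmrgci.py`` source
for the exact signatures::

from pyscf import gto, scf, dmrgscf
mf = gto.M(atom="C 0 0 0; C 0 0 1", basis="ccpvdz").apply(scf.RHF).run()
mc = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6).run()
# spin-traced 1-PDM of the ground state (state index 0) in the active space;
# assumes the generic fcisolver convention named above
dm1 = mc.fcisolver.make_rdm1(0, mc.ncas, mc.nelecas)
print(dm1.trace())  # close to the number of active electrons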

The standard way to start a DMRG-CASSCF calculation is to modify the
``fcisolver`` attribute of the CASSCF or CASCI object::

from pyscf import gto, scf, mcscf, dmrgscf
mol = gto.M(atom="C 0 0 0; C 0 0 1", basis="ccpvdz")
mf = scf.RHF(mol)
mf.run()
norb = 6
nelec = 6
mc = mcscf.CASSCF(mf, norb, nelec)
dmrgsolver = dmrgscf.dmrgci.DMRGCI(mol)
dmrgsolver.maxM = 50
dmrgsolver.maxIter = 10
mc.fcisolver = dmrgsolver
mc.run()

mc = mcscf.CASCI(mf, norb, nelec)
mc.fcisolver = dmrgscf.dmrgci.DMRGCI(mol)
mc.run()

``dmrgsolver = dmrgscf.dmrgci.DMRGCI(mol)`` creates an object ``dmrgsolver`` to
hold the Block input parameters and runtime environment. By default,
``maxM=1000`` is applied. One can control the DMRG calculation by changing
the settings of the ``dmrgsolver`` object, e.g. to set the sweep schedule::

dmrgsolver.scheduleSweeps = [0, 4, 8, 12, 16, 20, 24, 28, 30, 34]
dmrgsolver.scheduleMaxMs = [200, 400, 800, 1200, 2000, 4000, 3000, 2000, 1000, 500]
dmrgsolver.scheduleTols = [0.0001, 0.0001, 0.0001, 0.0001, 1e-5, 1e-6, 1e-7, 1e-7, 1e-7, 1e-7]
dmrgsolver.scheduleNoises = [0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0, 0.0, 0.0, 0.0]
dmrgsolver.twodot_to_onedot = 38

For more details on the default settings and control parameters, we refer to
the `PySCF source code <https://github.com/sunqm/pyscf/blob/master/future/dmrgscf/dmrgci.py>`_
and the corresponding Block :ref:`keywords_list`.

To make the embedded DMRG solver work efficiently in the CASSCF iteration, one
needs to carefully tune the DMRG runtime parameters. This means more input
arguments and options must be specified for the ``dmrgsolver`` object. To
simplify the input, we provide a shortcut function ``DMRGSCF`` in the
``dmrgscf`` module, as shown in the first example ``example1.py``. The
``DMRGSCF`` function creates a CASSCF object, assigns ``dmrgsolver`` to its
``fcisolver``, and hooks onto the CASSCF object a function that dynamically
adjusts the sweep schedule. The DMRG-CASSCF calculation can then be executed
with one line of input::

mc = dmrgscf.dmrgci.DMRGSCF(mf, norb, nelec).run()

The DMRG-CASSCF results, such as orbital coefficients and natural occupancies,
are held in the ``mc`` object. The DMRG wave function is stored on disk,
more precisely, in the directory specified by ``mc.fcisolver.scratchDirectory``.
We can modify its value to change where the DMRG wave function is saved. The
default directory is read from the dmrgscf configuration parameter
``dmrgscf.settings.BLOCKSCRATCHDIR``.

.. note:: Be sure that ``mc.fcisolver.scratchDirectory`` is properly assigned.
Since all DMRGCI objects by default use the same ``BLOCKSCRATCHDIR`` setting,
it is easy to cause name conflicts in the scratch directory, especially when
two DMRG-CASSCF calculations are executed on the same node.

.. note:: Usually, the DMRG wave function is very large. Be sure that the
disk which ``BLOCKSCRATCHDIR`` points to has enough space.
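
To avoid such conflicts, a distinct scratch directory can be assigned to each
calculation before it is run. The following is a minimal sketch; the paths
below are hypothetical::

mc1 = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6)
mc1.fcisolver.scratchDirectory = "/scratch/user/dmrg_job1"  # hypothetical path
mc1.run()
mc2 = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6)
mc2.fcisolver.scratchDirectory = "/scratch/user/dmrg_job2"  # hypothetical path
mc2.run()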

Due to the complexity of the multi-configuration model, it is common to
interrupt a CASSCF calculation and restart it with modified parameters. To
restart the CASSCF calculation, we need information such as the orbital
coefficients and the active-space CI wave function from the last simulation.
Although the orbital coefficients can be saved and loaded through the
`PySCF chkfile module <https://github.com/sunqm/pyscf/blob/master/mcscf/chkfile.py>`_, the CI wave
function is not saved by PySCF. Unlike a regular Full CI based CASSCF
calculation, in which the Full CI wave function can be quickly rebuilt by a
fresh run, the restart feature of a DMRG-CASSCF calculation relies on the wave
function indicated by the ``mc.fcisolver.scratchDirectory`` attribute and the
``restart`` flag of the DMRG solver::

mc = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6)
mc.fcisolver.scratchDirectory = "/path/to/last/dmrg/scratch"
mc.fcisolver.restart = True
mc.run()

.. note:: A mismatched DMRG wave function (from a wrong
``mc.fcisolver.scratchDirectory``) may cause the DMRG-CASSCF calculation to crash.

Other common features such as state-averaged DMRG-CASSCF or state-specific
calculations for excited states can easily be invoked with the ``DMRGSCF``
wrapper function::

from pyscf import gto, scf, mcscf, dmrgscf
mol = gto.M(atom="C 0 0 0; C 0 0 1", basis="ccpvdz")
mf = scf.RHF(mol)
mf.run()
mc = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6)
# half-half average over ground state and first excited state
mc.state_average_([0.5, 0.5])
mc.run()

# Optimize the first excited state
mc.state_specific_(state=1)
mc.run()

More information about their usage can be found in the PySCF examples
`10-state_average.py <https://github.com/sunqm/pyscf/blob/master/examples/dmrg/10-state_average.py>`_
and
`11-excited_states.py <https://github.com/sunqm/pyscf/blob/master/examples/dmrg/11-excited_states.py>`_.


DMRG-NEVPT2
-----------

A DMRG-NEVPT2 calculation is straightforward once the DMRG-CASCI or
DMRG-CASSCF calculation has finished::

from pyscf import gto, scf, mcscf, dmrgscf, mrpt
mol = gto.M(atom="C 0 0 0; C 0 0 1", basis="ccpvdz")
mf = scf.RHF(mol).run()

mc = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6).run()
mrpt.nevpt2.sc_nevpt(mc)

mc = mcscf.CASCI(mf, 6, 6)
mc.fcisolver = dmrgscf.dmrgci.DMRGCI(mol)
mc.run()
mrpt.nevpt2.sc_nevpt(mc)

However, the default DMRG-NEVPT2 calculation is extremely demanding on both CPU
and memory resources. The Block code implements an efficient approximation
based on a compressed MPS perturber, which can significantly reduce the
computational cost::

from pyscf import gto, scf, dmrgscf, mrpt
mol = gto.M(atom="C 0 0 0; C 0 0 1", basis="ccpvdz")
mf = scf.RHF(mol).run()
mc = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6).run()

mrpt.nevpt2.sc_nevpt(dmrgscf.compress_perturb(mc))

The efficient NEVPT2 needs to be initialized with the ``compress_perturb``
function, in which the most demanding intermediates are precomputed and stored
on disk.

.. note:: The efficient NEVPT2 algorithm is still very demanding, especially on
memory usage. Please refer to the :ref:`benchmark` for the approximate cost.

If the excitation energy is of interest, we can use DMRG-NEVPT2 to compute the
energy of an excited state. Note that only state-specific NEVPT2 calculations
are available in the current Block version::

mc = mcscf.CASCI(mf, 6, 6)
mc.fcisolver = dmrgscf.dmrgci.DMRGCI(mol)
mc.fcisolver.nroots = 2
mc.kernel()
mps_nevpt_e1 = mrpt.nevpt2.sc_nevpt(dmrgscf.compress_perturb(mc, maxM=100, root=0))
mps_nevpt_e2 = mrpt.nevpt2.sc_nevpt(dmrgscf.compress_perturb(mc, maxM=100, root=1))

In the above example, two NEVPT2 calculations are carried out separately for
the two states indicated by the argument ``root=*``.
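
The excitation energy can then be assembled from the CASCI state energies and
the perturbative corrections. This is a hedged sketch, assuming ``sc_nevpt``
returns the NEVPT2 energy correction for the selected root and that
``mc.e_tot`` holds one CASCI energy per root::

e_state0 = mc.e_tot[0] + mps_nevpt_e1  # ground state: CASCI + PT2 correction
e_state1 = mc.e_tot[1] + mps_nevpt_e2  # first excited state
print("NEVPT2 excitation energy (Hartree):", e_state1 - e_state0)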

More examples of DMRG-CASSCF and DMRG-NEVPT2 calculations are available in the
`PySCF source code <https://github.com/sunqm/pyscf/tree/master/examples/dmrg>`_.


Using Molpro
============

Examples of Block installation and DMRG-SCF calculations can be found in the
`Molpro online manual <https://www.molpro.net/info/2015.1/doc/manual/node385.html>`_.


.. ORCA
.. ====
.. DMRG calculation within ORCA can be found in
.. https://sites.google.com/site/orcainputlibrary/cas-calculations/dmrg .
8 changes: 5 additions & 3 deletions docs/source/examples.rst
@@ -1,8 +1,10 @@
- Typical Calculations
- ********************
+ Typical Calculations with `BLOCK`
+ *********************************

In the following the DMRG calculation for C\ :sub:`2` molecule is used to demonstrate various computational features as of the current 1.0.0 release.
- Integrals and orbitals must be supplied externally in Molpro's ``FCIDUMP`` format, as ``BLOCK`` does not generate its own integrals.
+ Integrals and orbitals must be supplied externally in
+ `Molpro's FCIDUMP format <http://www.molpro.net/info/2010.1/doc/manual/node417.html>`_,
+ as ``BLOCK`` does not generate its own integrals.

The associated integral files for C\ :sub:`2` can be found here: `FCIDUMP <https://raw.githubusercontent.com/sanshar/Block/master/README_Examples/FCIDUMP>`_
for its D\ :sub:`2h` point-group symmetry.
3 changes: 3 additions & 0 deletions docs/source/index.rst
@@ -35,7 +35,10 @@ Contents
overview.rst
build.rst
examples.rst
examples-with-pyscf.rst
keywords.rst
benchmark.rst
CHANGELOG.rst

.. Indices and tables
.. ==================
2 changes: 2 additions & 0 deletions docs/source/keywords.rst
@@ -1,3 +1,5 @@
.. _keywords_list:

Keywords
********
