Commit
Merge pull request #47 from alchem0x2A/master
Compatibility with ase 3.23
alchem0x2A authored Sep 5, 2024
2 parents c419ce2 + 6e53fb3 commit 9136ce8
Showing 27 changed files with 591 additions and 311 deletions.
2 changes: 1 addition & 1 deletion .conda/meta.yaml
@@ -16,7 +16,7 @@ build:
script: "{{ PYTHON }} -m pip install . --no-deps -vv && cd .. && {{ PYTHON }} -m sparc.download_data"
entry_points:
- "sparc-ase = sparc.cli:main"


requirements:
host:
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug_report.md
@@ -16,7 +16,7 @@ Provide a minimal list of settings / codes to help us debug, such as
- Version (git commit hash) of `SPARC-X-API`
- `SPARC` C-code version (see the SPARC .out file header)
- Your platform and architecture

**Expected behavior**
What is the code intended to achieve?

74 changes: 69 additions & 5 deletions .github/workflows/installation_test.yml
@@ -39,8 +39,12 @@ jobs:
python -m sparc.download_data
- name: Download SPARC output files to SPARC-master
run: |
wget https://github.com/SPARC-X/SPARC/archive/refs/heads/master.zip
unzip master.zip
# Pin the current version of SPARC to versions before MLFF
# wget https://github.com/SPARC-X/SPARC/archive/refs/heads/master.zip
# unzip master.zip
wget -O SPARC-master.zip https://codeload.github.com/SPARC-X/SPARC/zip/3371b4401e4ebca0921fb77a02587f578f3bf3f7
unzip SPARC-master.zip
mv SPARC-33* SPARC-master
- name: Test with pytest
run: |
# python -m pytest -svv tests/ --cov=sparc --cov-report=json --cov-report=html
@@ -53,7 +57,7 @@ jobs:
COVERAGE=`cat coverage.json | jq .totals.percent_covered | xargs printf '%.*f' 0`
echo "Current coverage is $COVERAGE"
echo "COVPERCENT=$COVERAGE" >> $GITHUB_ENV
- name: Lint with flake8
run: |
echo $CONDA_PREFIX
@@ -88,7 +92,7 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RID: ${{ github.run_id }}

test-socket:
defaults:
run:
@@ -139,7 +143,67 @@ jobs:
COVERAGE=`cat coverage.json | jq .totals.percent_covered | xargs printf '%.*f' 0`
echo "Current coverage is $COVERAGE"
echo "COVPERCENT=$COVERAGE" >> $GITHUB_ENV
- name: Lint with flake8
run: |
echo $CONDA_PREFIX
conda info
flake8 sparc/ --count --select=E9,F63,F7,F82 --show-source --statistics
flake8 sparc/ --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
# To be deleted once the 1.0 release is done
build-linux-ase-3-22:
defaults:
run:
shell: bash -l {0}
runs-on: ubuntu-latest
strategy:
max-parallel: 5

steps:
- uses: actions/checkout@v3
- uses: conda-incubator/setup-miniconda@v2
with:
python-version: "3.10"
mamba-version: "*"
channels: conda-forge,alchem0x2a,defaults
channel-priority: true
activate-environment: sparc-api-test
- name: Install dependencies
run: |
# mamba install -c conda-forge ase>=3.22 pymatgen flake8 pytest
mamba install -c conda-forge make compilers openmpi fftw scalapack openblas
- name: Install package
run: |
pip install -e ".[test]" ase==3.22 numpy==1.24 scipy==1.10
# Manually downgrade
# Download the external psp data
python -m sparc.download_data
- name: Download SPARC output files to SPARC-master
run: |
# TODO: merge to master
wget -O SPARC-socket.zip https://codeload.github.com/alchem0x2A/SPARC/zip/refs/heads/socket
unzip SPARC-socket.zip
- name: Compile SPARC with socket
run: |
cd SPARC-socket/src
make clean
make -j2 USE_SOCKET=1 USE_MKL=0 USE_SCALAPACK=1 DEBUG_MODE=1
ls ../lib
- name: Test with pytest
run: |
ls ./SPARC-socket/lib/sparc
PWD=$(pwd)
export SPARC_TESTS_DIR="${PWD}/SPARC-socket/tests"
export ASE_SPARC_COMMAND="mpirun -n 1 ${PWD}/SPARC-socket/lib/sparc"
export SPARC_DOC_PATH="${PWD}/SPARC-socket/doc/.LaTeX"
coverage run -a -m pytest -svv tests/
coverage json --omit="tests/*.py"
coverage html --omit="tests/*.py"
COVERAGE=`cat coverage.json | jq .totals.percent_covered | xargs printf '%.*f' 0`
echo "Current coverage is $COVERAGE"
echo "COVPERCENT=$COVERAGE" >> $GITHUB_ENV
- name: Lint with flake8
run: |
echo $CONDA_PREFIX
1 change: 0 additions & 1 deletion .github/workflows/update_api.yml
@@ -59,4 +59,3 @@ jobs:
gh pr create --base master --title "[PR Bot] New JSON API version ${API_VERSION}" --body "Merge new JSON API version ${API_VERSION} into master" -R ${{ github.repository_owner }}/SPARC-X-API
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

23 changes: 23 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,23 @@
# Pre-commit hooks for SPARC-X-API
# Use pre-commit run
exclude: "^tests/outputs/|^tests/psps/|^tests/sparc-latex-.*/|^tests/archive/|^sparc/sparc_json_api/"
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
hooks:
- id: check-yaml
exclude: ".conda/meta.yaml"
- id: end-of-file-fixer
- id: trailing-whitespace

- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
name: isort (python)
args: ["--profile", "black"]

- repo: https://github.com/psf/black
rev: 22.10.0
hooks:
- id: black
30 changes: 15 additions & 15 deletions README.md
@@ -322,7 +322,7 @@ The reasoning and details about unit conversion can be found in the [Rules for I


In order for `SPARC-X-API` to be compatible with other ASE-based DFT calculators,
there is a list of special parameters that follow the ASE convention and use the Å / eV / GPa / fs
unit system:

| parameter name | meaning | example | equivalent `SPARC` input |
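
For example, a calculator can be constructed entirely from such special parameters (a minimal sketch; the values are illustrative, and the conversion to the CAPITALIZED `.inpt` inputs happens internally):

```python
from sparc import SPARC

# Special parameters follow the ASE convention (Å / eV / GPa / fs) and are
# converted internally to their equivalent SPARC .inpt counterparts
calc = SPARC(
    xc="pbe",        # exchange-correlation functional
    h=0.25,          # real-space grid spacing in Å
    kpts=(3, 3, 3),  # k-point grid
)
```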
@@ -394,7 +394,7 @@ bottleneck. The underlying software architecture is shown in [Fig. 3](#fig-3-spa
![scheme-sparc-socket](doc/img/scheme_socket_hetero.png)




**Requirements**: the SPARC binary must be manually compiled from the source
code with [socket
@@ -425,10 +425,10 @@ Based on the scenarios, the socket communication layer can be accessed via the f


1. **SPARC binary only** ([Fig. 5](#fig-5-different-ways-of-using-sparcs-socket-mode) **a**)

A SPARC binary with socket support can be readily coupled with any i-PI-compatible socket server, such as
`ase.calculators.socketio.SocketIOCalculator`, for example

```python
from ase.calculators.socketio import SocketIOCalculator
from subprocess import Popen
@@ -439,17 +439,17 @@ Based on the scenarios, the socket communication layer can be accessed via the f
# Single point calculations
process.kill()
```
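
A more complete, runnable sketch of this workflow (assumptions: a socket-enabled `sparc` binary is on `$PATH`, input files for the same structure were pre-generated, and the `-socket host:port` flag matches your compiled binary):

```python
from subprocess import Popen

from ase.build import molecule
from ase.calculators.socketio import SocketIOCalculator

atoms = molecule("H2O", vacuum=6.0)

with SocketIOCalculator(port=31415) as calc:
    # Launch SPARC in the background; it connects back to the i-PI server
    process = Popen(
        "mpirun -n 2 sparc -name SPARC -socket localhost:31415",
        shell=True,
    )
    atoms.calc = calc
    energy = atoms.get_potential_energy()  # single-point calculation
    process.kill()
```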

The end user is responsible for generating the input files and
making sure the same atomic structures are used by
`SocketIOCalculator` and the SPARC binary. This mode is also limited
to running on a single computer system.


2. **Local-only Mode** ([Fig. 5](#fig-5-different-ways-of-using-sparcs-socket-mode) **b**)

Ideal for standalone calculations, this mode simulates a conventional calculator while benefiting from socket-based efficiency.

```python
with SPARC(use_socket=True, **normal_parameters) as calc:
    # Execute single-point calculations
```
@@ -464,15 +464,15 @@ message to a local SPARC binary and send results back through the
message to a local SPARC binary and send results back through the
socket pipe. The server side can either be a normal i-PI compatible
server (such as `SocketIOCalculator`) or server-mode `sparc.SPARC` (see 4).

Start the client by:
```python
client = SPARC(use_socket=True,
               socket_params=dict(host="host.address.com", port=31415))
with client:
    client.run()
```

Or via Command-Line:
```bash
python -m sparc.client -s host:port
```
@@ -483,33 +483,33 @@
new atoms positions and parameters arrive, the client will
automatically determine if it is necessary to restart the SPARC
subprocess.

4. **Server Mode** ([Fig. 5](#fig-5-different-ways-of-using-sparcs-socket-mode) **d**)

Paired with the client mode in (3), SPARC-X-API can be run as a
socket server, isolated from the node that performs the
computation. This can be useful for highly-distributed
computational workflows.

On the server node, run:
```python
server_calc = SPARC(use_socket=True, socket_params=dict(port=31415, server_only=True), **normal_parameters)
with server_calc:
    # Execute single point calculations for atoms_1
    # Execute single point calculations for atoms_2
```

In this case, the server will open `0.0.0.0:31415` for
connections. Make sure your server is directly accessible from the
clients and the port is not occupied. The socket server is capable
of receiving `raw_results` directly from the clients, making it
possible to access `server_calc.raw_results` without access to the
file systems on the client side.
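
For instance, the parsed output of each job can be inspected on the server without touching the clients' file systems (a sketch reusing `server_calc` from above; `atoms_1` is assumed to be a prepared `Atoms` object):

```python
with server_calc:
    atoms_1.calc = server_calc
    e1 = atoms_1.get_potential_energy()
    # Parsed SPARC output, collected over the socket rather than from disk
    results = server_calc.raw_results
```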


### (In-progress) Controlling SPARC routines from socket interface

As shown in [Fig. 4](#fig-4-overview-of-the-sparc-protocol-as-an-extension-to-the-standard-i-pi-protocol),
the SPARC socket protocol design allows bidirectional control of
internal SPARC routines. Local- or server-mode `sparc.SPARC`
calculators can communicate with the SPARC binary via functions like
30 changes: 15 additions & 15 deletions doc/advanced_topics.md
@@ -4,11 +4,11 @@ The design of `SPARC-X-API` is schematically shown in the following figure

### Behind the bundle file format

Instead of parsing individual `.ion` and `.inpt` files,
the bundle format (recognized by ASE via `format="sparc"`) will
gather information from all files and check if atomic information
and calculation results can be retrieved.
The central piece for handling the bundle format is the
`sparc.io.SparcBundle` class. You can use it to parse an existing bundle

```python
@@ -28,15 +28,15 @@ bundle = SparcBundle("path/to/your-calc.sparc", mode="w")
bundle._write_ion_and_inpt(atoms, label="SPARC")
```
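
Reading works analogously (a minimal sketch; `convert_to_ase` and its keyword names are assumptions based on the v1.0 API):

```python
from sparc.io import SparcBundle

# Parse an existing bundle and convert all recorded images to ASE Atoms
bundle = SparcBundle("path/to/your-calc.sparc", mode="r")
images = bundle.convert_to_ase(index=":", include_all_files=True)
```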

For each individual SPARC file (e.g. `.ion`, `.inpt`, `.static`, `.geopt`, `.aimd`),
file-specific parsers live in the `sparc.sparc_parsers.<format>` modules.
Each `_read_<format>` method returns the structured raw-data dictionary of the file.
Similarly, `_write_<format>` takes the structured dictionary as input and writes the file
using only the relevant data.
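
As an illustration, the round trip for an `.inpt` file could look like the following (a hedged sketch; the exact signatures of the `_read_inpt` / `_write_inpt` helpers are assumptions):

```python
from sparc.sparc_parsers.inpt import _read_inpt, _write_inpt

# Parse the file into a structured raw-data dictionary ...
data_dict = _read_inpt("SPARC.inpt")
# ... and write an equivalent file back from the same dictionary
_write_inpt("SPARC-copy.inpt", data_dict)
```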

### Behind the JSON API

The JSON API is parsed directly from the `SPARC` documentation [LaTeX files](https://github.com/SPARC-X/SPARC/tree/master/doc/.LaTeX).
The JSON API file (`sparc/sparc_json_api/parameters.json`) distributed by `SPARC-X-API` is generated by:

```bash
```
@@ -77,9 +77,9 @@ sis = SparcAPI()

### Retrieving parameters from old SPARC calculations

The `sparc.SPARC` calculator supports a `restart` mode which will reconstruct all
parameters, psp files and atomic information from an existing SPARC calculation and
rerun it.

```python
from sparc import SPARC
@@ -96,14 +96,14 @@ old_atoms.get_potential_energy()
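# A hedged reconstruction of the elided lines (keyword names are assumed):
# point the calculator at an existing calculation directory and rerun it
calc = SPARC(restart=True, directory="old-calculation.sparc")
old_atoms = calc.atoms.copy()
old_atoms.calc = calc
old_atoms.get_potential_energy()
```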

### Rules for input parameters in `sparc.SPARC` calculator

When constructing the `sparc.SPARC` calculator using the syntax
```python
calc = SPARC(directory="", **kwargs)
```
the parameters are handled in the following priority:
1) Parameters available to `.inpt` files (i.e. **CAPITALIZED**) have the highest priority and overwrite all special inputs. They should be set directly using the atomic-unit values (i.e. the same values as they appear in the `.inpt` files).
2) Special inputs (i.e. `h`, `kpts`, `gpts`, `xc`, `convergence`) have the second highest priority and overwrite default values. They use the ASE unit system (i.e. Å, eV, GPa, fs).
3) If none of the parameters are provided, `SPARC` uses its default parameter set, currently
```python
{"xc": "pbe", "h": 0.25, "kpts": (1, 1, 1)}
```
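
A short sketch of how these priorities interact (`MESH_SPACING` as the `.inpt`-level counterpart of `h` is an assumption used for illustration):

```python
from sparc import SPARC

# The CAPITALIZED input wins: MESH_SPACING (atomic units) overrides the
# special input `h` (Å), which in turn overrides the default h=0.25
calc = SPARC(MESH_SPACING=0.4, h=0.22, xc="pbe", kpts=(2, 2, 2))
```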
Expand All @@ -114,11 +114,11 @@ Additionally, boolean inputs (i.e. `PRINT_FORCES`) can be written in both intege

### Multiple occurrences of output files

In a typical SPARC calculation, there may be multiple result files in the SPARC bundle, with different suffixes (e.g. `.out`, `.out_01`, `.out_02` etc.).
These files can be a result of restarted geometry optimization / AIMD or written by an ASE optimizer.

When using `read_sparc` to access the files, you can add the `include_all_files=True` option to parse
trajectories from all files.

```python
from sparc.io import read_sparc
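# A hedged sketch completing the elided example: read every image from all
# output files in the bundle (the path and index value are illustrative)
images = read_sparc("path/to/your-calc.sparc", index=":", include_all_files=True)
```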
4 changes: 2 additions & 2 deletions doc/changes_v0.1.md
@@ -14,14 +14,14 @@ Nevertheless, reading calculation results generated by a v0.1 API code will not
```
which maps atoms 3, 2, 1, 0 from the SPARC .ion file order to atoms 0, 1, 2, 3 in ASE order. This is useful for systems that are constructed by ASE's `add_adsorbate` method.

3. v1.0 API accepts all SPARC internal parameters (i.e. **CAPITALIZED**) in *atomic units* for consistency.
However, we also keep a list of "special input params" that are conventionally used in other ASE calculators and use the Å / eV / GPa / fs unit system (see the conversion sketch below).

4. Defining `LATVEC`, `LATVEC_SCALE`, or `CELL` via the calculator parameters is no longer encouraged. Instead, all structure changes should be made to the `Atoms` object.

For more discussion please see the [Advanced Topics] section.
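
As a quick illustration of the two unit systems mentioned in item 3 (a minimal sketch; only the standard Å-to-Bohr conversion is used):

```python
from ase.units import Bohr

h_special = 0.25               # special input `h`, ASE convention (Å)
h_internal = h_special / Bohr  # the same spacing in atomic units
print(f"h = {h_special} Å = {h_internal:.3f} Bohr")  # ~0.472 Bohr
```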

Below is a list of v0.1 methods of the `SPARC` calculator and their current status in the v1.0 API.
`calc` is an instance of `sparc.SPARC`.

| old methods | status in v1.0 API | alternatives |
Expand Down