Open Catalyst Project models

ocp-models is the modeling codebase for the Open Catalyst Project.

It provides implementations of state-of-the-art ML algorithms for catalysis that take arbitrary chemical structures as input and predict energies, forces, and positions.

Installation

The easiest way to install prerequisites is via conda.

After installing conda, run the following commands to create a new environment named ocp-models and install dependencies.

Pre-install step

Install conda-merge:

pip install conda-merge

If you're using system pip, then you may want to add the --user flag to avoid using sudo. Check that you can invoke conda-merge by running conda-merge -h.
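
For example, installing via system pip with the --user flag and then verifying the tool is on your PATH might look like this (a minimal sketch; on Linux, user-installed scripts typically land in ~/.local/bin):

```bash
# Install conda-merge into the user site to avoid needing sudo (system pip only)
pip install --user conda-merge

# Verify the executable is reachable; if this fails, add ~/.local/bin to your PATH
conda-merge -h
```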

GPU machines

These instructions are specifically for PyTorch 1.7.1 and CUDA 11.0.

First, check that CUDA is in your PATH and LD_LIBRARY_PATH, e.g.

$ echo $PATH | tr ':' '\n' | grep cuda
/public/apps/cuda/11.0/bin

$ echo $LD_LIBRARY_PATH | tr ':' '\n' | grep cuda
/public/apps/cuda/11.0/lib64

The exact paths may differ on your system.
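
If the grep above prints nothing, CUDA is not on your paths yet. A minimal sketch of exporting them, assuming a hypothetical install prefix of /usr/local/cuda-11.0 (substitute the location on your system):

```bash
# Hypothetical CUDA prefix; replace with your actual CUDA 11.0 install location
export CUDA_HOME=/usr/local/cuda-11.0
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
```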

Then install the dependencies:

conda-merge env.common.yml env.gpu.yml > env.yml
conda env create -f env.yml

Activate the conda environment with conda activate ocp-models.

Install this package in editable mode:

pip install -e .

Finally, install the pre-commit hooks:

pre-commit install
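
As a quick sanity check (a convenience, not part of the official steps), you can confirm that the environment's PyTorch build matches the expected versions and can see the GPU:

```bash
# Should print the PyTorch version, its CUDA version (11.0), and True
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```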

CPU-only machines

Skip this section if you already completed the GPU installation above.

conda-merge env.common.yml env.cpu.yml > env.yml
conda env create -f env.yml
conda activate ocp-models
pip install -e .
pre-commit install
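
A similar sanity check for the CPU-only environment (again, just a convenience) verifies that the core dependencies import cleanly:

```bash
# Both imports should succeed; cuda.is_available() is expected to be False here
python -c "import torch, torch_geometric; print(torch.__version__, torch_geometric.__version__, torch.cuda.is_available())"
```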

Download data

Dataset download links for all tasks can be found at DATASET.md.

IS2* datasets are stored as LMDB files and are ready to be used upon download. S2EF train+val datasets require an additional preprocessing step.

For convenience, a self-contained script (scripts/download_data.py) can download, preprocess, and organize the data directories so they are readily usable by the existing configs.

For IS2*, run the script as:

python scripts/download_data.py --task is2re

For S2EF train/val, run the script as follows; a fully specified example appears after the flag descriptions:

python scripts/download_data.py --task s2ef --split SPLIT_SIZE --get-edges --num-workers WORKERS --ref-energy
  • --split: split size to download: "200k", "2M", "20M", "all", "val_id", "val_ood_ads", "val_ood_cat", or "val_ood_both".
  • --get-edges: includes edge information in the LMDBs (~10x storage requirement, ~3-5x slowdown); otherwise, edges are computed on the fly (larger GPU memory requirement).
  • --num-workers: number of workers to parallelize preprocessing across.
  • --ref-energy: uses referenced energies instead of raw energies.
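
For example, a fully specified invocation for the 200k training split with all optional flags enabled could look like the following (the worker count of 8 is just an illustrative value; tune it to your machine):

```bash
python scripts/download_data.py --task s2ef --split 200k --get-edges --num-workers 8 --ref-energy
```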

For S2EF test, run the script as:

python scripts/download_data.py --task s2ef --split test

Train and evaluate models

A detailed description of how to train and evaluate models, run ML-based relaxations, and generate EvalAI submission files can be found here.
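
For orientation only, training runs are typically launched through main.py with a YAML config; the config path below is a hypothetical example, and the linked documentation is authoritative for the exact flags and available configs:

```bash
# Hypothetical config path shown for illustration; see the training docs for real ones
python main.py --mode train --config-yml configs/is2re/10k/schnet/schnet.yml
```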

Our evaluation server is hosted on EvalAI. Numbers reported in papers and elsewhere should come from the evaluation server.

Pretrained models

Pretrained model weights accompanying our paper are available here.

Tutorials

Interactive tutorial notebooks can be found here to help familiarize oneself with various components of the repo.

Discussion

For all non-codebase-related questions and to keep up to date with the latest OCP announcements, please join the discussion board. All codebase-related questions and issues should be posted directly on our issues page.

Acknowledgements

Citation

If you use this codebase in your work, consider citing:

@misc{ocp_dataset,
    title         = {The Open Catalyst 2020 (OC20) Dataset and Community Challenges},
    author        = {Lowik Chanussot* and Abhishek Das* and Siddharth Goyal* and Thibaut Lavril* and Muhammed Shuaibi* and Morgane Riviere and Kevin Tran and Javier Heras-Domingo and Caleb Ho and Weihua Hu and Aini Palizhati and Anuroop Sriram and Brandon Wood and Junwoong Yoon and Devi Parikh and C. Lawrence Zitnick and Zachary Ulissi},
    year          = {2020},
    eprint        = {2010.09990},
    archivePrefix = {arXiv}
}

License

MIT