# LLaMP 🦙🔮

Large Language Model Made Powerful for High-fidelity Materials Knowledge Retrieval and Distillation

> [!TIP]
> **TL;DR:** LLaMP is a multimodal retrieval-augmented generation (RAG) framework of hierarchical ReAct agents that can dynamically and recursively interact with the Materials Project to ground LLMs in high-fidelity materials informatics.

This repository accompanies our paper [LLaMP: Large Language Model Made Powerful for High-fidelity Materials Knowledge Retrieval and Distillation](https://arxiv.org/abs/2401.17244). Our codebase is built on LangChain and designed to be modular and extensible; it can be used to reproduce the experiments in the paper as well as to develop new ones.

LLaMP is also a homonym of Large Language model Materials Project. 😉 It empowers LLMs with a large-scale computational materials database to reduce the likelihood of hallucination in materials informatics.
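
To give a flavor of the pattern LLaMP builds on, here is a minimal, illustrative sketch of a single ReAct agent equipped with one Materials Project tool, written against LangChain and the `mp-api` client. The tool, prompt, and model choice below are stand-ins for illustration only, not LLaMP's actual agents or internals:

```python
# Illustrative only -- not LLaMP's internals: one ReAct agent with a single
# Materials Project tool, showing the agent-over-tools pattern that LLaMP
# extends into a hierarchy. Assumes OPENAI_API_KEY and MP_API_KEY are set.
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from mp_api.client import MPRester

@tool
def get_structure(material_id: str) -> str:
    """Fetch the relaxed structure for a Materials Project ID, e.g. 'mp-149'."""
    with MPRester() as mpr:  # reads MP_API_KEY from the environment
        return str(mpr.get_structure_by_material_id(material_id))

llm = ChatOpenAI(model="gpt-4", temperature=0)
prompt = hub.pull("hwchase17/react")  # a stock ReAct prompt from LangChain Hub
agent = create_react_agent(llm, [get_structure], prompt)
executor = AgentExecutor(agent=agent, tools=[get_structure], verbose=True)
executor.invoke({"input": "What is the space group of silicon, mp-149?"})
```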

## 🔮 Quick Start

### Python API

```sh
git clone https://github.com/chiang-yuan/llamp.git
cd llamp/api
pip install -e .
```

After installation, check out the Colab chat notebook or the notebooks in the `experiments` directory to get started.
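
The notebooks talk to both OpenAI and the Materials Project, so you will need API keys for both. A minimal setup sketch, assuming the default environment variable names read by the OpenAI and `mp-api` clients:

```python
import os

# The OpenAI client reads OPENAI_API_KEY and the mp-api client reads
# MP_API_KEY by default; set both before running the agents.
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
os.environ["MP_API_KEY"] = "<your-materials-project-api-key>"
```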

### (Optional) Atomistic Simulation

You may need to install additional packages to support atomistic simulations:

```sh
pip install ase atomate2 jobflow mace-torch
```
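
As a rough sketch of what these packages enable (this is not LLaMP's own simulation pipeline), the snippet below relaxes bulk copper with a pretrained MACE foundation-model calculator driven through ASE:

```python
# Illustrative sketch, not LLaMP's simulation pipeline: relax bulk Cu with a
# MACE foundation-model calculator and ASE's FIRE optimizer.
from ase.build import bulk
from ase.optimize import FIRE
from mace.calculators import mace_mp

atoms = bulk("Cu", "fcc", a=3.6)       # fcc copper primitive cell
atoms.calc = mace_mp(model="small")    # downloads a pretrained checkpoint
FIRE(atoms).run(fmax=0.05)             # relax until max force < 0.05 eV/Å
print(atoms.get_potential_energy())    # total energy in eV
```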

### (Optional) Docker Web Interface

```sh
docker-compose up --build
```

## 👋 Contributing

We understand it can sometimes be difficult to navigate the Materials Project database! We want everyone to be able to access materials informatics through conversational AI, and we are looking for contributors to help us build a more powerful and user-friendly LLaMP that supports more MP API endpoints, external datastores, and agents.

To contribute to LLaMP, please follow these steps:

1. Fork the repository
2. Set up environment variables: `cp .env.example .env.local`
3. Deploy the local development environment: `docker-compose up`
4. Make your changes and submit a pull request

## 🌟 Authors and Citation


If you use LLaMP, our code, or our data in your research, please cite our paper:

```bibtex
@article{chiang2024llamp,
  title={LLaMP: Large Language Model Made Powerful for High-fidelity Materials Knowledge Retrieval and Distillation},
  author={Chiang, Yuan and Chou, Chia-Hong and Riebesell, Janosh},
  journal={arXiv preprint arXiv:2401.17244},
  year={2024}
}
```

## 🤗 Acknowledgements

We thank Matthew McDermott (@mattmcdermott) and Jordan Burns in Materials Science and Engineering at UC Berkeley for their valuable feedback and suggestions. We also thank the Materials Project team for their support and for providing the data used in this work, and Dr. Karlo Berket (@kbuma) and Dr. Anubhav Jain (@computron) for their advice and guidance.