Official PyTorch implementation of the paper "Change-Agent: Toward Interactive Comprehensive Remote Sensing Change Interpretation and Analysis" in [IEEE] (accepted by IEEE TGRS 2024).
- 2024-06: The code is available.
- 2024-03: The paper is available.
- Download the LEVIR-MCI dataset: LEVIR-MCI (Available Now!).
- This dataset is an extension of our previously established LEVIR-CC dataset. It contains bi-temporal images as well as diverse change detection masks and descriptive sentences. It provides a crucial data foundation for exploring multi-task learning for change detection and change captioning.
The overview of the MCI model:
Environment Installation:
Step 1: Create a virtual environment named `Multi_change_env` and activate it:

```bash
conda create -n Multi_change_env python=3.9
conda activate Multi_change_env
```

Step 2: Download or clone the repository:

```bash
git clone https://github.com/Chen-Yang-Liu/Change-Agent.git
cd ./Change-Agent/Multi_change
```

Step 3: Install dependencies:

```bash
pip install -r requirements.txt
```
Download Dataset:
Link: LEVIR-MCI. The data structure of LEVIR-MCI is organized as follows:
```
├─/DATA_PATH_ROOT/Levir-MCI-dataset/
│  ├─LevirCCcaptions.json
│  ├─images
│  │  ├─train
│  │  │  ├─A
│  │  │  ├─B
│  │  │  ├─label
│  │  ├─val
│  │  │  ├─A
│  │  │  ├─B
│  │  │  ├─label
│  │  ├─test
│  │  │  ├─A
│  │  │  ├─B
│  │  │  ├─label
```
where folder `A` contains pre-phase images, folder `B` contains post-phase images, and folder `label` contains the change detection masks.
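Before training, it can help to confirm the dataset tree matches the layout above. The helper below is our own sketch (not part of the repository); the path passed to it is the `/DATA_PATH_ROOT` placeholder used throughout this README:

```python
# Hypothetical helper (not shipped with the repository): check that the
# LEVIR-MCI directory tree matches the layout described in this README.
import os

SPLITS = ("train", "val", "test")
SUBDIRS = ("A", "B", "label")  # pre-phase, post-phase, change masks

def find_missing_dirs(root):
    """Return the expected sub-directories that are missing under root/images."""
    missing = []
    images = os.path.join(root, "images")
    for split in SPLITS:
        for sub in SUBDIRS:
            path = os.path.join(images, split, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing

if __name__ == "__main__":
    missing = find_missing_dirs("/DATA_PATH_ROOT/Levir-MCI-dataset")
    if missing:
        print("Missing directories:")
        for p in missing:
            print(" ", p)
    else:
        print("Dataset layout looks correct.")
```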
Extract text files for the descriptions of each image pair in LEVIR-MCI:
```bash
python preprocess_data.py
```

After that, you can find some generated files in `./data/LEVIR_MCI/`.
Make sure you have performed the data preparation above. Then, start training as follows:
```bash
python train.py --train_goal 2 --data_folder /DATA_PATH_ROOT/Levir-MCI-dataset/images --savepath ./models_ckpt/
```

After training, evaluate the model on the test set:

```bash
python test.py --data_folder /DATA_PATH_ROOT/Levir-MCI-dataset/images --checkpoint {checkpoint_PATH}
```
We recommend training the model 5 times and reporting the average score.
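The averaging over runs can be done with a small sketch like the one below (our own helper, not part of the repository; the metric names are illustrative, not the exact keys printed by `test.py`):

```python
# Hypothetical sketch: average evaluation metrics over several training runs.
def average_runs(runs):
    """runs: list of dicts mapping metric name -> score for one run."""
    if not runs:
        return {}
    avg = {}
    for name in runs[0]:
        avg[name] = sum(r[name] for r in runs) / len(runs)
    return avg
```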
Run inference to get started as follows:
```bash
python predict.py --imgA_path {imgA_path} --imgB_path {imgB_path} --mask_save_path ./CDmask.png
```
You can modify `--checkpoint` in `Change_Perception.define_args()` of `predict.py` to use your own model. Alternatively, you can download our pretrained model `MCI_model.pth` here: [Hugging face]. After that, put it in `./models_ckpt/`.
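To run `predict.py` over a whole folder of image pairs, a small wrapper like the following can enumerate matching files in the `A` and `B` folders and build the corresponding commands. This helper is our own sketch, not part of the repository, and it assumes pre- and post-phase images share file names (as in LEVIR-MCI):

```python
# Hypothetical batch-inference helper (not shipped with the repository).
import os

def list_image_pairs(dirA, dirB, exts=(".png", ".jpg", ".tif")):
    """Yield (imgA_path, imgB_path) for image files present in both folders."""
    names = sorted(
        n for n in os.listdir(dirA)
        if n.lower().endswith(exts) and os.path.exists(os.path.join(dirB, n))
    )
    for n in names:
        yield os.path.join(dirA, n), os.path.join(dirB, n)

def build_predict_command(imgA, imgB, mask_out):
    """Build the predict.py invocation for one image pair."""
    return [
        "python", "predict.py",
        "--imgA_path", imgA,
        "--imgB_path", imgB,
        "--mask_save_path", mask_out,
    ]
```

Each returned command list can then be passed to `subprocess.run`, saving one mask per pair.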
Agent Installation:
```bash
cd ./Change-Agent/lagent-main
pip install -e .[all]
```
Run Agent:
cd into the `Multi_change` folder:

```bash
cd ./Change-Agent/Multi_change
```
(1) Run Agent Cli Demo:
```bash
# You need to install streamlit first:
# pip install streamlit
python try_chat.py
```
(2) Run Agent Web Demo:
```bash
# You need to install streamlit first:
# pip install streamlit
streamlit run react_web_demo.py
```
If you find this paper useful in your research, please consider citing:
```bibtex
@ARTICLE{Liu_Change_Agent,
  author={Liu, Chenyang and Chen, Keyan and Zhang, Haotian and Qi, Zipeng and Zou, Zhengxia and Shi, Zhenwei},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  title={Change-Agent: Toward Interactive Comprehensive Remote Sensing Change Interpretation and Analysis},
  year={2024},
  volume={},
  number={},
  pages={1-1},
  keywords={Remote sensing;Feature extraction;Semantics;Transformers;Roads;Earth;Task analysis;Interactive Change-Agent;change captioning;change detection;multi-task learning;large language model},
  doi={10.1109/TGRS.2024.3425815}
}
```
Thanks to the following repository:
This repo is distributed under MIT License. The code can be used for academic purposes only.
If you have any other questions ❓, please feel free to contact us 👬