Sparse Coding Inspired LSTM and Self-Attention Integration for Medical Image Segmentation

News

  • Source code for our paper, under review at IEEE Transactions on Image Processing, is available. [2024.08.06]
  • The paper is accepted by IEEE Transactions on Image Processing. [2024.10.13]
  • The paper is published in IEEE Transactions on Image Processing 🥳 (link). [2024.10.28]

Motivation in One Figure

[Figure: Our motivation]

Two motivations drive the deep fusion of LSTM and self-attention (SA): first, the computation of LSTM states is intrinsically similar to the computation of the QKV matrices in SA; second, LSTM can effectively provide SA with enhanced representations of historical information during sparse coding.
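As a minimal sketch (not the paper's implementation), the snippet below contrasts the two computations the figure refers to: both the LSTM gates and the SA projections Q, K, V are learned linear maps of the same input, which is the structural similarity the fusion builds on. Recurrent terms of the LSTM are omitted here for brevity.

import torch
import torch.nn as nn

d = 64                       # illustrative feature dimension
x = torch.randn(10, d)       # a sequence of 10 feature vectors

# LSTM-style gates: each gate is a learned linear map of the input
# (the recurrent hidden-state terms are omitted in this sketch).
W_i, W_f, W_o = (nn.Linear(d, d) for _ in range(3))
i_gate = torch.sigmoid(W_i(x))   # input gate
f_gate = torch.sigmoid(W_f(x))   # forget gate
o_gate = torch.sigmoid(W_o(x))   # output gate

# Self-attention: Q, K, V are likewise learned linear maps of the input.
W_q, W_k, W_v = (nn.Linear(d, d) for _ in range(3))
q, k, v = W_q(x), W_k(x), W_v(x)
attn = torch.softmax(q @ k.T / d ** 0.5, dim=-1) @ v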

Dataset

In the case of 2D input, we evaluate our proposed modules on the following datasets:

  • Synapse dataset: this dataset covers a multi-organ segmentation task. (link)
  • ISIC2018 dataset: this dataset focuses on the segmentation of skin lesions. (link)

For the 3D input scenario, we conduct experiments on the following datasets:

  • ACDC dataset: this dataset involves cardiac segmentation. (link)
  • CVC-ClinicDB dataset: this dataset covers polyp segmentation in colonoscopy videos and has been widely used for comparing automatic segmentation methods. (link)

Training and Evaluation on the Sample Dataset and Baseline Presented in the Paper

We present the code for the ACDC dataset and the MT-UNet baseline.

You can follow the steps below to prepare the environment.

  • Step 1: prepare the dataset. Download the ACDC dataset from the link and put it in the ./data/ACDC folder.
  • Step 2: install the required packages by running pip install -r requirements.txt.
  • Step 3: download the pretrained SALSTM model from the link and put it in the ./code/ACDC/Baseline/checkpoint folder.
  • Step 4: download the pretrained LSTMSA model from the link and put it in the ./code/ACDC/Baseline/checkpoint folder. (A small check script is sketched right after this list.)
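If you want to confirm that the downloads from Steps 1, 3, and 4 landed in the right places, the following hypothetical convenience script (not part of the repository) checks the expected paths:

from pathlib import Path

# Paths taken from the setup steps above; adjust if your layout differs.
expected = [
    "data/ACDC",
    "code/ACDC/Baseline/checkpoint/best_model_SALSTM.pth",
    "code/ACDC/Baseline/checkpoint/best_model_LSTMSA.pth",
]
for rel in expected:
    status = "OK     " if Path(rel).exists() else "MISSING"
    print(status, rel)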

After preparing the environment, your project folder should look like this:

.
├── code
│   └── ACDC
│       └── Baseline
│           ├── checkpoint
│           │   ├── best_model_LSTMSA.pth
│           │   └── best_model_SALSTM.pth
│           ├── dataset.py
│           ├── model.py # The proposed modules are integrated into this file.
│           ├── train.py
│           └── ...
├── data
│   └── ACDC
│       ├── lists_ACDC
│       │   ├── train.txt
│       │   ├── valid.txt
│       │   └── test.txt
│       ├── train
│       │   ├── case_001_sliceED_0.npz
│       │   ├── case_001_sliceED_1.npz
│       │   └── ...
│       ├── valid
│       │   ├── case_019_sliceED_0.npz
│       │   ├── case_019_sliceED_1.npz
│       │   └── ...
│       └── test
│           ├── case_002_volume_ED.npz
│           ├── case_002_volume_ES.npz
│           └── ...
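If you want to sanity-check the data, the following hypothetical helper (not shipped with the repository) loads one preprocessed training slice from the tree above and prints the arrays it contains, without assuming their key names:

import numpy as np

# Inspect one ACDC training slice; .files lists the arrays stored in the .npz.
sample = np.load("data/ACDC/train/case_001_sliceED_0.npz")
for key in sample.files:
    print(key, sample[key].shape, sample[key].dtype)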

Next, you can run the following commands to train and evaluate the model.

  • Step 5 [optional]: go to ./code/ACDC/Baseline and run nohup python -u train.py --model SALSTM > train_SALSTM.log 2>&1 & to train the SALSTM model. (You can also skip training and evaluate the pretrained model directly by adding the --checkpoint parameter.)
  • Step 6 [optional]: go to ./code/ACDC/Baseline and run nohup python -u train.py --model LSTMSA > train_LSTMSA.log 2>&1 & to train the LSTMSA model. (You can also skip training and evaluate the pretrained model directly by adding the --checkpoint parameter.)
  • Step 7: go to ./code/ACDC/Baseline and run nohup python -u train.py --model SALSTM --checkpoint "./checkpoint/best_model_SALSTM.pth" --max_epochs 1 > test_SALSTM.log 2>&1 & to evaluate the SALSTM model.
  • Step 8: go to ./code/ACDC/Baseline and run nohup python -u train.py --model LSTMSA --checkpoint "./checkpoint/best_model_LSTMSA.pth" --max_epochs 1 > test_LSTMSA.log 2>&1 & to evaluate the LSTMSA model. (A sketch of how these flags might be parsed is given after this list.)
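The three flags used above (--model, --checkpoint, --max_epochs) come straight from the commands; as a hedged illustration of how they might be wired, consider the sketch below. The actual argument parsing lives in the repository's train.py and may differ in names, defaults, and additional options.

import argparse

# Illustrative flags only; not the repository's real train.py.
parser = argparse.ArgumentParser(description="illustrative flags only")
parser.add_argument("--model", choices=["SALSTM", "LSTMSA"],
                    help="which proposed variant to train or evaluate")
parser.add_argument("--checkpoint", default=None,
                    help="path to a pretrained .pth file to load before running")
parser.add_argument("--max_epochs", type=int, default=None,
                    help="setting 1 together with --checkpoint effectively runs evaluation only")
args = parser.parse_args()
print(args)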

To help you verify that the results are reproduced, we provide the training and evaluation logs in the ./code/ACDC/Baseline folder; you can check the results by opening the corresponding log files train_SALSTM.log and train_LSTMSA.log.

Train and Evaluation on Other Datasets and Baselines

You need to modify the corresponding code in each baseline to use our proposed modules (a hypothetical sketch of such a modification follows the list below). The open-source code of the baselines is listed as follows:

  • Baseline on Synapse and ACDC datasets
  • Baseline on ISIC2018 dataset
  • Baseline on CVC-ClinicDB dataset
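As a hypothetical illustration of the kind of change meant above: a baseline's self-attention submodule would be replaced by the proposed module from model.py. The toy block and attribute names below are placeholders, not any baseline's real code.

import torch
import torch.nn as nn
# from model import SALSTM  # the proposed module shipped in this repository

class ToyBaselineBlock(nn.Module):
    """Stand-in for a transformer block inside a baseline segmenter."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)

block = ToyBaselineBlock(dim=64)
# The modification would amount to swapping the attention submodule, e.g.:
# block.attn = SALSTM(dim=64)  # assuming a compatible (batch, tokens, dim) interface
x = torch.randn(2, 16, 64)
print(block(x).shape)  # torch.Size([2, 16, 64])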
