Method

This repository contains our approach to the CVPR 2024 challenge SEGMENT ANYTHING IN MEDICAL IMAGES ON LAPTOP. The challenge focuses on efficiently segmenting multi-modality medical images on CPU with a lightweight, bounding-box-prompted segmentation model.

Evaluation:

Segmentation accuracy metrics:

  • Dice Similarity Coefficient (DSC)
  • Normalized Surface Distance (NSD)

Segmentation efficiency metrics:

  • Running time (s)

Hardware specs:

  • CPU: Intel® Xeon(R) W-2133 @ 3.60GHz
  • RAM: 8 GB
  • Docker version 20.10.13

Inference:

https://drive.google.com/file/d/1XnrKAntAwZo3neEEMNBrKU0h4udoN64h/view

Usage

Sanity test

  • Download the LiteMedSAM, EfficientViT, and EfficientSAM OpenVINO models here and put them in work_dir/openvino_models. After unzipping, the directory structure should be as follows, with the model files renamed as shown below.
work_dir/
├── openvino_models
│   ├── efficientsam
│   │   ├── decoder.bin
│   │   ├── decoder.xml
│   │   ├── encoder.bin
│   │   └── encoder.xml
│   ├── efficientvit
│   │   ├── decoder.bin
│   │   ├── decoder.xml
│   │   ├── encoder.bin
│   │   └── encoder.xml
│   └── litemedsam
│       ├── decoder.bin
│       ├── decoder.xml
│       ├── encoder.bin
│       └── encoder.xml
  • Download the demo data here. The directory structure should be as follows; a small script that sanity-checks both directory layouts is sketched after the tree.
test_demo/
├── gts
│   ├── *.npz
│   └── *.npz
├── imgs
│   ├── *.npz
│   └── *.npz
└── segs
    ├── *.npz
    └── *.npz
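
To confirm the models and demo data are in place before running inference, here is a minimal Python sketch (our own illustration, not part of the repository's scripts) that checks the expected file layout and lists the arrays stored in one demo .npz file:

```python
import numpy as np
from pathlib import Path

# Expected OpenVINO model files, following the tree above.
models_root = Path("work_dir/openvino_models")
for model in ("litemedsam", "efficientvit", "efficientsam"):
    for stem in ("encoder", "decoder"):
        for ext in (".xml", ".bin"):
            f = models_root / model / f"{stem}{ext}"
            print(("OK      " if f.is_file() else "MISSING ") + str(f))

# Inspect the first demo case; key names and shapes vary per modality,
# so we simply list whatever the archive contains.
case = next(Path("test_demo/imgs").glob("*.npz"))
with np.load(case) as data:
    for key in data.files:
        print(f"{case.name}: {key} -> shape={data[key].shape}, dtype={data[key].dtype}")
```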

C++ Inference

  • Run the following commands to compile the C++ code and run inference on the demo data. As a reference, a Python sketch of loading the same OpenVINO models follows the commands.
cd cpp
cmake -S . -B build -D CMAKE_BUILD_TYPE=Release
cmake --build build --verbose -j$(nproc)
cd build
./main ../../work_dir/openvino_models/litemedsam/encoder.xml ../../work_dir/openvino_models/litemedsam/decoder.xml ../../work_dir/openvino_models/efficientvit/encoder.xml ../../work_dir/openvino_models/efficientvit/decoder.xml ../../test_demo/imgs ../../test_demo/imgs ../../test_demo/imgs
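
For orientation, the following Python sketch loads the same OpenVINO encoder/decoder pair and runs a dummy forward pass through the encoder. It is only an illustration of the model interfaces (tensor names and shapes are read from the models rather than assumed); the actual preprocessing, box prompting, and postprocessing live in the C++ code invoked above.

```python
import numpy as np
import openvino as ov

core = ov.Core()
enc = core.compile_model("work_dir/openvino_models/litemedsam/encoder.xml", "CPU")
dec = core.compile_model("work_dir/openvino_models/litemedsam/decoder.xml", "CPU")

# Print the decoder's expected inputs instead of guessing their names/shapes.
for port in dec.inputs:
    print("decoder input:", port.any_name, port.partial_shape)

# Dummy image tensor with whatever static shape the encoder was exported with.
in_shape = list(enc.inputs[0].shape)
dummy = np.zeros(in_shape, dtype=np.float32)
embedding = enc([dummy])[enc.output(0)]
print("image embedding shape:", embedding.shape)
```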

Build Docker

docker build -f Dockerfile.cpp -t hawken50.fat .
slim build --target hawken50.fat --tag hawken50 --http-probe=false --include-workdir --mount $PWD/test_demo/test_input/:/workspace/inputs/ --mount $PWD/test_demo/segs/:/workspace/outputs/ --exec "sh predict.sh"
docker save hawken50 | gzip -c > hawken50.tar.gz

Note: don't forget the . at the end of the docker build command.

Run the docker on the testing demo images

docker container run -m 8G --name hawken50 --rm -v $PWD/test_demo/imgs/:/workspace/inputs/ -v $PWD/test_demo/hawken50/:/workspace/outputs/ hawken50:latest /bin/bash -c "sh predict.sh"

Note: please run chmod -R 777 ./* if you run into a Permission denied error.

Save docker

docker save hawken50 | gzip -c > hawken50.tar.gz

Compute Metrics

python evaluation/compute_metrics.py -s test_demo/hawken50 -g test_demo/gts -csv_dir ./metrics.csv
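
If you want to spot-check the reported accuracy, the Dice Similarity Coefficient for a single pair of binary masks can be computed as below. This is a minimal sketch for intuition, not the challenge's official implementation (NSD additionally requires surface-distance tolerances and is omitted); the commented file names and key names are hypothetical.

```python
import numpy as np

def dice_coefficient(seg: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, gt).sum() / denom

# Hypothetical usage on one demo case (adjust file names and keys to your data):
# seg = np.load("test_demo/hawken50/case_0001.npz")["segs"]
# gt  = np.load("test_demo/gts/case_0001.npz")["gts"]
# print(dice_coefficient(seg == 1, gt == 1))  # DSC for label 1
```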

Export OpenVINO model

Step 1. Export ONNX model

Download the LiteMedSAM, EfficientViT, and EfficientSAM checkpoints here and put them in work_dir/checkpoints. Change the --checkpoint and --model-type arguments to export different checkpoints: vit_h, vit_l, vit_b, and vit_t for LiteMedSAM, and vitt and vits for EfficientSAM.

python onnx_decoder_exporter.py --checkpoint work_dir/checkpoints/lite_medsam.pth --output work_dir/onnx_models/lite_medsam_encoder.onnx --model-type vit_t --return-single-mask

Step 2. Export OpenVINO model

Exporting an OpenVINO model requires the OpenVINO toolkit. Follow the instructions here to install it.

Once you have the exported ONNX model from the previous step, convert it to an OpenVINO model with the following command.

ovc lite_medsam_encoder.onnx
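
Equivalently, the conversion can be done from Python with the OpenVINO API (available since OpenVINO 2023.1); the paths below assume the ONNX file from Step 1 and the renamed layout expected by the sanity test:

```python
import openvino as ov

# Convert the ONNX encoder and write an IR pair (encoder.xml + encoder.bin).
model = ov.convert_model("work_dir/onnx_models/lite_medsam_encoder.onnx")
ov.save_model(model, "work_dir/openvino_models/litemedsam/encoder.xml")
```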

Acknowledgements

We thank the authors of LiteMedSAM and EfficientSAM for making their source code publicly available.