EndoSRR: a comprehensive multi-stage approach for endoscopic specular reflection removal
Demo video: EndoSRR_Pre_d1d2_d8d9.mp4
This code was implemented with Python 3.8.16 and PyTorch 1.13.0+cu116. You can install all the requirements via:
pip install -r requirements.txt
Download the ViT-B pretrained model of SAM and place it in the pretrained folder.
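If you prefer to script this step, a minimal sketch is shown below; it uses the official SAM ViT-B release URL, while the destination filename and the pretrained/ folder layout are assumptions based on the instructions above.

# Minimal sketch: fetch the official SAM ViT-B checkpoint into ./pretrained.
# The destination filename is an assumption; keep whichever name your
# config/checkpoint loader expects.
import os
import urllib.request

SAM_VIT_B_URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth"

os.makedirs("pretrained", exist_ok=True)
dest = os.path.join("pretrained", "sam_vit_b_01ec64.pth")
if not os.path.exists(dest):
    urllib.request.urlretrieve(SAM_VIT_B_URL, dest)
print("SAM ViT-B checkpoint:", dest)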
After configuring the YAML file, run the following command to fine-tune the SAM-Adapter:
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch train.py --config configs/cod-sam-vit-b.yaml
Download the pretrained Big-LaMa model of LaMa and place it in the pretrained folder.
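To script the unpacking as well, a minimal sketch follows; the archive name big-lama.zip and its top-level big-lama/ folder are assumptions based on the LaMa release, so adjust them if your download differs.

# Minimal sketch: unpack a previously downloaded big-lama.zip into ./pretrained.
# The archive name and its big-lama/ top-level folder are assumptions.
import os
import zipfile

os.makedirs("pretrained", exist_ok=True)
with zipfile.ZipFile("big-lama.zip") as zf:
    zf.extractall("pretrained")  # expected to yield pretrained/big-lama/...
print("Big-LaMa checkpoint ready under pretrained/big-lama")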
Demo video: EndoSRR_optimization.mp4
Run specular reflection removal with the EndoSRR pre-trained model:
CUDA_VISIBLE_DEVICES=0 python EndoSRR.py \
    --config configs/cod-sam-vit-b.yaml \
    --lama_config lama/configs/prediction/default.yaml \
    --lama_ckpt /pretrained/big-lama/ \
    --model /pretrained/SAM_Adapter/model_epoch_best.pth \
    --input_path /image \
    --save_mask_path 'EndoSRR/mask' \
    --save_inpaint_path 'EndoSRR/inpaint_15' \
    --final_mask_path 'EndoSRR/final_mask' \
    --final_inpaint_path 'EndoSRR/final_inpaint' \
    --dilate_kernel_size 15
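For orientation, the multi-stage detect, dilate, and inpaint loop that EndoSRR.py drives can be pictured roughly as in the sketch below. Here predict_mask and inpaint are hypothetical stand-ins for the fine-tuned SAM-Adapter and Big-LaMa models configured by the flags above, and the number of refinement passes is illustrative, not the script's actual setting.

# Conceptual sketch of the multi-stage detect -> dilate -> inpaint loop.
# predict_mask / inpaint are hypothetical callables standing in for the
# SAM-Adapter and Big-LaMa models; n_passes is illustrative only.
import cv2
import numpy as np

def remove_specular_reflection(image, predict_mask, inpaint,
                               dilate_kernel_size=15, n_passes=2):
    kernel = np.ones((dilate_kernel_size, dilate_kernel_size), np.uint8)
    final_mask = np.zeros(image.shape[:2], np.uint8)
    result = image
    for _ in range(n_passes):
        mask = predict_mask(result)                 # binary highlight mask (uint8, 0/255)
        mask = cv2.dilate(mask, kernel)             # grow the mask to cover highlight halos
        final_mask = cv2.bitwise_or(final_mask, mask)
        result = inpaint(result, mask)              # fill the masked regions
    return result, final_mask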
Process for creating the weakly labeled endoscopic specular reflection dataset.
The complete Reflection Dataset has been released.
Our code is based on SAM-Adapter and LaMa.