# Adverse Weather Multi-Modality Image Fusion via Global and Local Text Perception

## Network Architecture

## Contents

- Testing

## Testing (an example on the Haze set)

```bash
python test.py \
 --ir_path ./Test_imgs/Haze/ir \
 --vi_path ./Test_imgs/Haze/vi \
 --weights_path ./checkpoint/AWM_Fuse.pth \
 --save_path ./result/Haze \
 --input_text ./Test_imgs/Haze/Haze_captions \
 --blip_vi_text ./Test_imgs/Haze/vi_npy \
 --blip_ir_text ./Test_imgs/Haze/ir_npy
```
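The other weather conditions can be tested the same way by swapping the Haze directories. The loop below is a minimal sketch; the `Rain` and `Snow` paths (including `Rain_captions` and `Snow_captions`) are assumptions that simply mirror the Haze layout above, so adjust them to match your local data.

```bash
# Sketch: run the same evaluation for each weather condition.
# Assumes ./Test_imgs/Rain and ./Test_imgs/Snow follow the same layout as ./Test_imgs/Haze.
for COND in Haze Rain Snow; do
  python test.py \
    --ir_path ./Test_imgs/${COND}/ir \
    --vi_path ./Test_imgs/${COND}/vi \
    --weights_path ./checkpoint/AWM_Fuse.pth \
    --save_path ./result/${COND} \
    --input_text ./Test_imgs/${COND}/${COND}_captions \
    --blip_vi_text ./Test_imgs/${COND}/vi_npy \
    --blip_ir_text ./Test_imgs/${COND}/ir_npy
done
```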

## Table

Table 1. Comparison of quantitative results of different methods in ideal and adverse weather scenes. The best scores are in bold, while the second-best scores are in blue.


## Gallery

Fig. 1. Comparison of image fusion results of different methods in ideal and rain scenes. The "Difference" represents the difference map between AWM-Fuse (GT) and AWM-Fuse (Rain).

Fig. 2. Comparison of image fusion results of different methods in ideal and haze scenes. The "Difference" represents the difference map between AWM-Fuse (GT) and AWM-Fuse (Haze).

Fig. 3. Comparison of image fusion results of different methods in ideal and snow scenes. The "Difference" represents the difference map between AWM-Fuse (GT) and AWM-Fuse (Snow).