The year 2018 saw the launch of a dedicated challenge for VQA in the medical domain, the VQA-Med challenge [1], organized as part of the ImageCLEF evaluation campaign [2]. In 2019, the second edition of the VQA-Med challenge [3] was launched, and its test dataset is now publicly available [4].
- Leaderboard (VQA-Med 2018)
- Overview of ImageCLEF 2018 Medical Domain Visual Question Answering Task [Paper]
- UMass at ImageCLEF Medical Visual Question Answering (Med-VQA) 2018 Task [Paper]
- NLM at ImageCLEF 2018 Visual Question Answering in the Medical Domain [Paper]
- Employing Inception-Resnet-v2 and BiLSTM for Medical Domain Visual Question Answering [Paper] [code]
- JUST at VQA-Med: A VGG-Seq2Seq Model [Paper] [code]
- Deep Neural Networks and Decision Tree classifier for Visual Question Answering in the medical domain [Paper]
- Leaderboard (VQA-Med 2019)
- VQA-Med: Overview of the Medical Visual Question Answering Task at ImageCLEF 2019 [paper]
- Ensemble of Streamlined Bilinear Visual Question Answering Models for the ImageCLEF 2019 Challenge in the Medical Domain [paper]
- Zhejiang University at ImageCLEF 2019 Visual Question Answering in the Medical Domain [paper]
- TUA1 at ImageCLEF 2019 VQA-Med: A classification and generation model based on transfer learning [paper]
- JUST at ImageCLEF 2019 Visual Question Answering in the Medical Domain [paper]
- An Encoder-Decoder model for visual question answering in the medical domain [paper]
- Medical Visual Question Answering at ImageCLEF 2019 - VQA-Med [paper]
- Tlemcen University at ImageCLEF 2019 Visual Question Answering Task [paper]
- Leveraging Medical Visual Question Answering with Supporting Facts [paper]
- LSTM in VQA-Med, is it really needed? JCE study on the ImageCLEF 2019 dataset [paper] [code]
- An Xception-GRU Model for Visual Question Answering in the Medical Domain [paper]
- Deep Multimodal Learning for Medical Visual Question Answering [paper]
- MIT Manipal at ImageCLEF 2019 Visual Question Answering in Medical Domain [paper]
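Several of the entries above follow the same basic recipe: a pretrained CNN backbone (VGG, Xception, Inception-ResNet-v2) encodes the image, an RNN (LSTM/GRU, often bidirectional) encodes the question, the two features are fused, and the answer is predicted as a class over a fixed answer vocabulary. The snippet below is a minimal, hypothetical sketch of that shared pattern, not any team's actual code; the class name, small stand-in CNN, multiplicative fusion, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MedVQABaseline(nn.Module):
    """Sketch of the common CNN + BiLSTM + classifier pattern (illustrative only)."""

    def __init__(self, vocab_size, num_answers, embed_dim=300, hidden_dim=512):
        super().__init__()
        # Image encoder: a tiny CNN standing in for the pretrained backbones
        # (VGG, Xception, Inception-ResNet-v2) used by the listed systems.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden_dim),
        )
        # Question encoder: word embeddings followed by a BiLSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        # Classifier over a fixed set of candidate answers.
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, image, question_ids):
        img_feat = self.cnn(image)                  # (B, hidden_dim)
        emb = self.embed(question_ids)              # (B, T, embed_dim)
        _, (h, _) = self.lstm(emb)                  # h: (2, B, hidden_dim // 2)
        q_feat = torch.cat([h[0], h[1]], dim=1)     # (B, hidden_dim)
        fused = img_feat * q_feat                   # simple element-wise fusion
        return self.classifier(fused)               # answer logits

# Example forward pass with dummy inputs (illustrative sizes).
model = MedVQABaseline(vocab_size=2000, num_answers=500)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(1, 2000, (2, 12)))
print(logits.shape)  # torch.Size([2, 500])
```

Systems that instead generate free-form answers (e.g. the Seq2Seq and encoder-decoder entries) replace the final classifier with an RNN decoder conditioned on the fused features.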