Abstract: In doctor-patient conversations, identifying medically relevant information is crucial, motivating the need for conversation summarization. In this work, we first propose a deployable real-time speech summarization system for real-world industrial applications, which generates a local summary after every N speech utterances within a conversation and a global summary after the conversation ends. Our system can enhance user experience from a business standpoint while also reducing computational costs from a technical perspective. Second, we present VietMed-Sum which, to our knowledge, is the first speech summarization dataset for medical conversations. Third, we are the first to have LLM and human annotators collaboratively create gold-standard and synthetic summaries for medical conversation summarization. Finally, we present baseline results of state-of-the-art models on VietMed-Sum. All code, data (English-translated and Vietnamese) and models are available online.
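The local/global summarization scheme described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: `summarize`, `run_conversation`, and the joining logic are hypothetical placeholders for the real summarization model.

```python
def summarize(texts):
    # Hypothetical stand-in: a real system would invoke a trained
    # summarization model here instead of joining the inputs.
    return " | ".join(texts)

def run_conversation(utterances, n=3):
    """Emit a local summary after every n utterances and a global
    summary once the conversation ends (sketch of the scheme above)."""
    local_summaries = []
    buffer = []
    for utt in utterances:
        buffer.append(utt)
        if len(buffer) == n:
            local_summaries.append(summarize(buffer))
            buffer = []
    if buffer:  # summarize any trailing utterances at conversation end
        local_summaries.append(summarize(buffer))
    # Global summary built from the accumulated local summaries
    global_summary = summarize(local_summaries)
    return local_summaries, global_summary
```

Summarizing incrementally in this way means only a small window of utterances is processed at a time, which is where the computational savings mentioned above come from.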
Contributions to the demo:
- Monish Reddy
- Le Duc Khai
Citation: Please cite this paper: https://doi.org/10.21437/Interspeech.2024-2250

```bibtex
@inproceedings{leduc24_interspeech,
  title     = {Real-time Speech Summarization for Medical Conversations},
  author    = {Khai Le-Duc and Khai-Nguyen Nguyen and Long Vo-Dang and Truong-Son Hy},
  year      = {2024},
  booktitle = {Interspeech 2024},
  pages     = {1960--1964},
  doi       = {10.21437/Interspeech.2024-2250},
}
```