The emergence of large language models (LLMs) has led to LLM-generated text that is highly sophisticated and often nearly indistinguishable from human-written text. However, this has also raised concerns about potential misuse, such as spreading misinformation and disrupting the education system. Although many detection approaches have been proposed, a comprehensive understanding of their achievements and remaining challenges is still lacking. This survey provides an overview of existing LLM-generated text detection techniques with the aim of improving the control and regulation of language generation models. Furthermore, we highlight crucial considerations for future research, including the development of comprehensive evaluation metrics and the threat posed by open-source LLMs, to drive progress in the area of LLM-generated text detection.
Cite this work:
Ruixiang Tang, Yu-Neng Chuang, Xia Hu. "The Science of LLM-Generated Text Detection." Rice University, 2023
BibLaTeX entry:
@unpublished{tang2023the,
  title={The Science of LLM-Generated Text Detection},
  author={Ruixiang Tang and Yu-Neng Chuang and Xia Hu},
  howpublished={OpenReview Preprint},
  year={2023},
  note={preprint under review}
}