This repository contains the data and code for the MTurk human studies investigating whether human collaboration can improve the accuracy of detecting LLM-generated deepfake texts.
It supports the AAAI HCOMP 2023 paper "Does Human Collaboration Enhance the Accuracy of Identifying LLM-Generated Deepfake Texts?" by Adaku Uchendu*, Jooyoung Lee*, Hua Shen*, Thai Le, Ting-Hao 'Kenneth' Huang, and Dongwon Lee.
If you find this repo helpful to your research, please cite the paper:
@inproceedings{uchendu2023understanding,
  title={Does Human Collaboration Enhance the Accuracy of Identifying LLM-Generated Deepfake Texts?},
  author={Uchendu, Adaku and Lee, Jooyoung and Shen, Hua and Le, Thai and Huang, Ting-Hao 'Kenneth' and Lee, Dongwon},
  booktitle={Proceedings of the AAAI Conference on Human Computation and Crowdsourcing},
  year={2023},
}
You can check and download the dataset used in this study.
The dataset includes 50 instances (articles), where each instance (article) consists of three paragraphs. Within the three paragraphs, one randomly selected paragraph is generated by an LLM and the other two are written by humans.
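The instance layout described above can be sketched as follows. This is a minimal illustration of the structure, not the repository's actual data-construction code; the function name and inputs are hypothetical.

```python
import random

def build_instance(human_paragraphs, llm_paragraph, rng=random):
    """Assemble one 3-paragraph instance: two human-written paragraphs
    plus one LLM-generated paragraph placed at a random position.
    Returns (paragraphs, llm_index)."""
    llm_index = rng.randrange(3)  # random slot for the LLM paragraph
    paragraphs = list(human_paragraphs)
    paragraphs.insert(llm_index, llm_paragraph)
    return paragraphs, llm_index

paragraphs, llm_index = build_instance(
    ["Human paragraph A.", "Human paragraph B."], "LLM paragraph."
)
```

Annotators see the three paragraphs in order and must identify which index holds the LLM-generated one.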
You can also download all data in this repo as a zip file, which includes the aforementioned dataset and the code to conduct the human study with Amazon MTurk.
import os

root_dir = 'the project path'  # set this to your local project path
group1_template = os.path.join(root_dir, "generate_hit_htmls", "group1_template.html")
group2_template = os.path.join(root_dir, "generate_hit_htmls", "group2_template.html")
$ cd root_dir/generate_hit_htmls
$ python generate_hits.py
All generated HIT HTMLs are saved in "root_dir/generate_hit_htmls/htmls/group1".
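The HIT-generation step above fills the group templates with the three paragraphs of each instance. The sketch below shows one plausible way to do the substitution with Python's standard library; the placeholder names (`$para1` etc.) are assumptions and may differ from those in the actual templates under generate_hit_htmls/.

```python
from string import Template

# Hypothetical template fragment; the real group1/group2 templates are
# full HTML pages stored in generate_hit_htmls/.
template = Template("<p>$para1</p><p>$para2</p><p>$para3</p>")

# Substitute one instance's three paragraphs into the template.
html = template.substitute(
    para1="First paragraph.",
    para2="Second paragraph.",
    para3="Third paragraph.",
)
```

Running the substitution once per instance yields one HIT HTML file per article.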
$ cd root_dir/deploy_mturk/product
To create HITs:
$ python create_hit_product.py
To retrieve HITs:
$ python retrieve_hit_product.py
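MTurk HIT creation is typically done through boto3's MTurk client; the sketch below shows the kind of request `create_hit_product.py` likely issues. All parameter values here (title, reward, durations) are placeholders, not the study's actual settings, and the live call requires AWS credentials.

```python
def build_hit_params(question_html, reward="0.50", max_assignments=3):
    """Assemble the keyword arguments for an MTurk create_hit call,
    wrapping the HIT page in the HTMLQuestion XML schema."""
    question_xml = (
        '<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/'
        'AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">'
        f"<HTMLContent><![CDATA[{question_html}]]></HTMLContent>"
        "<FrameHeight>600</FrameHeight></HTMLQuestion>"
    )
    return {
        "Title": "Identify the machine-generated paragraph",   # placeholder
        "Description": "Read three paragraphs and pick the LLM-generated one.",
        "Reward": reward,                                       # placeholder value
        "MaxAssignments": max_assignments,
        "LifetimeInSeconds": 86400,
        "AssignmentDurationInSeconds": 1800,
        "Question": question_xml,
    }

params = build_hit_params("<p>Example HIT body</p>")
# A live deployment would then call:
#   boto3.client("mturk").create_hit(**params)
```

Retrieving results (`retrieve_hit_product.py`) would correspondingly use `list_assignments_for_hit` on the stored HIT IDs.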