1 parent 9e258e4 · commit cc40863
Showing 1 changed file with 22 additions and 0 deletions.
@@ -0,0 +1,22 @@
---
title: FAQ Rewrites
summary: "Enhancing Editorial Tasks: A Case Study on Rewriting Customer Help Page Contents Using Large Language Models
We introduce a German-language dataset of Frequently Asked Question-Answer (FAQ) pairs comprising raw FAQ drafts, their revisions by professional editors, and LLM-generated revisions. The data was used to investigate how large language models (LLMs) can enhance the editorial process of rewriting customer help pages.
The input data was provided by Deutsche Telekom AG (DT), a large German telecommunications company. The corpus comprises 56 question-answer pairs addressing potential customer inquiries across various topics, including additional SIM cards, Netflix subscriptions, relocation, changing mobile service providers, house connection orders, hardware order and delivery status, and fixed-line internet and TV setup. For each FAQ pair, a raw input is provided by specialized departments, and a rewritten gold output is crafted by a professional editor at DT. The final dataset also includes LLM-generated FAQ pairs.
On this dataset, we evaluate the performance of four LLMs using diverse prompts tailored to the rewriting task. We conduct automatic evaluations of content and text quality using ROUGE, BERTScore, and ChatGPT. Furthermore, we let professional editors assess the helpfulness of automatically generated FAQ revisions for editorial enhancement. Our findings indicate that LLMs can produce FAQ reformulations that are beneficial to the editorial process. We observe minimal performance discrepancies among the LLMs on this task, and our survey on helpfulness underscores the subjective nature of editors' perspectives on editorial refinement.
For detailed results, please see our [paper](https://aclanthology.org/2024.inlg-main.13/) accepted at INLG 2024, Tokyo, Japan. The GitHub repository containing the dataset is available at [https://github.com/DFKI-NLP/faq-rewrites-llms](https://github.com/DFKI-NLP/faq-rewrites-llms)."
date: 2024-09-18T00:00:00+00:00
# Optional external URL for project (replaces project detail page).
external_link: https://github.com/DFKI-NLP/faq-rewrites-llms
image:
  caption:
  focal_point: Center
---
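
The automatic evaluation described in the summary compares LLM rewrites against the editors' gold rewrites using ROUGE and BERTScore. Below is a minimal, hypothetical sketch of such a comparison with the `rouge-score` and `bert-score` Python packages; the file name and column names (`faq_rewrites.csv`, `editor_gold`, `llm_rewrite`) are assumptions for illustration and do not reflect the actual schema of the released dataset.

```python
# Minimal sketch: scoring LLM rewrites against editor gold rewrites with
# ROUGE-L and BERTScore, as in the automatic evaluation mentioned above.
# The CSV file and its column names are hypothetical placeholders.
import pandas as pd
from rouge_score import rouge_scorer
from bert_score import score as bert_score

df = pd.read_csv("faq_rewrites.csv")  # assumed export of the 56 FAQ pairs

# Per-pair ROUGE-L F1 between each LLM rewrite and the editor's gold rewrite.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)
rouge_l = [
    scorer.score(gold, pred)["rougeL"].fmeasure
    for gold, pred in zip(df["editor_gold"], df["llm_rewrite"])
]

# Corpus-level BERTScore F1; lang="de" selects a German-capable model.
_, _, f1 = bert_score(df["llm_rewrite"].tolist(), df["editor_gold"].tolist(), lang="de")

print(f"mean ROUGE-L F1:   {sum(rouge_l) / len(rouge_l):.3f}")
print(f"mean BERTScore F1: {f1.mean().item():.3f}")
```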