Code for our paper, "BADGE: BADminton report Generation and Evaluation with LLM," presented at the IJCAI 2024 Workshop IT4PSS.
Badminton enjoys widespread popularity, and reports on matches generally include details such as player names, game scores, and ball types, providing audiences with a comprehensive view of the games. However, writing these reports can be a time-consuming task. This challenge led us to explore whether a Large Language Model (LLM) can automate the generation and evaluation of badminton reports. We introduce a novel framework named BADGE, designed for this purpose using LLMs. Our method consists of two main phases: Report Generation and Report Evaluation. First, badminton-related data is processed by the LLM, which then generates a detailed report of the match. We tested different Input Data Types, In-Context Learning (ICL) strategies, and LLMs, finding that GPT-4 performs best when using the CSV data type with Chain of Thought prompting. After report generation, the LLM evaluates and scores the reports to assess their quality. Our comparisons between the scores assigned by GPT-4 and by human judges show a tendency to prefer GPT-4-generated reports. Since the application of LLMs to badminton reporting remains largely unexplored, our research serves as a foundational step for future advancements in this area. Moreover, our method can be extended to other sports, thereby enhancing sports promotion. For more details, please refer to our paper: https://arxiv.org/abs/2406.18116.
- Clone or download this repo: `git clone https://github.com/AndyChiangSH/BADGE.git`
- Move into this repo: `cd BADGE`
- Create a virtual environment: `conda env create -f environment.yaml`
- Activate the virtual environment: `conda activate BADGE`
- We sample 10 badminton games spanning the years 2018 to 2021 from ShuttleSet [Wang et al., 2023b]
- Place the data in the `data/` folder
- To preprocess the data into CSV and QA formats, modify the `game`, `player_A`, `player_B`, and `competition` parameters in `code/data/CSV+QA.py` (see the parameter sketch below) and run `python code/data/CSV+QA.py`
- The CSV-formatted data will be placed in the `CSV/<game>/` folder
- The QA-formatted data will be placed in the `QA/<game>/` folder
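As a quick reference, here is a hypothetical look at the parameter block you would edit at the top of `code/data/CSV+QA.py`. The variable names come from the step above, but the values are placeholders, not actual entries from the dataset.

```python
# Hypothetical parameter block in code/data/CSV+QA.py (values are placeholders).
game = "<game_folder_under_data>"     # placeholder: folder name of the game in data/
player_A = "<first_player_name>"      # placeholder: name of the first player
player_B = "<second_player_name>"     # placeholder: name of the second player
competition = "<competition_name>"    # placeholder: competition name used in the report
```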
- Prompts for different Input Data Types and In-Context Learning strategies are placed in the `prompt/generation/` folder
- Please export your OpenAI API key first: `export OPENAI_API_KEY=<your_key>`
- To generate reports using GPT-3.5 (`gpt-3.5-turbo-0125`), run `python code/generation/GPT-3.5.py` (see the generation-call sketch below). Generated reports will be placed in the `generation/<game>/GPT-3.5/` folder. It will cost about $0.11 to run this program!
- To generate reports using GPT-4 (`gpt-4-turbo-2024-04-09`), run `python code/generation/GPT-4.py`. Generated reports will be placed in the `generation/<game>/GPT-4/` folder. It will cost about $2.50 to run this program!
- Human-written reports will be placed in the `generation/<game>/human/` folder
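For orientation, below is a minimal sketch of how a report-generation call to the OpenAI API could look, assuming a prompt template from `prompt/generation/` is combined with the preprocessed CSV data. The file names and prompt assembly are assumptions for illustration, not the exact logic of the repo's generation scripts.

```python
# Minimal sketch of a report-generation call (assumed flow, not the repo's exact script).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical file names; replace <game> with an actual game folder.
prompt_template = open("prompt/generation/CSV+CoT.txt", encoding="utf-8").read()
match_data = open("CSV/<game>/match.csv", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[{"role": "user", "content": f"{prompt_template}\n\n{match_data}"}],
)
print(response.choices[0].message.content)  # the generated badminton report
```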
- Prompts for different Evaluation Criteria are placed in the `prompt/evaluation/` folder
- Please export your OpenAI API key first: `export OPENAI_API_KEY=<your_key>`
- To use GPT-4 (`gpt-4-turbo-2024-04-09`) to evaluate the reports generated by GPT-3.5, run `python code/evaluation/GPT-3.5.py` (see the evaluation-call sketch below). Evaluated scores will be placed in the `evaluation/<game>/GPT-3.5/` folder. It will cost about $3.00 to run this program!
- To use GPT-4 (`gpt-4-turbo-2024-04-09`) to evaluate the reports generated by GPT-4, run `python code/evaluation/GPT-4.py`. Evaluated scores will be placed in the `evaluation/<game>/GPT-4/` folder. It will cost about $3.00 to run this program!
- To use GPT-4 (`gpt-4-turbo-2024-04-09`) to evaluate the reports written by humans, run `python code/evaluation/human.py`. Evaluated scores will be placed in the `evaluation/<game>/human/` folder. It will cost about $0.37 to run this program!
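Likewise, here is a rough sketch of a GPT-4-as-judge evaluation call: a criterion prompt from `prompt/evaluation/` is paired with a generated report, and a numeric score is parsed from the reply. The criterion file name, report path, and score parsing are assumptions rather than the repo's exact implementation.

```python
# Sketch of scoring one report with GPT-4 as the judge (assumed flow).
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical file names; replace <game> with an actual game folder.
criterion_prompt = open("prompt/evaluation/structure.txt", encoding="utf-8").read()
report = open("generation/<game>/GPT-3.5/CSV+CoT.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4-turbo-2024-04-09",
    messages=[{"role": "user", "content": f"{criterion_prompt}\n\nReport:\n{report}"}],
)
answer = response.choices[0].message.content

# Assume the judge is asked to end its reply with a numeric score; grab the last number.
numbers = re.findall(r"\d+(?:\.\d+)?", answer)
score = float(numbers[-1]) if numbers else None
print(score)
```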
- Open the link to the demo website
- Select options for Game, Data Type, In-Context Learning, and Large Language Models to generate reports
- You can expand the prompt area to see the full generation prompt
- Click the Generate button to generate the report
- The generated report will be displayed below
- Select options for Evaluation Criteria to evaluate reports
- You can also expand the prompt area to see the full evaluation prompt
- Click the Evaluate button to evaluate the report
- The evaluated score will be displayed below
@misc{chiang2024badgebadmintonreportgeneration,
title={BADGE: BADminton report Generation and Evaluation with LLM},
author={Shang-Hsuan Chiang and Lin-Wei Chao and Kuang-Da Wang and Chih-Chuan Wang and Wen-Chih Peng},
year={2024},
eprint={2406.18116},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2406.18116},
}
- Shang-Hsuan Chiang ([email protected])
- Lin-Wei Chao ([email protected])
- Kuang-Da Wang ([email protected])
- Chih-Chuan Wang ([email protected])
- Wen-Chih Peng ([email protected])