
MJD-Estimator

Implementation of the EMNLP 2024 paper from the MaiNLP Lab - "Seeing the Big through the Small": Can LLMs Approximate Human Judgment Distributions on NLI from a Few Explanations? (paper)

This repository contains the generator, evaluator, and fine-tuning implementation for Model Judgment Distributions (MJDs) extracted from LLMs. We use Llama 3 as an example.

MJD-generator: includes the method to extract MJDs via first-token probability (Section 3 of the paper), the evaluation code for distribution comparison (Section 4.3), and the code for ternary visualization (Section 6.1).

MJD-fine-tuning: includes the fine-tuning implementation for the fine-tuning comparison (Section 4.4 of the paper).

Overall Structure

[Figure: overall structure of the MJD generator, evaluator, and fine-tuning pipeline]

Citation

If you use this code, please cite the paper below:

"Seeing the Big through the Small": Can LLMs Approximate Human Judgment Distributions on NLI from a Few Explanations?

@inproceedings{chen-etal-2024-seeing,
    title = "{``}Seeing the Big through the Small{''}: Can {LLM}s Approximate Human Judgment Distributions on {NLI} from a Few Explanations?",
    author = "Chen, Beiduo  and
      Wang, Xinpeng  and
      Peng, Siyao  and
      Litschko, Robert  and
      Korhonen, Anna  and
      Plank, Barbara",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.842",
    pages = "14396--14419",
    abstract = "Human label variation (HLV) is a valuable source of information that arises when multiple human annotators provide different labels for valid reasons. In Natural Language Inference (NLI) earlier approaches to capturing HLV involve either collecting annotations from many crowd workers to represent human judgment distribution (HJD) or use expert linguists to provide detailed explanations for their chosen labels. While the former method provides denser HJD information, obtaining it is resource-intensive. In contrast, the latter offers richer textual information but it is challenging to scale up to many human judges. Besides, large language models (LLMs) are increasingly used as evaluators ({``}LLM judges{''}) but with mixed results, and few works aim to study HJDs. This study proposes to exploit LLMs to approximate HJDs using a small number of expert labels and explanations. Our experiments show that a few explanations significantly improve LLMs{'} ability to approximate HJDs with and without explicit labels, thereby providing a solution to scale up annotations for HJD. However, fine-tuning smaller soft-label aware models with the LLM-generated model judgment distributions (MJDs) presents partially inconsistent results: while similar in distance, their resulting fine-tuned models and visualized distributions differ substantially. We show the importance of complementing instance-level distance measures with a global-level shape metric and visualization to more effectively evaluate MJDs against human judgment distributions.",
}

Getting Started

Setting up the code environment

$ conda env create -f <...>.yaml

Note that there are two separate conda environment files, one for the generator and one for fine-tuning.

Datasets

The datasets for our experiments are from ChaosNLI and VariErrNLI.

ChaosNLI: NLI dataset with human judgment distributions (HJDs) from 100 crowd-workers. (paper, data)

VariErrNLI: NLI dataset annotated with explanations by 4 linguistic experts. (paper, data)

We pre-processed the two datasets to extract the filtered target dataset NLI_explanations.json, which contains the 341 overlapping NLI instances with 4 explanations each.
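
For reference, the file can be loaded as plain JSON; a minimal sketch is below (the field names inside each instance are not documented here, so inspect the file for its actual keys):

```python
import json

# Load the filtered dataset: 341 overlapping NLI instances, each with 4 expert explanations.
with open("NLI_explanations.json") as f:
    data = json.load(f)

print(len(data), "instances")
first = data[0] if isinstance(data, list) else next(iter(data.values()))
print(first)  # inspect the actual schema (premise, hypothesis, explanations, ...)
```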

Running

1. Move into the folder of the module you choose

cd MJD-generator or cd MJD-fine-tuning

Before running any file, first modify the arguments to your own paths and hyper-parameters.

2. Generation

Generate the MJDs from Llama 3. The Llama 3 model is loaded from Hugging Face.

ipython MJD-generator.ipynb
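
The notebook implements the extraction described in Section 3 of the paper. As a minimal sketch of the first-token-probability idea (the checkpoint name, prompt wording, and label tokens below are illustrative assumptions, not the exact prompts used in the notebook):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint; use your own
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = (
    "Premise: A man is playing a guitar on stage.\n"
    "Hypothesis: A person is performing music.\n"
    "Answer with one word - entailment, neutral, or contradiction: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # logits for the next (first answer) token
probs = torch.softmax(logits, dim=-1)

# Read off the probability mass assigned to the first token of each label word.
# In practice the leading-space variant of each label may be the correct token.
label_token_ids = {
    label: tokenizer.encode(label, add_special_tokens=False)[0]
    for label in ("entailment", "neutral", "contradiction")
}
mjd = {label: probs[tid].item() for label, tid in label_token_ids.items()}
total = sum(mjd.values())
mjd = {label: p / total for label, p in mjd.items()}  # renormalize over the 3 labels
print(mjd)
```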

3. Evaluation

Evaluate the MJDs against the HJDs using distribution comparison metrics, and visualize them with ternary plots.

ipython MJD-evaluate.ipynb
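
As a rough sketch of instance-level distribution comparison (the notebook follows the metrics in Section 4.3 of the paper; the ones below, KL divergence, Jensen-Shannon distance, and total variation distance, are common choices shown only for illustration):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import entropy

# Example three-way distributions over (entailment, neutral, contradiction).
hjd = np.array([0.62, 0.30, 0.08])   # human judgment distribution (e.g. from ChaosNLI)
mjd = np.array([0.55, 0.35, 0.10])   # model judgment distribution from the generator

kl = entropy(hjd, mjd)               # KL(HJD || MJD)
jsd = jensenshannon(hjd, mjd)        # Jensen-Shannon distance
tvd = 0.5 * np.abs(hjd - mjd).sum()  # total variation distance

print(f"KL={kl:.4f}  JSD={jsd:.4f}  TVD={tvd:.4f}")
```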

4. Fine-Tuning

Fine-tune from a pretrained NLI model.

cd MJD-fine-tuning
bash train.sh
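
For orientation, a minimal sketch of soft-label fine-tuning, i.e. training an NLI classifier against a full judgment distribution instead of a single gold label (the checkpoint and loss below are illustrative assumptions; train.sh sets the actual configuration):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed pretrained checkpoint; replace with the NLI model configured in train.sh.
model_id = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
target = torch.tensor([[0.55, 0.35, 0.10]])  # soft target: an MJD or HJD over the 3 labels

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
log_probs = F.log_softmax(model(**inputs).logits, dim=-1)

# KL divergence between the soft target and the model's predicted distribution.
loss = F.kl_div(log_probs, target, reduction="batchmean")
loss.backward()  # then step an optimizer as usual
```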

License

The code under this repository is licensed under the Apache 2.0 License.
