This repository contains a comprehensive survey of recent advancements in Relation Extraction (RE) using Large Language Models (LLMs). The paper explores cutting-edge techniques and methodologies that are transforming the field of information extraction.
The research provides an in-depth analysis of three primary approaches to relation extraction with LLMs:
- **Prompt Design**
  - Explores techniques like Chain-of-Thought (CoT) prompting
  - Demonstrates how carefully crafted prompts can improve model performance (a minimal CoT prompt sketch appears after this list)
- **Alignment Techniques**
  - Addresses challenges in low-incidence tasks
  - Introduces innovative approaches like QA4RE and RAG4RE (see the QA4RE-style sketch after this list)
  - Shows how reformulating relation extraction can unlock LLM capabilities
- **Universal Information Extraction (UIE)**
  - Proposes a unified framework for information extraction tasks
  - Aims to break down silos between different information extraction approaches (a toy unified-schema sketch follows this list)
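To make the prompt-design discussion concrete, here is a minimal sketch of a Chain-of-Thought prompt for relation extraction. The label set, template wording, and example sentence are illustrative assumptions, not taken from the paper, and `build_cot_prompt` is a hypothetical helper:

```python
# A minimal sketch of a Chain-of-Thought (CoT) prompt for relation
# extraction. The relation labels and the example sentence below are
# illustrative; send the resulting prompt to whatever LLM client you use.

RELATION_LABELS = ["founded_by", "employee_of", "headquartered_in", "no_relation"]

COT_TEMPLATE = """Determine the relation between the two marked entities.
Possible relations: {labels}

Sentence: {sentence}
Head entity: {head}
Tail entity: {tail}

Let's think step by step:
1. What role does "{head}" play in the sentence?
2. What role does "{tail}" play in the sentence?
3. How are the two entities connected?

Reasoning:"""

def build_cot_prompt(sentence: str, head: str, tail: str) -> str:
    """Fill the CoT template for one (sentence, entity pair) instance."""
    return COT_TEMPLATE.format(
        labels=", ".join(RELATION_LABELS),
        sentence=sentence,
        head=head,
        tail=tail,
    )

if __name__ == "__main__":
    prompt = build_cot_prompt(
        sentence="Steve Jobs co-founded Apple in Cupertino in 1976.",
        head="Steve Jobs",
        tail="Apple",
    )
    print(prompt)  # pass this string to your LLM of choice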
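```

The QA4RE idea of reformulating relation extraction as multiple-choice question answering can be sketched as follows. The verbalization templates and label set here are hypothetical stand-ins for the paper's actual templates:

```python
# A sketch of a QA4RE-style reformulation: instead of asking the LLM to name
# a relation label directly, each candidate relation is verbalized as a
# multiple-choice option, and the model only has to pick a letter.

from string import ascii_uppercase

# Hypothetical verbalization templates: relation label -> natural sentence.
TEMPLATES = {
    "founded_by":       "{tail} was founded by {head}.",
    "employee_of":      "{head} is an employee of {tail}.",
    "headquartered_in": "{head} is headquartered in {tail}.",
    "no_relation":      "{head} has no known relation to {tail}.",
}

def build_qa4re_prompt(sentence: str, head: str, tail: str) -> tuple[str, dict]:
    """Turn one RE instance into a multiple-choice QA prompt.

    Returns the prompt plus a map from option letter back to relation label,
    so the model's one-letter answer can be decoded into a prediction."""
    options, letter_to_label = [], {}
    for letter, (label, template) in zip(ascii_uppercase, TEMPLATES.items()):
        options.append(f"{letter}. {template.format(head=head, tail=tail)}")
        letter_to_label[letter] = label
    prompt = (
        f"Sentence: {sentence}\n"
        "Which of the following statements is best supported by the sentence?\n"
        + "\n".join(options)
        + "\nAnswer with a single letter."
    )
    return prompt, letter_to_label

prompt, decoder = build_qa4re_prompt(
    "Steve Jobs co-founded Apple in Cupertino in 1976.",
    head="Steve Jobs",
    tail="Apple",
)
print(prompt)        # send to the LLM
print(decoder["A"])  # decode an answer like "A" back to "founded_by"
```

Because every label becomes a fluent sentence, the task looks like the QA-style data LLMs are instruction-tuned on, which is the alignment insight behind QA4RE.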
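Finally, the UIE notion of one target structure shared across extraction tasks can be illustrated with a toy schema. This mimics the spirit of a structured extraction language only; the `Span`/`Link`/`Extraction` names and the linearization format are invented for illustration:

```python
# A toy illustration of the Universal IE idea: a single structured target
# format covers entities, relations, and (via typed spots) event triggers,
# so one model can be trained or prompted for all of these tasks at once.

from dataclasses import dataclass, field

@dataclass
class Span:
    """A 'spot': a typed mention in the text (entity or event trigger)."""
    type: str   # e.g. "person", "organization"
    text: str

@dataclass
class Link:
    """An 'association': a typed link between two spots (a relation)."""
    type: str   # e.g. "founded_by"
    head: Span
    tail: Span

@dataclass
class Extraction:
    """One unified output record shared by NER, RE, and event extraction."""
    spots: list[Span] = field(default_factory=list)
    links: list[Link] = field(default_factory=list)

def render(e: Extraction) -> str:
    """Linearize the record into a bracketed target string, roughly in the
    spirit of a structured extraction language."""
    out = []
    for spot in e.spots:
        inner = "".join(
            f" ({link.type}: {link.tail.text})"
            for link in e.links
            if link.head is spot
        )
        out.append(f"({spot.type}: {spot.text}{inner})")
    return " ".join(out)

# Usage: the same record type expresses entities and the relation between them.
jobs = Span("person", "Steve Jobs")
apple = Span("organization", "Apple")
record = Extraction(spots=[jobs, apple],
                    links=[Link("founded_by", head=apple, tail=jobs)])
print(render(record))
# -> (person: Steve Jobs) (organization: Apple (founded_by: Steve Jobs))
```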
Key findings of the survey include:
- Significant performance improvements across multiple benchmarks
- Successful techniques for addressing LLM limitations in relation extraction
- Promising directions for future research in information extraction
The surveyed methods are evaluated on widely used relation extraction benchmarks:
- DocRED
- TACRED
- New York Times Annotated Corpus
- CoNLL04
- ACE 2005
Directions highlighted for future research include:
- Improved pre-training strategies
- Multilingual dataset integration
- Document-level relation extraction
- Knowledge base-aware models
- Unified information extraction frameworks
If you use this work in your research, please cite the original paper:
```bibtex
@article{Sachan2023RelationExtraction,
  title={State of relation extraction using LLMs: A report},
  author={Sachan, Vangmay and Dong, Yanfei},
  year={2023}
}
```
This research was conducted as part of the Odyssey 2023/2024 program at the National University of Singapore.
Feel free to contact me for further information!