RUIE: Retrieval-based Unified Information Extraction using Large Language Model

📅 2024-09-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost and poor zero-shot generalization of large language models (LLMs) in Unified Information Extraction (UIE), this paper proposes RUIE, a lightweight retrieval-augmented framework. Methodologically, RUIE introduces the first trainable dual-encoder retriever specifically designed for UIE; integrates LLM preference ranking with keyword-enhanced reward modeling; and jointly optimizes via contrastive learning and knowledge distillation. Crucially, RUIE enables zero-shot adaptation to diverse IE tasks through in-context learning—eliminating the need for instruction tuning. Experiments across eight unseen datasets show that RUIE achieves an average F1 score 19.22 points higher than instruction-tuned baselines and 3.13 points higher than existing retrieval-based methods. Moreover, it demonstrates strong scalability across LLMs of varying sizes. Ablation studies validate the effectiveness of each component.
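The summary above describes retrieval-augmented in-context learning: a bi-encoder retriever scores candidate demonstrations against the query, and the top-ranked ones are placed in the LLM's prompt. A minimal sketch of that inference flow, with toy hand-set embedding vectors and example demonstrations standing in for the trained encoder and real IE data (none of the specifics below are from the paper):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve_demonstrations(query_vec, candidates, k=2):
    """Rank candidate demonstrations by similarity to the query; keep top-k."""
    ranked = sorted(candidates, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return ranked[:k]

def build_prompt(task_instruction, demos, query_text):
    """Assemble the ICL prompt: instruction, retrieved demos, then the query."""
    parts = [task_instruction]
    for d in demos:
        parts.append(f"Input: {d['text']}\nOutput: {d['label']}")
    parts.append(f"Input: {query_text}\nOutput:")
    return "\n\n".join(parts)

# Toy NER setup: three candidate demonstrations with illustrative vectors.
candidates = [
    {"text": "Paris is in France.", "label": "(Paris, LOC), (France, LOC)", "vec": [0.9, 0.1, 0.0]},
    {"text": "Alice joined Google.", "label": "(Alice, PER), (Google, ORG)", "vec": [0.1, 0.9, 0.0]},
    {"text": "Berlin hosts the summit.", "label": "(Berlin, LOC)", "vec": [0.8, 0.2, 0.1]},
]
demos = retrieve_demonstrations([1.0, 0.0, 0.0], candidates, k=2)
prompt = build_prompt("Extract named entities as (span, type) pairs.", demos, "Tokyo is in Japan.")
```

In the actual framework the query and demonstration vectors would come from the trained dual-encoder, and the assembled prompt would be passed to a frozen LLM; the ranking-and-assembly logic is the part sketched here.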

📝 Abstract
Unified information extraction (UIE) aims to complete all information extraction tasks using a single model or framework. While previous work has primarily focused on instruction-tuning large language models (LLMs) with constructed datasets, these methods require significant computational resources and struggle to generalize to unseen tasks. To address these limitations, we propose RUIE (Retrieval-based Unified Information Extraction), a framework that leverages in-context learning to enable rapid generalization while reducing computational costs. The key challenge in RUIE is selecting the most beneficial demonstrations for LLMs to effectively handle diverse IE tasks. To achieve this, we integrate LLM preferences for ranking candidate demonstrations and design a keyword-enhanced reward model to capture fine-grained relationships between queries and demonstrations. We then train a bi-encoder retriever for UIE through contrastive learning and knowledge distillation. To the best of our knowledge, RUIE is the first trainable retrieval framework for UIE. Experimental results on 8 held-out datasets demonstrate RUIE's effectiveness in generalizing to unseen tasks, with average F1-score improvements of 19.22 and 3.13 compared to instruction-tuning methods and other retrievers, respectively. Further analysis confirms RUIE's adaptability to LLMs of varying sizes and the importance of its key components.
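The abstract names two training signals for the retriever: contrastive learning over candidate demonstrations and knowledge distillation from the LLM's preference ranking. A toy sketch of what such a joint objective could look like, assuming an InfoNCE-style contrastive term and a KL-divergence distillation term (the temperatures, score values, and exact loss formulation are illustrative assumptions, not taken from the paper):

```python
import math

def softmax(scores, temperature=1.0):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(s / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def contrastive_loss(retriever_scores, positive_index):
    """InfoNCE: negative log-probability assigned to the positive demonstration."""
    probs = softmax(retriever_scores)
    return -math.log(probs[positive_index])

def distillation_loss(retriever_scores, llm_scores, tau=2.0):
    """KL divergence from the LLM preference distribution (teacher)
    to the retriever's score distribution (student)."""
    p = softmax(llm_scores, temperature=tau)        # teacher: LLM preferences
    q = softmax(retriever_scores, temperature=tau)  # student: retriever
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy scores over 3 candidate demonstrations; index 0 is the positive.
retriever_scores = [2.0, 0.5, -1.0]
llm_scores = [1.8, 0.7, -0.5]
loss = contrastive_loss(retriever_scores, 0) + distillation_loss(retriever_scores, llm_scores)
```

The contrastive term pushes the retriever to rank the positive demonstration first, while the distillation term nudges its whole score distribution toward the LLM's preferences; in practice both would be backpropagated through the bi-encoder rather than computed on fixed scores as here.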
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Information Extraction
Generalization Capability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale Language Models
UIE Retrieval Framework
Adaptive Learning Strategies
Xincheng Liao
Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
Junwen Duan
Central South University
Artificial Intelligence · Natural Language Processing · Social Computing
Yixi Huang
Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
Jianxin Wang
School of Computer Science and Engineering, Central South University
Algorithm · Bioinformatics · Computer Network