Few-Shot, No Problem: Descriptive Continual Relation Extraction

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Few-shot continual relation extraction suffers from catastrophic forgetting and difficulty in knowledge consolidation, especially under data scarcity, where modeling relational semantics and mitigating overfitting remain challenging. To address this, we propose the first description-driven retrieval-based continual learning paradigm: leveraging large language models to generate structured relational descriptions, constructing a dual-encoder retrieval framework that jointly encodes class prototypes and semantic descriptions, and designing a reciprocal rank fusion (RRF)-based prediction mechanism for robust inference. Our method achieves significant improvements over state-of-the-art approaches across multiple benchmarks, demonstrating superior stability, strong generalization capability, and effective forgetting mitigation. It establishes a novel paradigm for continual relation learning in low-resource, dynamic environments.

📝 Abstract
Few-shot Continual Relation Extraction is a crucial challenge for enabling AI systems to identify and adapt to evolving relationships in dynamic real-world domains. Traditional memory-based approaches often overfit to limited samples and fail to reinforce old knowledge, and the scarcity of data in few-shot scenarios further exacerbates these issues by hindering effective data augmentation in the latent space. In this paper, we propose a novel retrieval-based solution, starting with a large language model to generate descriptions for each relation. From these descriptions, we introduce a bi-encoder retrieval training paradigm to enrich both sample and class representation learning. Leveraging these enhanced representations, we design a retrieval-based prediction method where each sample "retrieves" the best-fitting relation via a reciprocal rank fusion score that integrates both relation description vectors and class prototypes. Extensive experiments on multiple datasets demonstrate that our method significantly advances the state-of-the-art by maintaining robust performance across sequential tasks, effectively addressing catastrophic forgetting.
Problem

Research questions and friction points this paper is trying to address.

Addresses few-shot continual relation extraction challenges
Overcomes overfitting and data scarcity in dynamic domains
Proposes retrieval-based method to prevent catastrophic forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large language model for relation descriptions
Implements bi-encoder retrieval training paradigm
Employs retrieval-based prediction with reciprocal rank fusion
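The prediction step described above can be sketched as follows. This is a minimal illustration of standard reciprocal rank fusion applied to two ranked lists (one from relation-description embeddings, one from class prototypes), not the authors' implementation; the function name `rrf_predict`, the constant `k=60`, and the use of cosine similarity are assumptions for the sketch.

```python
import numpy as np

def rrf_predict(sample_emb, desc_embs, proto_embs, k=60):
    """Hypothetical sketch: rank candidate relations by cosine similarity
    to (a) relation-description embeddings and (b) class prototypes,
    then fuse the two rankings with a reciprocal rank fusion (RRF) score.
    desc_embs, proto_embs: (num_relations, dim); sample_emb: (dim,)."""
    def ranks(mat):
        sims = mat @ sample_emb / (
            np.linalg.norm(mat, axis=1) * np.linalg.norm(sample_emb) + 1e-9)
        order = np.argsort(-sims)                 # best match first
        r = np.empty_like(order)
        r[order] = np.arange(1, len(order) + 1)   # 1-based rank per relation
        return r

    # RRF: sum of 1/(k + rank) over the two ranked lists
    rrf = 1.0 / (k + ranks(desc_embs)) + 1.0 / (k + ranks(proto_embs))
    return int(np.argmax(rrf))                    # predicted relation index
```

A relation ranked highly by either view receives a large fused score, which makes the prediction robust when one of the two representations is noisy.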
Nguyen Xuan Thanh
Oraichain Labs
Anh Duc Le
Hanoi University of Science and Technology
Quyen Tran
VinAI Research
Thanh-Thien Le
AI Researcher, VinAI Research
Natural Language Processing · Machine Learning · Continual Learning
L. Van
Hanoi University of Science and Technology
Thien Huu Nguyen
University of Oregon
Information Extraction · Deep Learning · Natural Language Processing · Machine Learning