Adaptive Prompting for Continual Relation Extraction: A Within-Task Variance Perspective

📅 2024-12-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address catastrophic forgetting in continual relation extraction (CRE), this paper proposes a memory-replay-free adaptive prompting method. The approach constructs task-specific prompt pools to model intra-task variation, integrates Prefix-tuning with the Mixture-of-Experts (MoE) paradigm, and incorporates generative knowledge distillation to implicitly consolidate prior knowledge—eliminating the need for explicit storage of historical data. Notably, it is the first work to design prompt structures explicitly from the perspective of intra-task variation, thereby jointly modeling both inter-task discrepancies and intra-task variation. Evaluated on multiple CRE benchmarks, the method consistently outperforms existing prompt-based and replay-free approaches, achieving average F1-score improvements of 3.2–5.7 percentage points.

Technology Category

Application Category

📝 Abstract
To address catastrophic forgetting in Continual Relation Extraction (CRE), many current approaches rely on memory buffers to rehearse previously learned knowledge while acquiring new tasks. Recently, prompt-based methods have emerged as potent alternatives to rehearsal-based strategies, demonstrating strong empirical performance. However, upon analyzing existing prompt-based approaches for CRE, we identified several critical limitations, such as inaccurate prompt selection, inadequate mechanisms for mitigating forgetting in shared parameters, and suboptimal handling of cross-task and within-task variances. To overcome these challenges, we draw inspiration from the relationship between prefix-tuning and mixture of experts, proposing a novel approach that employs a prompt pool for each task, capturing variations within each task while enhancing cross-task variances. Furthermore, we incorporate a generative model to consolidate prior knowledge within shared parameters, eliminating the need for explicit data storage. Extensive experiments validate the efficacy of our approach, demonstrating superior performance over state-of-the-art prompt-based and rehearsal-free methods in continual relation extraction.
Problem

Research questions and friction points this paper is trying to address.

Catastrophic Forgetting
Continuous Learning
Relations Extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Prompt Pool
Generative Model
Catastrophic Forgetting Avoidance
🔎 Similar Papers
No similar papers found.
M
Minh Le
VinAI Research
T
Tien Ngoc Luu
Hanoi University of Science and Technology
An Nguyen The
An Nguyen The
FPT Software AI Center
Thanh-Thien Le
Thanh-Thien Le
AI Researcher, VinAI Research
Natural Language ProcessingMachine LearningContinual Learning
Trang Nguyen
Trang Nguyen
Technical Staff, MIT Lincoln Laboratory
Natural Language ProcessingLarge Language ModelsExplainable AICyber Analytics
T
Thanh Tung Nguyen
Moreh Inc.
L
L. Van
Hanoi University of Science and Technology
T
T. Nguyen