Post-Training Language Models for Continual Relation Extraction

📅 2025-04-07
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Real-world textual data (e.g., news articles, social media posts) are highly dynamic, rendering conventional static relation extraction (RE) inadequate for real-time knowledge graph (KG) construction. To address this, we systematically investigate the application of large language models (LLMs) to continual relation extraction (CRE), proposing a memory-replay-based continual learning framework to mitigate catastrophic forgetting. We evaluate task-incremental fine-tuning on TACRED and FewRel, comparing decoder-only architectures (Mistral-7B, Llama2-7B) against an encoder-decoder model (Flan-T5 Base). Results show that LLMs substantially outperform BERT-style encoders: they achieve state-of-the-art whole and average accuracy on TACRED and rank second on FewRel. Our study reveals the critical impact of architectural choice on cross-task knowledge transfer and empirically validates the effectiveness of the LLM-plus-memory-replay paradigm for dynamic KG construction.

๐Ÿ“ Abstract
Real-world data, such as news articles, social media posts, and chatbot conversations, is inherently dynamic and non-stationary, presenting significant challenges for constructing real-time structured representations through knowledge graphs (KGs). Relation Extraction (RE), a fundamental component of KG creation, often struggles to adapt to evolving data when traditional models rely on static, outdated datasets. Continual Relation Extraction (CRE) methods tackle this issue by incrementally learning new relations while preserving previously acquired knowledge. This study investigates the application of pre-trained language models (PLMs), specifically large language models (LLMs), to CRE, with a focus on leveraging memory replay to address catastrophic forgetting. We evaluate decoder-only models (e.g., Mistral-7B and Llama2-7B) and encoder-decoder models (e.g., Flan-T5 Base) on the TACRED and FewRel datasets. Task-incremental fine-tuning of LLMs demonstrates superior performance over earlier approaches using encoder-only models like BERT on TACRED, excelling in seen-task accuracy and overall performance (measured by whole and average accuracy), particularly with the Mistral and Flan-T5 models. Results on FewRel are similarly promising, achieving second place in whole and average accuracy metrics. This work underscores critical factors in knowledge transfer, language model architecture, and KG completeness, advancing CRE with LLMs and memory replay for dynamic, real-time relation extraction.
Problem

Research questions and friction points this paper is trying to address.

Adapting relation extraction to dynamic, non-stationary real-world data
Addressing catastrophic forgetting in continual relation extraction
Evaluating whether LLMs outperform earlier encoder-only models on relation extraction tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes pre-trained language models for continual relation extraction
Employs memory replay to mitigate catastrophic forgetting
Evaluates decoder-only and encoder-decoder models on dynamic datasets
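The memory-replay mechanism the paper builds on can be sketched as follows: after each task, a small number of exemplars per newly learned relation is cached in an episodic memory, and those exemplars are later mixed into the training batches of subsequent tasks so the model keeps seeing old relations. This is a minimal illustrative sketch; the class, method, and parameter names (`ReplayMemory`, `per_relation`, `replay_ratio`) are hypothetical and not taken from the authors' implementation.

```python
import random


class ReplayMemory:
    """Episodic memory for continual relation extraction (illustrative sketch).

    Caches up to `per_relation` exemplars for each relation seen so far,
    and mixes them into new-task batches to mitigate catastrophic forgetting.
    """

    def __init__(self, per_relation=5, seed=0):
        self.per_relation = per_relation
        self.store = {}  # relation label -> list of (sentence, label) pairs
        self.rng = random.Random(seed)

    def add_task(self, examples):
        """After training on a task, cache a few exemplars per new relation."""
        for sentence, label in examples:
            bucket = self.store.setdefault(label, [])
            if len(bucket) < self.per_relation:
                bucket.append((sentence, label))

    def replay_batch(self, current_batch, replay_ratio=0.5):
        """Augment a new task's batch with stored exemplars of old relations."""
        old = [ex for bucket in self.store.values() for ex in bucket]
        n_replay = min(len(old), int(len(current_batch) * replay_ratio))
        return current_batch + self.rng.sample(old, n_replay)


# Hypothetical usage: two tasks arriving sequentially.
memory = ReplayMemory(per_relation=2)
task1 = [("Alice works for Acme.", "per:employee_of")] * 3
memory.add_task(task1)

task2_batch = [("Bob was born in Oslo.", "per:origin")] * 4
mixed = memory.replay_batch(task2_batch, replay_ratio=0.5)
print(len(mixed))  # 4 new examples + 2 replayed old ones
```

In the paper's setting, `mixed` would be the batch actually passed to the LLM fine-tuning step, so gradients on each new task also reflect previously seen relations.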