K-DeCore: Facilitating Knowledge Transfer in Continual Structured Knowledge Reasoning via Knowledge Decoupling

📅 2025-09-21
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing continual learning methods for continual structured knowledge reasoning (CSKR) suffer from poor generalization across tasks and inefficient inference due to parameter growth with task accumulation. Method: We propose a knowledge-decoupling framework that separates reasoning into a task-agnostic, structure-aware module and task-specific modules to enable cross-task knowledge transfer. We further design a dual-perspective memory consolidation mechanism and a structure-guided pseudo-data generation strategy, jointly integrating memory replay, knowledge distillation, and structured pseudo-sample synthesis, all within a fixed parameter budget. The framework supports diverse large language model backbones. Contribution/Results: Our approach achieves significant improvements over state-of-the-art methods on four CSKR benchmarks, simultaneously enhancing continual learning performance and cross-task generalization. It effectively mitigates catastrophic forgetting and parameter explosion, offering scalable and efficient continual structured reasoning.

๐Ÿ“ Abstract
Continual Structured Knowledge Reasoning (CSKR) focuses on training models to handle sequential tasks, where each task involves translating natural language questions into structured queries grounded in structured knowledge. Existing general continual learning approaches face significant challenges when applied to this task, including poor generalization to heterogeneous structured knowledge and inefficient reasoning due to parameter growth as tasks increase. To address these limitations, we propose a novel CSKR framework, K-DeCore, which operates with a fixed number of tunable parameters. Unlike prior methods, K-DeCore introduces a knowledge decoupling mechanism that disentangles the reasoning process into task-specific and task-agnostic stages, effectively bridging the gaps across diverse tasks. Building on this foundation, K-DeCore integrates a dual-perspective memory consolidation mechanism for distinct stages and introduces a structure-guided pseudo-data synthesis strategy to further enhance the model's generalization capabilities. Extensive experiments on four benchmark datasets demonstrate the superiority of K-DeCore over existing continual learning methods across multiple metrics, leveraging various backbone large language models.
Problem

Research questions and friction points this paper is trying to address.

Improving generalization to heterogeneous structured knowledge in continual learning
Addressing inefficient reasoning due to parameter growth with task increases
Bridging knowledge gaps across diverse sequential reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge decoupling mechanism for task-specific and task-agnostic stages
Dual-perspective memory consolidation for distinct stages
Structure-guided pseudo-data synthesis for generalization enhancement
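The ideas above can be illustrated with a minimal sketch. This is NOT the authors' implementation; the class name, the toy skeleton extraction, and the reservoir-sampled replay buffer are assumptions made for illustration. It only shows the shape of the design: a two-stage task-agnostic/task-specific split, a parameter set that does not grow with the number of tasks, and a bounded replay memory for consolidation.

```python
import random


class DecoupledCSKR:
    """Illustrative sketch (hypothetical, not the paper's code): a two-stage
    CSKR pipeline with a fixed parameter budget and a bounded replay memory."""

    def __init__(self, memory_size=8, seed=0):
        # Fixed-size shared parameters: storage does not grow per task.
        self.params = {"shared_adapter": [0.0] * 4}
        self.memory = []              # replayed (question, query) pairs
        self.memory_size = memory_size
        self.seen = 0
        self.rng = random.Random(seed)

    def task_agnostic_stage(self, question):
        # Placeholder for the structure-aware skeleton extraction:
        # lowercase, strip the question mark, tokenize.
        return question.lower().rstrip("?").split()

    def task_specific_stage(self, skeleton):
        # Placeholder mapping from skeleton to a toy SPARQL-like query.
        return "SELECT ?x WHERE { " + " ".join(skeleton) + " }"

    def answer(self, question):
        return self.task_specific_stage(self.task_agnostic_stage(question))

    def consolidate(self, question, query):
        # Bounded memory via reservoir sampling: storage stays
        # O(memory_size) no matter how many tasks accumulate.
        self.seen += 1
        if len(self.memory) < self.memory_size:
            self.memory.append((question, query))
        else:
            j = self.rng.randrange(self.seen)
            if j < self.memory_size:
                self.memory[j] = (question, query)
```

Reservoir sampling is one simple way to keep replay storage constant; the paper's dual-perspective consolidation operates on the two decoupled stages separately, which this single buffer does not capture.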
Yongrui Chen
School of Computer Science and Engineering, Southeast University, China
Yi Huang
China Mobile Research Institute
Yunchang Liu
Shenyu Zhang
Southeast University
Junhao He
School of Computer Science and Engineering, Southeast University, China
Tongtong Wu
Department of Data Science & AI, Monash University, Australia
Guilin Qi
Southeast University
Tianxing Wu
Ph.D. Student, Nanyang Technological University