AI Summary
Existing continual learning methods for continual structured knowledge reasoning (CSKR) suffer from poor generalization across tasks and inefficient inference due to parameter growth as tasks accumulate. Method: We propose a knowledge-decoupling framework that separates reasoning into a task-agnostic, structure-aware module and task-specific modules to enable cross-task knowledge transfer. We further design a dual-perspective memory consolidation mechanism and a structure-guided pseudo-data generation strategy, jointly integrating memory replay, knowledge distillation, and structured pseudo-sample synthesis, all within a fixed parameter budget. The framework supports diverse large language model backbones. Contribution/Results: Our approach achieves significant improvements over state-of-the-art methods on four CSKR benchmarks, simultaneously enhancing continual learning performance and cross-task generalization. It effectively mitigates catastrophic forgetting and parameter explosion, offering scalable and efficient continual structured reasoning.
Abstract
Continual Structured Knowledge Reasoning (CSKR) focuses on training models to handle sequential tasks, where each task involves translating natural language questions into structured queries grounded in structured knowledge. Existing general continual learning approaches face significant challenges when applied to this task, including poor generalization to heterogeneous structured knowledge and inefficient reasoning due to parameter growth as tasks increase. To address these limitations, we propose a novel CSKR framework, K-DeCore, which operates with a fixed number of tunable parameters. Unlike prior methods, K-DeCore introduces a knowledge decoupling mechanism that disentangles the reasoning process into task-specific and task-agnostic stages, effectively bridging the gaps across diverse tasks. Building on this foundation, K-DeCore integrates a dual-perspective memory consolidation mechanism for the distinct stages and introduces a structure-guided pseudo-data synthesis strategy to further enhance the model's generalization capabilities. Extensive experiments on four benchmark datasets demonstrate the superiority of K-DeCore over existing continual learning methods across multiple metrics, using various backbone large language models.