🤖 AI Summary
Existing retrieval-augmented generation (RAG) methods struggle to harmonize a model's internal parametric knowledge with externally retrieved knowledge, which hurts factual consistency and robustness. To address this, the paper proposes a parameter-decoupled framework that jointly localizes capability-specific parameter subspaces, where the relevant knowledge-handling behaviors reside, using combined forward- and backward-propagation signals. A type-customized low-rank fine-tuning strategy then enables dynamic, interpretable selection of knowledge sources. The approach comprises three core components: critical parameter identification, capability-oriented subspace decoupling, and task-aware fine-tuning. Extensive evaluation across multiple datasets and architectures (LLaMA, Qwen, ChatGLM) shows substantial improvements: +12.7% in knowledge-source selection accuracy and +9.3 BLEU-Fact points in generation fidelity. The method exhibits strong generalization and architecture-agnostic performance.
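The "critical parameter identification" step can be illustrated with a toy sketch. The paper combines forward and backward propagation signals; a common proxy for this (used here as an assumption, not the paper's exact formula) is to score each weight by the product of its forward contribution magnitude and its loss-gradient magnitude, then keep the top-k weights as the capability-specific subspace mask:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer y = W x with a squared-error loss L = 0.5 * ||y - t||^2.
W = rng.normal(size=(4, 6))
x = rng.normal(size=6)
t = rng.normal(size=4)

# Forward signal: magnitude of each weight's contribution W_ij * x_j.
forward_contrib = np.abs(W * x)   # x broadcasts over rows

# Backward signal: gradient of the loss w.r.t. each weight.
y = W @ x
grad_y = y - t                    # dL/dy for squared error
grad_W = np.outer(grad_y, x)      # dL/dW

# Combined importance score: forward magnitude * backward magnitude.
importance = forward_contrib * np.abs(grad_W)

# Keep the top-k weights as the capability-specific "subspace" mask.
k = 5
flat = importance.ravel()
top_idx = np.argpartition(flat, -k)[-k:]
mask = np.zeros_like(flat, dtype=bool)
mask[top_idx] = True
mask = mask.reshape(W.shape)
```

In a real LLM the same scoring would be accumulated over a calibration set and per capability (e.g. separate batches probing adherence vs. robustness), yielding one mask per capability.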
📝 Abstract
Retrieval-Augmented Generation (RAG) mitigates hallucination and knowledge obsolescence in Large Language Models (LLMs) by incorporating externally retrieved knowledge. However, existing methods lack effective control mechanisms for integrating internal and external knowledge. Inspired by human cognitive processes, we propose Parenting, a novel framework that decouples, identifies, and purposefully optimizes parameter subspaces related to adherence and robustness. Specifically, Parenting uses a key-parameter mining method that combines forward- and backward-propagation signals to localize the subspaces representing different capabilities. It then employs a type-tailored tuning strategy, applying optimizations suited to each subspace, to achieve a balanced enhancement of both adherence and robustness. Extensive experiments on various datasets and models validate the effectiveness and generalizability of our method.
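The "type-tailored tuning strategy" can be sketched in the same toy setting. Assuming the key-parameter mining step has already produced masks for two capability subspaces (the mask construction and the per-subspace scaling below are illustrative placeholders, not the paper's actual recipe), a LoRA-style low-rank update can be restricted to each subspace and given its own tuning treatment:

```python
import numpy as np

rng = np.random.default_rng(1)

d_out, d_in, rank = 4, 6, 2
W = rng.normal(size=(d_out, d_in))

# Hypothetical masks for two capability subspaces ("adherence" vs.
# "robustness"); in the framework these come from key-parameter mining.
adherence_mask = rng.random((d_out, d_in)) < 0.3
robustness_mask = ~adherence_mask

def low_rank_delta(scale):
    # LoRA-style update B @ A with small random factors (toy initialization).
    A = rng.normal(size=(rank, d_in)) * 0.01
    B = rng.normal(size=(d_out, rank)) * 0.01
    return scale * (B @ A)

# Type-tailored tuning: each subspace gets its own low-rank update with a
# different scale, standing in for the different optimizations per subspace.
delta = (low_rank_delta(1.0) * adherence_mask
         + low_rank_delta(0.5) * robustness_mask)
W_tuned = W + delta
```

Restricting each update to its mask keeps the two capabilities' parameter changes disjoint, which is what allows them to be optimized with different objectives without interfering with each other.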