Knowledge Integration Decay in Search-Augmented Reasoning of Large Language Models

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies and formally defines the Knowledge Integration Decay (KID) problem in search-augmented reasoning, wherein large language models increasingly fail to integrate retrieved knowledge into subsequent reasoning steps as reasoning chains grow longer. To address this issue, the authors propose Self-Anchored Knowledge Encoding (SAKE), a training-free, inference-time strategy that preserves the semantic integrity of critical external knowledge by anchoring it at both the beginning and end of the reasoning process. Experimental results demonstrate that SAKE significantly mitigates KID and substantially improves performance on multi-hop question answering and complex reasoning benchmarks, particularly in long-chain reasoning scenarios.

📝 Abstract
Modern Large Language Models (LLMs) have demonstrated remarkable capabilities in complex tasks by employing search-augmented reasoning to incorporate external knowledge into long chains of thought. However, we identify a critical yet underexplored bottleneck in this paradigm, termed Knowledge Integration Decay (KID). Specifically, we observe that as the length of reasoning generated before search grows, models increasingly fail to integrate retrieved evidence into subsequent reasoning steps, limiting performance even when relevant information is available. To address this, we propose Self-Anchored Knowledge Encoding (SAKE), a training-free inference-time strategy designed to stabilize knowledge utilization. By anchoring retrieved knowledge at both the beginning and end of the reasoning process, SAKE prevents it from being overshadowed by prior context, thereby preserving its semantic integrity. Extensive experiments on multi-hop QA and complex reasoning benchmarks demonstrate that SAKE significantly mitigates KID and improves performance, offering a lightweight yet effective solution for knowledge integration in agentic LLMs.
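The paper does not include an implementation, but the anchoring idea the abstract describes can be sketched at the prompt-assembly level: place the retrieved evidence both before and after the accumulated reasoning so that later generation steps still attend to it. The function name `sake_prompt` and the prompt layout below are illustrative assumptions, not the authors' actual method.

```python
def sake_prompt(question: str, reasoning_so_far: str, evidence: list[str]) -> str:
    """Assemble an inference-time prompt that anchors retrieved evidence at
    both ends of the reasoning context, so long reasoning chains do not
    overshadow it (a sketch of the SAKE idea, not the paper's implementation)."""
    evidence_block = "\n".join(f"- {fact}" for fact in evidence)
    return (
        f"Question: {question}\n\n"
        # First anchor: evidence precedes the reasoning chain.
        f"Retrieved evidence:\n{evidence_block}\n\n"
        f"Reasoning so far:\n{reasoning_so_far}\n\n"
        # Second anchor: evidence is restated after the chain, immediately
        # before the model continues generating, to keep it salient.
        f"Retrieved evidence (restated):\n{evidence_block}\n\n"
        "Continue the reasoning, grounding each step in the evidence above."
    )
```

At each search step of an agentic loop, the newly retrieved passages would be folded into `evidence` and the prompt rebuilt, so the anchors always carry the current evidence set.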
Problem

Research questions and friction points this paper is trying to address.

Knowledge Integration Decay
Search-Augmented Reasoning
Large Language Models
Reasoning Chains
Knowledge Integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge Integration Decay
Search-Augmented Reasoning
Self-Anchored Knowledge Encoding
Large Language Models
Inference-Time Strategy