LFD: Layer Fused Decoding to Exploit External Knowledge in Retrieval-Augmented Generation

📅 2025-08-27
📈 Citations: 0
Influential Citations: 0
🤖 AI Summary
This paper addresses the under-exploitation of retrieved external knowledge in retrieval-augmented generation (RAG). The authors propose a knowledge fusion paradigm grounded in a layer-wise functional analysis of large language models (LLMs). Using a noise-injection probing method, they systematically characterize how factual knowledge is processed across layers and find that intermediate layers are best suited to integrating retrieved external information. Building on this insight, they design Layer Fused Decoding (LFD), a decoding strategy that directly combines an intermediate layer's representations with the final-layer outputs during autoregressive decoding. They further introduce an internal knowledge score (IKS), an automated criterion for selecting the optimal fusion layer. Evaluated on multiple RAG benchmarks, the approach significantly improves factual consistency and answer quality while incurring negligible computational overhead, advancing interpretable and controllable knowledge-enhanced generation by explicitly leveraging the functional specialization of LLM layers.
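
The fusion step can be pictured with a minimal sketch. It assumes that "fusion" means re-using the model's LM head on an intermediate layer's hidden state and mixing the resulting logits with the final-layer logits at each decoding step; the mixing weight `alpha`, the shared LM head, and the random stand-in tensors are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of Layer Fused Decoding (LFD) for a single decoding step.
# Assumption: "fusion" = a weighted combination of logits from an intermediate
# layer and the final layer; the paper's exact rule may differ.
import torch

torch.manual_seed(0)
num_layers, hidden_dim, vocab_size = 32, 64, 1000

# Stand-ins for the per-layer hidden states of the current position
# (embedding layer + 32 transformer layers) and for the LM head.
hidden_states = torch.randn(num_layers + 1, hidden_dim)
lm_head = torch.nn.Linear(hidden_dim, vocab_size, bias=False)

def lfd_next_token(hidden_states, lm_head, fusion_layer, alpha=0.5):
    """Fuse an intermediate layer's prediction with the final layer's."""
    final_logits = lm_head(hidden_states[-1])            # standard decoding path
    fused_logits = lm_head(hidden_states[fusion_layer])  # intermediate-layer path
    combined = (1 - alpha) * final_logits + alpha * fused_logits
    return int(combined.argmax(dim=-1))

print(lfd_next_token(hidden_states, lm_head, fusion_layer=20))
```

In a real model the per-layer hidden states would come from the forward pass at each generation step (e.g., `output_hidden_states=True` in Hugging Face models); random tensors stand in here so the sketch stays self-contained.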

📝 Abstract
Retrieval-augmented generation (RAG) incorporates external knowledge into large language models (LLMs), improving their adaptability to downstream tasks and enabling information updates. Surprisingly, recent empirical evidence demonstrates that injecting noise into retrieved relevant documents paradoxically facilitates exploitation of external knowledge and improves generation quality. Although counterintuitive and challenging to apply in practice, this phenomenon enables granular control and rigorous analysis of how LLMs integrate external knowledge. Therefore, in this paper, we intervene on noise injection and establish a layer-specific functional demarcation within the LLM: shallow layers specialize in local context modeling, intermediate layers focus on integrating long-range external factual knowledge, and deeper layers primarily rely on parametric internal knowledge. Building on this insight, we propose Layer Fused Decoding (LFD), a simple decoding strategy that directly combines representations from an intermediate layer with final-layer decoding outputs to fully exploit the external factual knowledge. To identify the optimal intermediate layer, we introduce an internal knowledge score (IKS) criterion that selects the layer with the lowest IKS value in the latter half of layers. Experimental results across multiple benchmarks demonstrate that LFD helps RAG systems more effectively surface retrieved context knowledge with minimal cost.
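
The abstract specifies only that the fusion layer is the one with the lowest internal knowledge score (IKS) among the latter half of layers. The sketch below illustrates that selection rule; the score function itself is a hypothetical placeholder, not the paper's definition of IKS.

```python
# Minimal sketch of fusion-layer selection via an internal knowledge score (IKS).
# Only the selection rule is taken from the abstract (lowest IKS in the latter
# half of layers); the score function below is a hypothetical placeholder.
import torch

torch.manual_seed(0)
num_layers, hidden_dim, vocab_size = 32, 64, 1000
hidden_states = torch.randn(num_layers + 1, hidden_dim)  # embedding + 32 layers
lm_head = torch.nn.Linear(hidden_dim, vocab_size, bias=False)

def internal_knowledge_score(layer_hidden, lm_head):
    """Hypothetical placeholder: treat the layer's prediction confidence as a
    proxy for reliance on parametric (internal) knowledge."""
    probs = torch.softmax(lm_head(layer_hidden), dim=-1)
    return float(probs.max())

def select_fusion_layer(hidden_states, lm_head):
    n = hidden_states.shape[0] - 1                       # transformer layers only
    candidates = range(n // 2 + 1, n + 1)                # latter half of layers
    scores = {layer: internal_knowledge_score(hidden_states[layer], lm_head)
              for layer in candidates}
    return min(scores, key=scores.get)                   # lowest IKS wins

print(select_fusion_layer(hidden_states, lm_head))
```
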
Problem

Research questions and friction points this paper is trying to address.

Exploiting retrieved external knowledge more effectively in retrieval-augmented generation
Identifying layer-specific functional demarcation within large language models
Improving generation quality through an optimized decoding strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer Fused Decoding (LFD) combines intermediate-layer and final-layer outputs
Internal knowledge score (IKS) selects the optimal intermediate fusion layer
The strategy surfaces external factual knowledge through layer fusion
🔎 Similar Papers
No similar papers found.
Authors
Yang Sun
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University
Lixin Zou
Wuhan University
Information Retrieval, Recommender System, Reinforcement Learning, Large Language Model
Dan Luo
Lehigh University
Zhiyong Xie
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University
Long Zhang
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University
Liming Dong
CSIRO Data61
Software Engineering, Software Traceability, Data Quality, DevOps, AgentOps
Yunwei Zhao
CNCERT/CC
Xixun Lin
Institute of Information Engineering, Chinese Academy of Sciences
Data mining, Graph representation learning, Large language model
Yanxiong Lu
Search Team, WeChat, Tencent Inc.
Chenliang Li
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University