🤖 AI Summary
Large language models (LLMs) often suffer from decay of contextual information in their internal representations, leading to low answer faithfulness. To address this, we propose Context-aware Layer Enhancement (CaLE), the first method to bring V-usable information theory into LLM representation intervention. CaLE quantitatively analyzes how contextual information flows across layers to identify the critical layers where contextual knowledge grows, and amplifies representations there. This enables layer-adaptive, interpretable reweighting of hidden-layer features, moving beyond conventional decoding-only optimization. Evaluated on question-answering tasks with conflicting or unknown contexts, CaLE significantly improves answer faithfulness (+12.7% F1) while preserving generation fluency and leaving the model’s original capabilities intact.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, yet they often struggle to produce context-faithful generations that properly reflect contextual knowledge. While existing approaches focus on enhancing decoding strategies, they ignore the fundamental mechanism of how contextual information is processed within LLMs' internal states. As a result, LLMs remain limited in their ability to fully leverage contextual knowledge. In this paper, we propose Context-aware Layer Enhancement (CaLE), a novel intervention method that enhances the utilization of contextual knowledge within LLMs' internal representations. By employing V-usable information analysis, CaLE strategically amplifies the growth of contextual information at an optimal layer, thereby enriching representations in the final layer. Our experiments demonstrate that CaLE effectively improves context-faithful generation in question-answering tasks, particularly in scenarios involving unknown or conflicting contextual knowledge.
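The core idea — measure per-layer contextual information and amplify the layer where it grows most — can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the `v_info` values stand in for V-usable information estimates (which would require a trained probe), and the amplification factor `alpha` is a hypothetical hyperparameter.

```python
import numpy as np

# Hypothetical per-layer V-usable information estimates for the context.
# In practice these would be measured with a probing model; here they are synthetic.
v_info = np.array([0.10, 0.15, 0.32, 0.45, 0.48, 0.50])

# Layer-to-layer growth of contextual information.
growth = np.diff(v_info)

# CaLE-style choice: intervene at the layer receiving the largest information gain.
target_layer = int(np.argmax(growth)) + 1  # diff[i] is the growth into layer i+1

def amplify_hidden(hidden, layer_idx, alpha=1.5):
    """Scale hidden states at the selected layer by alpha (hypothetical factor)."""
    if layer_idx == target_layer:
        return alpha * hidden
    return hidden

# Toy hidden state of shape (sequence_length, hidden_dim).
h = np.ones((4, 8))
h_enhanced = amplify_hidden(h, target_layer)
```

In a real model this scaling would be applied inside the forward pass (e.g. via a hook on the chosen transformer block), so that the enriched representation propagates to the final layer.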