🤖 AI Summary
Large language models (LLMs) are susceptible to irrelevant or noisy context in inputs, degrading generation quality and increasing computational overhead—a pervasive issue in RAG, table question answering, and in-context learning. This paper proposes a fine-tuning-free early-noise filtering mechanism: a lightweight linear probe is trained on the LLM’s early-layer hidden states to dynamically assess the importance of text chunks, so that low-value segments can be discarded during early inference stages. The key contribution is the empirical finding that LLMs implicitly encode the information value of input text prior to generation, which enables an efficient, general-purpose noise detection and filtering strategy without model fine-tuning. Experiments across multiple LLMs and diverse short- and long-context tasks demonstrate consistent accuracy gains, alongside reductions in KV cache usage and computational cost, while sharpening model focus on salient information.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of natural language processing tasks. However, they are often distracted by irrelevant or noisy context in input sequences, which degrades output quality. This problem affects both long- and short-context scenarios, such as retrieval-augmented generation, table question-answering, and in-context learning. We reveal that LLMs can implicitly identify whether input sequences contain useful information at early layers, prior to token generation. Leveraging this insight, we introduce Early Noise Dropping (END), a novel approach that mitigates this issue without requiring fine-tuning of the LLMs. END segments input sequences into chunks and employs a linear prober on the early layers of LLMs to differentiate between informative and noisy chunks. By discarding noisy chunks early in the process, END preserves critical information, reduces distraction, and lowers computational overhead. Extensive experiments demonstrate that END significantly improves both performance and efficiency across different LLMs on multiple evaluation datasets. Furthermore, by investigating LLMs' implicit understanding of the input with the prober, this work also deepens understanding of how LLMs reason over context internally.
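The pipeline described above — segment the input into chunks, score each chunk with a linear probe over early-layer hidden states, and drop low-scoring chunks before the remaining layers run — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hidden-state dimension, the mean-pooling step, the probe weights, and the 0.5 threshold are all assumptions made for the sketch (in END the probe would be trained on hidden states labeled informative vs. noisy, and filtering would happen inside the model's forward pass).

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 16  # hypothetical width of the probed early-layer hidden states

# Hypothetical probe parameters; a real probe would be trained on
# early-layer hidden states of chunks labeled informative vs. noisy.
w = rng.normal(size=HIDDEN)
b = 0.0

def probe_score(chunk_hidden: np.ndarray) -> float:
    """Mean-pool a chunk's early-layer token states, then apply the linear probe."""
    pooled = chunk_hidden.mean(axis=0)      # (HIDDEN,)
    logit = pooled @ w + b
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid -> "informativeness" score

def filter_chunks(chunks, hidden_states, threshold=0.5):
    """Keep only chunks whose probe score exceeds the threshold."""
    return [c for c, h in zip(chunks, hidden_states)
            if probe_score(h) > threshold]

# Toy example: three chunks, each with 4 tokens of early-layer states.
chunks = ["relevant passage", "noise A", "noise B"]
states = [rng.normal(size=(4, HIDDEN)) for _ in chunks]
kept = filter_chunks(chunks, states)
print(kept)
```

Only the surviving chunks would be processed by the remaining transformer layers, which is where the KV-cache and compute savings come from: dropped chunks never occupy cache entries beyond the early layers used for probing.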