Short-Context Dominance: How Much Local Context Does Natural Language Actually Need?

📅 2025-12-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a “short-context dominance” phenomenon in natural language: approximately 75–80% of generated tokens can be accurately predicted using only the last 96 context tokens. To detect and strengthen modeling of long-range dependencies, we propose Distribution-aware Minimum Context Length (DaMCL) as a computable proxy metric for context-length requirements. We further design a detection–enhancement decoding framework compatible with non-greedy sampling, which dynamically mitigates models’ intrinsic bias toward short contexts. Crucially, we leverage large language models as statistical oracles to infer context-length demands without human annotation. Extensive multi-task and multi-model experiments demonstrate significant improvements in generation quality for long-range dependent tokens and downstream question-answering performance. Our approach establishes a new paradigm for optimizing context efficiency and enhancing long-range reasoning capabilities in language modeling.

📝 Abstract
We investigate the short-context dominance hypothesis: that for most sequences, a small local prefix suffices to predict their next tokens. Using large language models as statistical oracles, we measure the minimum context length (MCL) needed to reproduce accurate full-context predictions across datasets with sequences of varying lengths. For sequences of 1–7k tokens from long-context documents, we consistently find that 75–80% require at most the last 96 tokens. Given the dominance of short-context tokens, we then ask whether it is possible to detect challenging long-context sequences for which a short local prefix does not suffice for prediction. We introduce a practical proxy for MCL, called Distributionally Aware MCL (DaMCL), that does not require knowledge of the actual next token and is compatible with sampling strategies beyond greedy decoding. Our experiments validate that simple thresholding of the metric defining DaMCL achieves high performance in detecting long- vs. short-context sequences. Finally, to counter the bias that short-context dominance induces in LLM output distributions, we develop an intuitive decoding algorithm that leverages our detector to identify and boost tokens that are long-range-relevant. Across Q&A tasks and model architectures, we confirm that mitigating the bias improves performance.
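The MCL measurement described above can be sketched as follows: find the smallest context suffix whose next-token prediction agrees with the full-context prediction. This is a minimal illustration, not the paper's implementation; the function names are hypothetical, and a toy bigram predictor stands in for the LLM oracle.

```python
# Toy stand-in for the LLM oracle: a bigram next-token predictor.
def make_bigram_oracle(corpus):
    table = {}
    for a, b in zip(corpus, corpus[1:]):
        table.setdefault(a, {}).setdefault(b, 0)
        table[a][b] += 1

    def predict_next(tokens):
        # Predict the most frequent successor of the last token.
        counts = table.get(tokens[-1], {})
        return max(counts, key=counts.get) if counts else None

    return predict_next


def minimum_context_length(predict_next, context, lengths=(1, 2, 4, 8, 16, 32, 64, 96)):
    """Smallest suffix length whose prediction matches the full-context one.

    `lengths` is an illustrative grid of candidate context sizes; the paper's
    exact search procedure is not reproduced here.
    """
    full_prediction = predict_next(context)
    for k in lengths:
        if k >= len(context):
            break
        if predict_next(context[-k:]) == full_prediction:
            return k
    return len(context)
```

With a real LLM oracle, `predict_next` would return the model's greedy next token for the given (truncated) prefix; "short-context dominance" corresponds to most positions having a small MCL under this measurement.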
Problem

Research questions and friction points this paper is trying to address.

Investigates if small local context suffices for next-token prediction in sequences
Develops a method to detect sequences needing long-range context for prediction
Mitigates bias in LLM outputs caused by short-context dominance to improve performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Measure minimum context length for token prediction
Introduce Distributionally Aware MCL as proxy metric
Develop decoding algorithm to boost long-range tokens
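The detect-and-boost idea in the last two bullets can be sketched with distributions over next tokens: flag a position as long-range when the short-context output distribution diverges from the full-context one, then upweight tokens that the short context under-predicts. This is a speculative sketch under assumed choices (KL divergence as the metric, a fixed threshold, an exponential reweighting with strength `alpha`); the paper's actual DaMCL metric and decoding rule may differ.

```python
import math


def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) over next-token distributions given as {token: prob} dicts."""
    return sum(pi * math.log((pi + eps) / (q.get(t, 0.0) + eps))
               for t, pi in p.items())


def is_long_range(full_dist, short_dist, threshold=0.5):
    """Flag a position as long-range if the short-context distribution
    diverges from the full-context one (threshold is an assumed value)."""
    return kl_divergence(full_dist, short_dist) > threshold


def boost_long_range(full_dist, short_dist, alpha=1.0):
    """Upweight tokens the short context under-predicts, then renormalize."""
    scores = {t: p * math.exp(alpha * (p - short_dist.get(t, 0.0)))
              for t, p in full_dist.items()}
    z = sum(scores.values())
    return {t: s / z for t, s in scores.items()}
```

In a real decoder, `full_dist` and `short_dist` would come from two forward passes (full prefix vs. truncated prefix), and the boosted distribution would be sampled from only at positions the detector flags, leaving short-context positions untouched.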