DySCO: Dynamic Attention-Scaling Decoding for Long-Context LMs

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation of language models in long-context reasoning caused by attention dispersion. To mitigate this issue, the authors propose DySCO, a training-free decoding algorithm that dynamically identifies and amplifies attention weights on task-relevant tokens during generation, thereby enhancing long-context comprehension. DySCO leverages retrieval heads, a subset of attention heads specialized for long-context retrieval, combined with a dynamic reweighting mechanism that enables plug-and-play enhancement of off-the-shelf models while offering interpretability. Evaluated on benchmarks such as MRCR and LongBenchV2, DySCO achieves up to a 25% relative performance improvement at a context length of 128K, with only minimal additional computational overhead.

📝 Abstract
Understanding and reasoning over long contexts is a crucial capability for language models (LMs). Although recent models support increasingly long context windows, their accuracy often deteriorates as input length grows. In practice, models often struggle to keep attention aligned with the most relevant context throughout decoding. In this work, we propose DySCO, a novel decoding algorithm for improving long-context reasoning. DySCO leverages retrieval heads (a subset of attention heads specialized for long-context retrieval) to identify task-relevant tokens at each decoding step and explicitly up-weight them. By doing so, DySCO dynamically adjusts attention during generation to better utilize relevant context. The method is training-free and can be applied directly to any off-the-shelf LM. Across multiple instruction-tuned and reasoning models, DySCO consistently improves performance on challenging long-context reasoning benchmarks, yielding relative gains of up to 25% on MRCR and LongBenchV2 at 128K context length with modest additional compute. Further analysis highlights the importance of both dynamic attention rescaling and retrieval-head-guided selection for the effectiveness of the method, while providing interpretability insights into decoding-time attention behavior. Our code is available at https://github.com/princeton-pli/DySCO.
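The decoding-time mechanism the abstract describes can be sketched in a few lines: at each step, a designated retrieval head's scores pick out task-relevant context positions, whose attention weights are then amplified and renormalized. This is a minimal illustrative sketch, not the paper's implementation; the function names, the fixed top-k selection rule, and the single amplification factor `alpha` are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def rescaled_attention(scores, retrieval_scores, k=4, alpha=2.0):
    """Retrieval-head-guided attention up-weighting (illustrative sketch).

    scores           -- raw attention logits for the current decoding step, shape (ctx_len,)
    retrieval_scores -- per-token relevance scores from a hypothetical retrieval head
    k                -- number of context tokens treated as task-relevant (assumption)
    alpha            -- amplification factor applied to their attention mass (assumption)
    """
    weights = softmax(scores)
    # Select the k context positions the retrieval head scores highest.
    relevant = np.argsort(retrieval_scores)[-k:]
    # Amplify attention mass on those positions, then renormalize
    # so the result is still a valid probability distribution.
    weights = weights.copy()
    weights[relevant] *= alpha
    return weights / weights.sum()

rng = np.random.default_rng(0)
logits = rng.normal(size=16)
ret = rng.normal(size=16)
w = rescaled_attention(logits, ret, k=4, alpha=2.0)
print(round(float(w.sum()), 6))  # prints 1.0 -- weights remain normalized
```

Because the renormalizer is strictly smaller than `alpha`, the selected positions always end up with more attention mass than in the plain softmax, which is the intended up-weighting effect.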
Problem

Research questions and friction points this paper is trying to address.

long-context reasoning
attention alignment
language models
decoding
context length
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Attention Scaling
Retrieval Heads
Long-Context Reasoning
Training-Free Decoding
Attention Rescaling
Xi Ye
Princeton University
Natural Language Processing
Wuwei Zhang
Princeton Language and Intelligence, Princeton University
Fangcong Yin
Department of Computer Science, New York University
Howard Yen
Princeton University
Natural Language Processing
Danqi Chen
Princeton University
Natural Language Processing, Machine Learning