On the Nature of Attention Sink that Shapes Decoding Strategy in MLLMs

📅 2026-03-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the semantic nature of attention sinks in multimodal large language models (MLLMs) and their impact on decoding. The authors propose OutRo, a lightweight decoding strategy that enhances contextual representations during inference by aligning non-sink features with the sink representation in the embedding space and by relaxing the causal attention constraint for the sink token, without requiring additional forward passes or explicit attention-map computation. Incurring only a 1.1× decoding overhead, OutRo consistently improves performance across seven video question-answering benchmarks, demonstrating strong generalization while maintaining compatibility with mainstream architectures.

📝 Abstract
Large language models and their multimodal extensions have achieved remarkable success across diverse tasks, yet the internal mechanisms that govern their reasoning behaviour remain partially understood. In particular, the attention sink, a token that attracts disproportionate attention mass, has been observed in transformer architectures, but its role is still unclear. Our goal is to understand what attention sinks represent and how they shape model behaviour during inference, rather than considering them as incidental artifacts. Through our analysis, we find that attention sink representations encode structured global information that influences the decoding process. Building on our findings, we introduce OutRo, a lightweight inference-time strategy that leverages the sink token to enhance contextual representations: (i) non-sink token representations are aligned with the sink representation in the feature space; and (ii) the sink token is allowed to attend beyond the causal constraint, facilitating information exchange with non-sink tokens. This design enhances the reasoning process without requiring additional forward passes or access to attention maps. Based on extensive experiments, OutRo consistently improves performance across representative MLLMs on seven video QA benchmarks and demonstrates strong generalisation, while incurring only a 1.1x decoding overhead.
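The two inference-time operations described in the abstract can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's actual implementation: the function names, the mixing coefficient `alpha`, and the assumption that the sink is the first token in the sequence are all hypothetical.

```python
import numpy as np

def align_with_sink(hidden, sink_idx=0, alpha=0.1):
    """Operation (i): blend each non-sink token representation toward
    the sink representation in feature space.
    `alpha` is a hypothetical mixing coefficient (not from the paper)."""
    out = hidden.copy()
    sink = hidden[sink_idx]
    non_sink = np.ones(len(hidden), dtype=bool)
    non_sink[sink_idx] = False
    out[non_sink] = (1 - alpha) * hidden[non_sink] + alpha * sink
    return out

def relaxed_causal_mask(seq_len, sink_idx=0):
    """Operation (ii): standard lower-triangular causal mask, except the
    sink row is fully unmasked so the sink token can attend beyond the
    causal constraint and exchange information with all tokens."""
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    mask[sink_idx, :] = True  # sink attends to every position
    return mask
```

Note that both operations act only on hidden states and the attention mask of the current forward pass, which is consistent with the claim that no extra forward passes or explicit attention maps are required.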
Problem

Research questions and friction points this paper is trying to address.

attention sink
decoding strategy
multimodal large language models
transformer architectures
inference behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

attention sink
OutRo
inference-time strategy
multimodal large language models
decoding enhancement