Learning When Not to Attend Globally

📅 2025-12-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the computational redundancy and inefficiency of global attention in large language models (LLMs), this paper proposes All-or-Here Attention (AHA), a dynamic attention mechanism that determines—per token—whether global contextual modeling is necessary; otherwise, it resorts to local sliding-window attention. Our key contributions are: (i) the first empirical identification of a long-tail distribution in contextual dependency lengths across tokens; and (ii) the design of head-wise binary routing gates that enable fine-grained, on-demand switching between full attention and windowed attention (window size = 256). Experiments demonstrate that AHA replaces 93% of full-attention computations with no performance degradation, maintains state-of-the-art results across multiple long-context benchmarks (e.g., Needle-in-a-Haystack, LongBench), and significantly improves inference efficiency and scalability.

Technology Category

Application Category

📝 Abstract
When reading books, humans focus primarily on the current page, flipping back to recap prior context only when necessary. Similarly, we demonstrate that Large Language Models (LLMs) can learn to dynamically determine when to attend to global context. We propose All-or-Here Attention (AHA), which utilizes a binary router per attention head to dynamically toggle between full attention and local sliding window attention for each token. Our results indicate that with a window size of 256 tokens, up to 93% of the original full attention operations can be replaced by sliding window attention without performance loss. Furthermore, by evaluating AHA across various window sizes, we identify a long-tail distribution in context dependency, where the necessity for full attention decays rapidly as the local window expands. By decoupling local processing from global access, AHA reveals that full attention is largely redundant, and that efficient inference requires only on-demand access to the global context.
Problem

Research questions and friction points this paper is trying to address.

Dynamic global context attention in LLMs
Replace full attention with sliding window
Reduce redundancy in attention operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic binary router per attention head
Toggle between full and local sliding window attention
On-demand global context access reduces redundancy
X
Xuan Luo
Department of Computer Science, UC Santa Barbara
K
Kailai Zhang
Department of Computer Science, UC Santa Barbara
Xifeng Yan
Xifeng Yan
Professor, Computer Science, Univ. of California at Santa Barbara
Artificial IntelligenceData Mining