🤖 AI Summary
Self-attention mechanisms often allocate excessive computational resources to redundant or noisy contextual tokens, resulting in computational inefficiency. Existing differential attention methods rely on a symmetric split of heads between signal and noise groups, limiting modeling flexibility and scalability. This paper proposes Grouped Differential Attention (GDA), which enables fine-grained signal-noise separation via asymmetric head grouping, assigning more heads to the signal group and fewer to the noise group. GDA introduces three key innovations: (1) an asymmetric grouping mechanism, (2) a selective expansion strategy, and (3) group-differentiated head growth. These jointly enhance noise suppression while preserving signal fidelity. Extensive large-scale pretraining experiments demonstrate that a moderate head imbalance (e.g., a 3:1 signal-to-noise head ratio) significantly improves generalization and training stability, with negligible additional computational overhead. Moreover, GDA inherently supports scalable model expansion without architectural modification.
📝 Abstract
The self-attention mechanism, while foundational to modern Transformer architectures, suffers from a critical inefficiency: it frequently allocates substantial attention to redundant or noisy context. Differential Attention addresses this by subtracting a noise attention map from a signal attention map, but the balanced head allocation it requires imposes rigid constraints on representational flexibility and scalability.
To overcome this, we propose Grouped Differential Attention (GDA), a novel approach that introduces unbalanced head allocation between signal-preserving and noise-control groups. GDA significantly enhances signal focus by strategically assigning more heads to signal extraction and fewer to noise control, stabilizing the latter through controlled repetition (akin to grouped-query attention, GQA). This design achieves stronger signal fidelity with minimal computational overhead. We further extend this principle to group-differentiated growth, a scalable strategy that selectively replicates only the signal-focused heads, thereby ensuring efficient capacity expansion.
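To make the asymmetric design concrete, the following is a minimal NumPy sketch of the attention-map computation described above: the signal group has more heads than the noise group, the smaller noise group is repeated GQA-style to match, and the noise map is subtracted with a scaling factor. Tensor layouts, the function name, and the scalar `lam` are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def grouped_differential_attention(Q1, K1, Q2, K2, V, lam=0.5):
    """Sketch of GDA with asymmetric head groups (layout is an assumption).

    Q1, K1: (h_signal, T, d) projections for the signal group.
    Q2, K2: (h_noise, T, d) projections for the smaller noise group.
    V:      (h_signal, T, d_v) values, one per signal head.
    """
    h_signal, T, d = Q1.shape
    h_noise = Q2.shape[0]
    assert h_signal % h_noise == 0  # e.g. a 3:1 signal-to-noise head ratio
    rep = h_signal // h_noise

    # Per-group scaled dot-product attention maps.
    A_signal = softmax(Q1 @ K1.transpose(0, 2, 1) / np.sqrt(d))
    A_noise = softmax(Q2 @ K2.transpose(0, 2, 1) / np.sqrt(d))

    # Repeat the few noise heads across the signal heads (akin to GQA),
    # then subtract the noise map from the signal map.
    A_noise = np.repeat(A_noise, rep, axis=0)
    A = A_signal - lam * A_noise
    return A @ V
```

Because the noise group is shared across several signal heads, its cost grows with `h_noise` rather than `h_signal`, which is where the "minimal computational overhead" claim comes from.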
Through large-scale pretraining and continual training experiments, we demonstrate that moderate imbalance ratios in GDA yield substantial improvements in generalization and stability compared to symmetric baselines. Our results collectively establish that ratio-aware head allocation and selective expansion offer an effective and practical path toward designing scalable, computation-efficient Transformer architectures.
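The group-differentiated growth strategy can likewise be sketched in a few lines: when expanding capacity, only the signal group gains heads while the noise group is left untouched. The function name, weight layout, and duplicate-and-perturb initialization below are assumptions for illustration; the paper may initialize new heads differently.

```python
import numpy as np

def expand_signal_group(W_signal, W_noise, added_heads, rng=None):
    """Sketch of group-differentiated growth (layout is an assumption).

    W_signal: (h_signal, d_model, d_head) per-head signal projections.
    W_noise:  (h_noise, d_model, d_head) noise projections, kept fixed.
    New signal heads are seeded from existing ones with a small
    perturbation (an assumed initialization scheme).
    """
    rng = rng or np.random.default_rng(0)
    src = rng.integers(0, W_signal.shape[0], size=added_heads)
    new = W_signal[src] + 0.01 * rng.standard_normal(
        (added_heads,) + W_signal.shape[1:]
    )
    # Only the signal group grows; the noise group is returned unchanged.
    return np.concatenate([W_signal, new], axis=0), W_noise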