🤖 AI Summary
This work investigates how multi-head latent attention (MLA) affects internal representational capacity during Transformer pretraining. Methodologically, we leverage random matrix theory—specifically the Marchenko–Pastur law—to diagnose the spectral distribution of the $W_Q W_K^\top$ Gram matrix, enabling systematic comparison among standard multi-head attention (MHA), MLA-PreRoPE, and our newly proposed MLA-Decoupled architecture. We identify a layer-wise locality in capacity bottlenecks and reveal that compression and rotational operations critically govern spectral stability. MLA-Decoupled explicitly separates positional rotation from content modeling, thereby effectively suppressing outlier eigenvalues, preventing rank collapse, and mitigating spectral fragmentation—maintaining broad, contiguous spectral support across all layers. Empirically, this design significantly improves representation balance and training stability. Our analysis provides both theoretical grounding and a structural paradigm for designing efficient attention mechanisms.
📝 Abstract
In this work, we study how multi-head latent attention (MLA), a popular strategy for compressing key/value memory, affects a transformer's internal capacity during pretraining. Using a lightweight suite of Marchenko–Pastur (MP) diagnostics, we analyze the spectrum of the $W_{Q}W_{K}^\top$ Gram matrix throughout training, comparing three variants: the standard multi-head attention (MHA) baseline, MLA-PreRoPE with rotary applied before compression, and MLA-Decoupled, which shares a single rotary sub-vector across all heads. Our random matrix analysis reveals \textbf{three key findings:} \textbf{i)} capacity bottlenecks emerge locally: both MHA and MLA-PreRoPE exhibit sharp, early spikes in specific layers that persist and propagate, disrupting the balance between bulk and outlier directions; \textbf{ii)} these spikes coincide with rank collapse, concentrating the model's expressivity into narrow subspaces; \textbf{iii)} only the decoupled variant prevents this cascade, maintaining broad spectral support and suppressing outlier formation across layers. These results underscore that \emph{how} rotary embeddings are applied is just as critical as \emph{where} compression occurs. Sharing rotary components across heads mitigates spectral fragmentation and preserves representational capacity.
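The MP diagnostic described above can be sketched in a few lines of NumPy: compute the eigen-spectrum of the normalized $W_{Q}W_{K}^\top$ Gram matrix, estimate the Marchenko–Pastur bulk edge from the empirical entry variance, and count eigenvalues escaping the bulk (outliers) alongside an entropy-based effective rank. Note this is a minimal illustrative sketch, not the paper's exact protocol; the normalization, the variance estimate `sigma2`, and the function names are assumptions for exposition.

```python
import numpy as np

def mp_bulk_edges(d, n, sigma2=1.0):
    """Marchenko-Pastur bulk edges for eigenvalues of X X^T / n, where X is
    d x n with i.i.d. entries of variance sigma2 (assumes d <= n)."""
    q = d / n  # aspect ratio
    lam_minus = sigma2 * (1.0 - np.sqrt(q)) ** 2
    lam_plus = sigma2 * (1.0 + np.sqrt(q)) ** 2
    return lam_minus, lam_plus

def spectral_diagnostics(W_Q, W_K):
    """Spectrum of the W_Q W_K^T Gram matrix: eigenvalues, number of
    outliers above the MP bulk edge, and entropy-based effective rank."""
    M = W_Q @ W_K.T
    d, n = M.shape
    G = (M @ M.T) / n                      # normalized Gram matrix
    eigs = np.clip(np.linalg.eigvalsh(G), 0.0, None)  # guard tiny negatives
    # Illustrative assumption: fit the MP bulk with the empirical variance of M.
    _, lam_plus = mp_bulk_edges(d, n, sigma2=np.var(M))
    outliers = int(np.sum(eigs > lam_plus))
    p = eigs / eigs.sum()                  # normalized spectral distribution
    eff_rank = float(np.exp(-np.sum(p * np.log(p + 1e-12))))
    return eigs, outliers, eff_rank

# Usage on random (untrained-like) projections: most mass should sit in the bulk.
rng = np.random.default_rng(0)
W_Q = rng.standard_normal((64, 64))
W_K = rng.standard_normal((64, 64))
eigs, outliers, eff_rank = spectral_diagnostics(W_Q, W_K)
```

Tracking `outliers` and `eff_rank` per layer over training steps is one way to surface the layer-local spikes and rank collapse the abstract describes: a growing outlier count with a shrinking effective rank would signal expressivity concentrating into narrow subspaces.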