🤖 AI Summary
Language models exhibit position-based preferences in input processing that are unrelated to semantics, yet how these biases evolve across network layers and how they depend on task complexity remain unclear. This work proposes an attribution-based analytical framework that combines a sliding-window approach with layer conductance to quantify, at fine granularity, the importance each layer of a short-context language model assigns to different input positions, yielding layer-wise positional importance profiles. The study reveals, for the first time, systematic patterns in how positional bias evolves across layers: the profiles are architecture-specific, stable across inputs, and invariant to word shuffling; recency bias intensifies and primacy bias diminishes with increasing depth; and early layers clearly differentiate content words from function words, a distinction largely lost in later layers.
📝 Abstract
Language models often show a preference for using information from specific positions in the input regardless of semantic relevance. While positional bias has been studied in various contexts, from attention sinks to task-performance degradation in long-context settings, prior work has not established how these biases evolve across individual layers and input positions, or how they vary independently of task complexity. We introduce an attribution-based framework to analyze positional effects in short-context language modeling. Using layer conductance with a sliding-window approach, we quantify how each layer distributes importance across input positions, yielding layer-wise positional importance profiles. We find that these profiles are architecture-specific, stable across inputs, and invariant to lexical scrambling. Characterizing these profiles, we observe a prominent recency bias that strengthens with depth and a subtle primacy bias that diminishes with depth. Beyond positional structure, we also show that early layers preferentially weight content words over function words across all positions, while later layers lose this word-type differentiation.
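To make the sliding-window aggregation step concrete, here is a minimal sketch of how per-window attribution scores (e.g. produced by layer conductance for one layer) could be combined into a single positional importance profile. The function name, the dictionary input format, and the choice of averaging overlapping windows are illustrative assumptions, not the paper's exact procedure.

```python
def positional_profile(window_scores, seq_len):
    """Average overlapping per-window attribution scores into one score per position.

    window_scores: dict mapping a window's start index to a list of attribution
    scores, one per token inside that window (assumed to come from an attribution
    method such as layer conductance). Positions covered by several windows are
    averaged; uncovered positions get 0.0.
    """
    totals = [0.0] * seq_len
    counts = [0] * seq_len
    for start, scores in window_scores.items():
        for offset, score in enumerate(scores):
            totals[start + offset] += score
            counts[start + offset] += 1
    return [t / c if c else 0.0 for t, c in zip(totals, counts)]

# Toy example: two overlapping windows of length 2 over a 3-token input.
# Position 1 is covered by both windows, so it receives the mean of its two scores.
profile = positional_profile({0: [1.0, 3.0], 1: [5.0, 7.0]}, seq_len=3)
```

Repeating this per layer would yield the layer-wise positional importance profiles the paper analyzes; plotting each layer's profile against position is what exposes trends such as growing recency bias with depth.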