Layer-wise Positional Bias in Short-Context Language Modeling

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Language models exhibit position-based preferences in input processing that are unrelated to semantics, yet the mechanisms governing how these preferences evolve across network layers, and how they depend on task complexity, remain unclear. This work proposes an attribution-based analytical framework that combines a sliding-window approach with layer conductance to quantify, at fine granularity, the importance each layer of a short-context language model assigns to different input positions, yielding layer-wise positional importance profiles. The study reveals, for the first time, systematic patterns in how positional bias evolves across layers: the profiles are architecture-specific, stable across inputs, and invariant to word shuffling; recency bias intensifies with depth while primacy bias diminishes; and early layers clearly differentiate content words from function words, a distinction largely lost in later layers.

📝 Abstract
Language models often show a preference for using information from specific positions in the input regardless of semantic relevance. While positional bias has been studied in various contexts, from attention sinks to task performance degradation in long-context settings, prior work has not established how these biases evolve across individual layers and input positions, or how they vary independent of task complexity. We introduce an attribution-based framework to analyze positional effects in short-context language modeling. Using layer conductance with a sliding-window approach, we quantify how each layer distributes importance across input positions, yielding layer-wise positional importance profiles. We find that these profiles are architecture-specific, stable across inputs, and invariant to lexical scrambling. Characterizing these profiles, we find prominent recency bias that increases with depth and subtle primacy bias that diminishes through model depth. Beyond positional structure, we also show that early layers preferentially weight content words over function words across all positions, while later layers lose this word-type differentiation.
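The profile construction described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it assumes per-token attribution scores for each layer have already been computed by some attribution method (the paper uses layer conductance), and shows only the sliding-window aggregation into normalized layer-wise positional importance profiles. All variable names and the toy attribution values are hypothetical.

```python
# Hedged sketch (not the paper's implementation): turn per-token attribution
# scores for one layer into a normalized positional importance profile using
# a sliding window, as the abstract describes.

def positional_profile(attributions, window=3):
    """Average absolute attribution over a sliding window of positions,
    then normalize so the profile sums to 1."""
    n = len(attributions)
    scores = [
        sum(abs(a) for a in attributions[i:i + window]) / window
        for i in range(n - window + 1)
    ]
    total = sum(scores)
    return [s / total for s in scores]

# Toy attributions for a 10-token input (hypothetical numbers, chosen to
# mimic the reported trends): an early layer with mild primacy bias and a
# late layer with strong recency bias.
early_layer = [0.9, 0.8, 0.6, 0.5, 0.5, 0.5, 0.5, 0.5, 0.6, 0.6]
late_layer  = [0.2, 0.2, 0.2, 0.3, 0.3, 0.4, 0.6, 0.9, 1.2, 1.5]

early_profile = positional_profile(early_layer)
late_profile = positional_profile(late_layer)

# Primacy: the first window carries more mass than the last; recency: vice versa.
print(early_profile[0] > early_profile[-1])  # True (primacy in the early layer)
print(late_profile[-1] > late_profile[0])    # True (recency in the late layer)
```

Comparing such profiles across layers of the same model is what surfaces the depth trends the paper reports (recency increasing, primacy fading).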
Problem

Research questions and friction points this paper is trying to address.

positional bias
layer-wise analysis
short-context language modeling
recency bias
primacy bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

layer-wise positional bias
attribution-based analysis
layer conductance
recency bias
short-context language modeling
Maryam Rahimi
Tehran Institute for Advanced Studies, Khatam University, Iran
Mahdi Nouri
School of Electrical and Computer Engineering, University of Tehran, Iran
Yadollah Yaghoobzadeh
University of Tehran / TeIAS
Natural Language Processing · Deep Learning