🤖 AI Summary
Existing LLMs employ fixed, linear positional encodings that lack semantic awareness; viewed through Cognitive Load Theory, this imposes extraneous cognitive load and consumes limited working-memory resources, thereby hindering in-context learning. This work integrates cognitive load theory into positional encoding design, proposing a differentiable context repositioning mechanism: it replaces predefined integer indices with a learnable mapping function ($f_\phi$) that dynamically assigns semantically aware, non-linear positions in a dense space. The authors continually pre-train OLMo-2 1B with this mechanism and validate it via attention analysis and visualization of the learned positional space. Experiments demonstrate substantial performance gains on noisy-context, structured-data, and long-context tasks; notably, the method raises attention to distant but critical tokens while remaining competitive on short-context benchmarks.
📝 Abstract
In-context learning is fundamental to modern Large Language Models (LLMs); however, prevailing architectures impose a rigid and fixed contextual structure by assigning linear or constant positional indices. Drawing on Cognitive Load Theory (CLT), we argue that this uninformative structure increases extraneous cognitive load, consuming finite working memory capacity that should be allocated to deep reasoning and attention allocation. To address this, we propose RePo, a novel mechanism that reduces extraneous load via context re-positioning. Unlike standard approaches, RePo utilizes a differentiable module, $f_\phi$, to assign token positions that capture contextual dependencies, rather than relying on a pre-defined integer range. By continually pre-training on the OLMo-2 1B backbone, we demonstrate that RePo significantly enhances performance on tasks involving noisy contexts, structured data, and longer context lengths, while maintaining competitive performance on general short-context tasks. Detailed analysis reveals that RePo successfully allocates higher attention to distant but relevant information, assigns positions in a dense, non-linear space, and captures the intrinsic structure of the input context. Our code is available at https://github.com/SakanaAI/repo.
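To make the idea concrete, below is a minimal PyTorch sketch of what a differentiable repositioning module in the spirit of $f_\phi$ could look like. The class name `ContextRepositioner`, the MLP-plus-cumulative-sum design, and the rescaling step are illustrative assumptions: the abstract only states that $f_\phi$ is a differentiable module producing dense, non-linear, context-dependent positions, so refer to the linked repository for the actual implementation.

```python
# Illustrative sketch (not the authors' implementation): a differentiable
# module that maps token representations to continuous, content-dependent
# positions instead of fixed integer indices.
import torch
import torch.nn as nn


class ContextRepositioner(nn.Module):
    """Assigns one continuous position per token from its hidden state."""

    def __init__(self, hidden_size: int, max_position: float = 4096.0):
        super().__init__()
        # Hypothetical scoring network; the paper does not specify f_phi's architecture.
        self.score = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, 1),
        )
        self.max_position = max_position

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        # Non-negative per-token increments keep positions monotonically
        # increasing, while their sizes depend on content, so the resulting
        # positions are non-linear and can be dense where tokens are closely related.
        increments = torch.nn.functional.softplus(self.score(hidden_states)).squeeze(-1)
        positions = torch.cumsum(increments, dim=-1)  # (batch, seq_len)
        # Rescale into a bounded range so downstream positional embeddings
        # see values comparable to ordinary integer indices.
        positions = positions / positions[..., -1:].clamp_min(1e-6) * self.max_position
        return positions


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)            # (batch, seq_len, hidden)
    repo = ContextRepositioner(hidden_size=64)
    pos = repo(x)                          # continuous, content-dependent positions
    print(pos.shape)                       # torch.Size([2, 16])
```

Continuous positions produced this way could, in principle, replace the usual `torch.arange(seq_len)` indices fed to a rotary-style positional embedding, since rotary embeddings accept non-integer position values.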