RePo: Language Models with Context Re-Positioning

📅 2025-12-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLMs employ fixed linear positional encodings that lack semantic awareness. This imposes extraneous cognitive load and consumes limited working memory resources, thereby hindering in-context learning. This work pioneers the integration of cognitive load theory into positional encoding design, proposing a differentiable context re-positioning mechanism: it replaces predefined integer indices with a learnable mapping function ($f_\phi$) that dynamically generates semantics-aware, non-linear, and dense positional embeddings. The authors conduct continued pretraining on OLMo-2 1B and validate the mechanism via attention analysis and positional-space visualization. Experiments demonstrate substantial gains on noisy-context, structured-data, and long-context tasks; notably, the method enhances attention to distant critical tokens while remaining competitive on short-context benchmarks.

📝 Abstract
In-context learning is fundamental to modern Large Language Models (LLMs); however, prevailing architectures impose a rigid and fixed contextual structure by assigning linear or constant positional indices. Drawing on Cognitive Load Theory (CLT), we argue that this uninformative structure increases extraneous cognitive load, consuming finite working memory capacity that should be allocated to deep reasoning and attention allocation. To address this, we propose RePo, a novel mechanism that reduces extraneous load via context re-positioning. Unlike standard approaches, RePo utilizes a differentiable module, $f_\phi$, to assign token positions that capture contextual dependencies, rather than relying on a pre-defined integer range. By continually pre-training on the OLMo-2 1B backbone, we demonstrate that RePo significantly enhances performance on tasks involving noisy contexts, structured data, and longer contexts, while maintaining competitive performance on general short-context tasks. Detailed analysis reveals that RePo successfully allocates higher attention to distant but relevant information, assigns positions in a dense and non-linear space, and captures the intrinsic structure of the input context. Our code is available at https://github.com/SakanaAI/repo.
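The abstract's core idea, replacing fixed integer indices with positions produced by a learnable module $f_\phi$, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the two-layer MLP architecture, the softplus-plus-cumulative-sum parameterization, and all shapes are assumptions made here for clarity; the paper's actual $f_\phi$ and how its outputs feed the positional encoding may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # Strictly positive activation; adequate numerics for this small demo.
    return np.log1p(np.exp(x))

def repo_positions(h, W1, b1, W2, b2):
    """Map token representations h (batch, seq, d) to continuous positions.

    The two-layer MLP stands in for the paper's f_phi (its real
    architecture is not specified here). Each token gets a strictly
    positive increment, and a cumulative sum turns increments into
    positions, so ordering is preserved while spacing becomes
    content-dependent and non-linear rather than a fixed integer range.
    """
    z = np.tanh(h @ W1 + b1)            # (batch, seq, hidden)
    scores = (z @ W2 + b2).squeeze(-1)  # (batch, seq) per-token scores
    delta = softplus(scores)            # strictly positive increments
    return np.cumsum(delta, axis=-1)    # monotone, non-integer positions
```

In a full model, these continuous positions would replace the integer indices consumed by the positional encoding (e.g. as the angle inputs to a rotary encoding), letting the network learn to compress uninformative spans and spread out salient ones.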
Problem

Research questions and friction points this paper is trying to address.

How to reduce the extraneous cognitive load that fixed positional indices impose on LLMs
How to improve robustness on noisy contexts and structured data
How to allocate more attention to distant but relevant information
Innovation

Methods, ideas, or system contributions that make the work stand out.

RePo assigns token positions via a differentiable module, $f_\phi$, instead of fixed integer indices
Re-positioning reduces extraneous cognitive load during in-context learning
Improves performance on noisy, structured, and long-context inputs
Huayang Li
Nara Institute of Science and Technology
Natural Language Processing
Tianyu Zhao
Sakana AI, Japan
Richard Sproat
Sakana AI, Japan