Inner Loop Inference for Pretrained Transformers: Unlocking Latent Capabilities Without Training

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a training-free, inference-time inner-loop mechanism that further unlocks the reasoning potential of pretrained Transformers without retraining. By iteratively reapplying selected modules during inference, the method extends the refinement of internal representations, leveraging the residual architecture of Transformers to enable module reuse and continued optimization of the propagated latent state. Experiments show modest but consistent accuracy gains across multiple benchmarks, accompanied by smoother evolution of hidden states and sustained semantic refinement. These findings point to untapped iterative optimization capacity within frozen pretrained models, suggesting that their reasoning capabilities can be extended through dynamic inference strategies alone.

📝 Abstract
Deep Learning architectures, and in particular Transformers, are conventionally viewed as a composition of layers. Each layer is in fact often obtained as the sum of two contributions: a residual path that copies the input, and the output of a Transformer block. As a consequence, the inner representations (i.e., the inputs to these blocks) can be interpreted as iterative refinements of a propagated latent representation. Under this lens, many works suggest that the inner space is shared across layers, meaning that tokens can be decoded at early stages. Mechanistic interpretability goes further by conjecturing that some layers act as refinement layers. Following this path, we propose inference-time inner looping, which prolongs refinement in pretrained off-the-shelf language models by repeatedly re-applying a selected block range. Across multiple benchmarks, inner looping yields modest but consistent accuracy improvements. Analyses of the resulting latent trajectories suggest more stable state evolution and continued semantic refinement. Overall, our results suggest that additional refinement can be obtained through simple test-time looping, extending computation in frozen pretrained models.
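The abstract's mechanism can be sketched in a few lines: run the frozen stack as usual, but repeat a chosen contiguous block range several times before continuing. The sketch below is a minimal illustration, not the paper's implementation; the toy `Block`, the chosen `loop_range`, and `n_loops` are all assumptions, and the paper's block-selection strategy is not reproduced here.

```python
# Hedged sketch of inference-time inner looping on a frozen model:
# re-apply blocks[lo:hi] n_loops times instead of once. The residual
# path (x + f(x)) is what makes re-application well-defined, since
# every block reads and writes the same latent space.
import torch
import torch.nn as nn

class Block(nn.Module):
    """Toy pre-norm residual block standing in for a Transformer block."""
    def __init__(self, d):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        # residual update: x_{l+1} = x_l + f(x_l)
        return x + self.mlp(self.norm(x))

@torch.no_grad()  # inference only; the model stays frozen
def inner_loop_forward(blocks, x, loop_range=(2, 4), n_loops=3):
    """Apply blocks in order, repeating blocks[lo:hi] n_loops times.
    n_loops=1 recovers the ordinary forward pass."""
    lo, hi = loop_range
    for blk in blocks[:lo]:
        x = blk(x)
    for _ in range(n_loops):          # the inner loop: prolonged refinement
        for blk in blocks[lo:hi]:
            x = blk(x)
    for blk in blocks[hi:]:
        x = blk(x)
    return x

blocks = nn.ModuleList(Block(16) for _ in range(6)).eval()
h = torch.randn(1, 8, 16)             # (batch, tokens, hidden)
out = inner_loop_forward(blocks, h, loop_range=(2, 4), n_loops=3)
print(out.shape)                      # hidden states keep their shape
```

Because looping only re-executes existing modules, it adds inference compute without adding parameters; setting `n_loops=1` reduces exactly to the standard forward pass.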
Problem

Research questions and friction points this paper is trying to address.

Inner Loop Inference
Pretrained Transformers
Latent Refinement
Zero-shot Inference
Frozen Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inner Loop Inference
Pretrained Transformers
Iterative Refinement
Residual Path
Frozen Models
Jonathan Lys
IMT Atlantique, Lab-STICC, UMR CNRS 6285, Brest, France
Vincent Gripon
IMT Atlantique and Lab-STICC
Deep Learning, Few-Shot Learning, Artificial Intelligence
Bastien Pasdeloup
IMT Atlantique
Signal processing on graphs
Lukas Mauch
Sony Europe B.V.
Machine Learning, Signal Processing
Fabien Cardinaux
Sony Europe Ltd., Stuttgart Technology Center, EUREC, Germany
Ghouthi Boukli Hacene
Sony Europe Ltd., Stuttgart Technology Center, EUREC, Germany