🤖 AI Summary
This work proposes LoopFormer, a novel architecture addressing the inflexibility of conventional looped Transformers with fixed iteration counts, which struggle to adapt to varying computational budgets during inference. LoopFormer is trained on variable-length recurrence trajectories and incorporates shortcut-consistency training alongside budget-conditioned modulation to ensure coherent and progressively refined representations across different inference depths. By integrating an elastic-depth design with trajectory alignment, the model dynamically adjusts its number of inference steps according to available computational resources. Experimental results demonstrate that LoopFormer maintains robust performance under stringent compute constraints on language modeling and reasoning tasks, while also enabling smooth performance gains as the budget increases.
📝 Abstract
Looped Transformers have emerged as an efficient and powerful class of models for reasoning in the language domain. Recent studies show that these models achieve strong performance on algorithmic and reasoning tasks, suggesting that looped architectures possess an inductive bias toward latent reasoning. However, prior approaches fix the number of loop iterations during training and inference, leaving open the question of whether these models can flexibly adapt their computational depth under variable compute budgets. We introduce LoopFormer, a looped Transformer trained on variable-length trajectories to enable budget-conditioned reasoning. Our core contribution is a shortcut-consistency training scheme that aligns trajectories of different lengths, ensuring that shorter loops yield informative representations while longer loops continue to refine them. LoopFormer conditions each loop on the current time and step size, enabling representations to evolve consistently across trajectories of varying length rather than drifting or stagnating. Empirically, LoopFormer demonstrates robust performance on language modeling and reasoning benchmarks even under aggressive compute constraints, while scaling gracefully with additional budget. These results show that looped Transformers are inherently suited for adaptive language modeling, opening a path toward controllable and budget-aware large language models.
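The core mechanism described above, a shared block applied repeatedly, conditioned on the current loop time and step size, with a consistency penalty tying short trajectories to longer ones, can be sketched in miniature. The toy numpy recurrence below is purely illustrative: the sinusoidal conditioning, the single `tanh` layer standing in for a Transformer block, and all function names are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy hidden size

# Shared weights, reused at every loop iteration (stand-in for one looped block).
W = rng.normal(0, 0.3, (D, D))
b = np.zeros(D)

def cond_embed(t, step):
    """Sinusoidal embedding of (loop time t, step size) that modulates the block,
    so representations can evolve consistently across trajectory lengths."""
    k = np.arange(D // 2)
    ang = t / (10.0 ** (2 * k / D)) + step
    return np.concatenate([np.sin(ang), np.cos(ang)])

def loop_step(h, t, step):
    """One recurrence of the shared block, conditioned on time and step size."""
    return np.tanh(h @ W + b + cond_embed(t, step))

def rollout(h, n_steps, step=1.0):
    """Apply the shared block n_steps times with a fixed step size."""
    t = 0.0
    for _ in range(n_steps):
        h = loop_step(h, t, step)
        t += step
    return h

h0 = rng.normal(size=D)
# Shortcut-consistency target: one coarse step of size 2 should land near
# two fine steps of size 1; during training this would be a penalty term.
h_short = rollout(h0, n_steps=1, step=2.0)
h_long = rollout(h0, n_steps=2, step=1.0)
consistency_loss = np.mean((h_short - h_long) ** 2)
```

At inference, one would simply pick `n_steps` (and the matching `step`) to fit the available budget; the consistency penalty is what makes the short and long rollouts land on aligned representations rather than drifting apart.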