Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses parameter redundancy and high computational overhead in large language models (LLMs) by proposing Mixture-of-Recursions (MoR), a paradigm that integrates parameter sharing with token-level adaptive computation. Built on the recursive Transformer architecture, MoR employs a shared stack of recursive layers, lightweight dynamic routers, per-token variable recursion depth, and selective KV cache sharing—enabling fine-grained customization of each token's computational path. MoR thus unifies parameter reuse and adaptive inference within a single framework. Experiments on models ranging from 135M to 1.7B parameters show consistent improvements: up to 2.1% lower perplexity, +1.8% average few-shot accuracy, and 1.9× higher inference throughput—establishing a new Pareto frontier between quality and efficiency. This advances efficient training and deployment of high-quality LLMs.

📝 Abstract
Scaling language models unlocks impressive capabilities, but the accompanying computational and memory demands make both training and deployment expensive. Existing efficiency efforts typically target either parameter sharing or adaptive computation, leaving open the question of how to attain both simultaneously. We introduce Mixture-of-Recursions (MoR), a unified framework that combines the two axes of efficiency inside a single Recursive Transformer. MoR reuses a shared stack of layers across recursion steps to achieve parameter efficiency, while lightweight routers enable adaptive token-level thinking by dynamically assigning different recursion depths to individual tokens. This allows MoR to focus quadratic attention computation only among tokens still active at a given recursion depth, further improving memory access efficiency by selectively caching only their key-value pairs. Beyond these core mechanisms, we also propose a KV sharing variant that reuses KV pairs from the first recursion, specifically designed to decrease prefill latency and memory footprint. Across model scales ranging from 135M to 1.7B parameters, MoR forms a new Pareto frontier: at equal training FLOPs and smaller model sizes, it significantly lowers validation perplexity and improves few-shot accuracy, while delivering higher throughput compared with vanilla and existing recursive baselines. These gains demonstrate that MoR is an effective path towards large-model quality without incurring large-model cost.
Problem

Research questions and friction points this paper is trying to address.

Attaining parameter sharing and adaptive computation simultaneously
Reducing the computational and memory costs of training and deploying language models
Dynamically assigning recursion depth at the token level
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines parameter sharing and adaptive computation in one framework
Uses lightweight routers to assign per-token recursion depths
Selectively caches KV pairs of still-active tokens for efficiency
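The routing idea above can be sketched in a few lines: a shared block is applied repeatedly, and a lightweight router decides per token whether to recurse again. This is a minimal NumPy illustration, not the paper's implementation; the residual block, the sigmoid router, and the 0.5 exit threshold are all illustrative assumptions standing in for the actual Transformer stack and routing rule.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8           # hidden size
n_tokens = 6
max_depth = 3   # maximum number of recursion steps

# One shared set of weights reused at every recursion step (parameter sharing).
W_block = rng.normal(scale=0.1, size=(d, d))
w_router = rng.normal(size=d)  # lightweight linear router (assumed form)

def shared_block(h):
    # Stand-in for the shared Transformer stack: a residual nonlinear map.
    return h + np.tanh(h @ W_block)

h = rng.normal(size=(n_tokens, d))
active = np.ones(n_tokens, dtype=bool)   # tokens still being refined
depths = np.zeros(n_tokens, dtype=int)   # recursion depth assigned per token

for step in range(max_depth):
    if not active.any():
        break
    # Apply the shared block only to tokens still active at this depth;
    # in MoR, attention and KV caching are likewise restricted to these tokens.
    h[active] = shared_block(h[active])
    depths[active] += 1
    # Router score decides, per token, whether to recurse again
    # (thresholding here is an illustrative choice, not the paper's exact rule).
    scores = 1.0 / (1.0 + np.exp(-(h @ w_router)))
    active &= scores > 0.5

print(depths)  # each token ends up with a depth between 1 and max_depth
```

Easy tokens exit after one step while harder ones use the full depth budget, which is what lets MoR spend quadratic attention only on the tokens still active at each recursion level.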