Looping Back to Move Forward: Recursive Transformers for Efficient and Flexible Large Multimodal Models

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the prevalent issue of low parameter utilization in large models during both training and inference, which often leads to a trade-off between performance and efficiency. To overcome this limitation, the authors propose a recursive Transformer architecture that reuses parameters recursively to progressively refine multimodal representations and enable on-demand computation—without increasing model size. The approach incorporates modality-specific projections and recursive connectors to effectively fuse cross-step features while preserving modality-specific characteristics. Additionally, a monotonic recursive loss function is introduced to guarantee consistent performance improvement with increasing recursion depth. Experimental results demonstrate that the proposed method outperforms the standard Transformer by 3% and improves upon baseline recursive models by 7%, achieving efficient, flexible, and high-performing multimodal modeling.

📝 Abstract
Large Multimodal Models (LMMs) have achieved remarkable success in vision-language tasks, yet their vast parameter counts are often underutilized during both training and inference. In this work, we embrace the idea of looping back to move forward: reusing model parameters through recursive refinement to extract stronger multimodal representations without increasing model size. We propose RecursiveVLM, a recursive Transformer architecture tailored for LMMs. Two key innovations enable effective looping: (i) a Recursive Connector that aligns features across recursion steps by fusing intermediate-layer hidden states and applying modality-specific projections, respecting the distinct statistical structures of vision and language tokens; (ii) a Monotonic Recursion Loss that supervises every step and guarantees performance improves monotonically with recursion depth. This design transforms recursion into an on-demand refinement mechanism: delivering strong results with few loops on resource-constrained devices and progressively improving outputs when more computation resources are available. Experiments show consistent gains of +3% over standard Transformers and +7% over vanilla recursive baselines, demonstrating that strategic looping is a powerful path toward efficient, deployment-adaptive LMMs.
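The looping mechanism the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the shared block is reduced to a single weight matrix, and the names (`connector`, `recursive_forward`, `monotonic_recursion_loss`) are hypothetical. It only shows the three ideas in miniature: one set of weights reused at every recursion step, a connector that fuses the previous step's intermediate features through modality-specific projections, and a loss that supervises every step while penalizing any step that is worse than the one before it.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden size

# One shared "block" whose weights are reused at every recursion step
# (parameter reuse: depth grows, parameter count does not).
W_block = rng.standard_normal((D, D)) / np.sqrt(D)

# Modality-specific projections inside the (hypothetical) recursive connector:
# vision and language tokens get separate linear maps before fusion.
W_vis = rng.standard_normal((D, D)) / np.sqrt(D)
W_txt = rng.standard_normal((D, D)) / np.sqrt(D)

def block(h):
    # Stand-in for the shared Transformer block; same weights every loop.
    return np.tanh(h @ W_block)

def connector(h_prev, h_mid, is_vision):
    # Fuse the intermediate features of this loop back into the next input,
    # through the projection matching the token's modality.
    W = W_vis if is_vision else W_txt
    return h_prev + np.tanh(h_mid @ W)  # residual cross-step fusion

def recursive_forward(x, is_vision, steps):
    # Run the same block `steps` times, keeping every step's output
    # so each one can be supervised.
    h, outputs = x, []
    for _ in range(steps):
        mid = block(h)
        h = connector(h, mid, is_vision)
        outputs.append(h)
    return outputs

def monotonic_recursion_loss(outputs, target, margin=0.0):
    # Per-step supervision plus a hinge that penalizes any step whose
    # error is not lower than the previous step's.
    per_step = [float(np.mean((o - target) ** 2)) for o in outputs]
    loss = sum(per_step)
    for earlier, later in zip(per_step, per_step[1:]):
        loss += max(0.0, later - earlier + margin)
    return loss, per_step

x = rng.standard_normal((4, D))       # 4 toy tokens
target = rng.standard_normal((4, D))
outs = recursive_forward(x, is_vision=True, steps=3)
loss, per_step = monotonic_recursion_loss(outs, target)
```

At inference this structure is what makes recursion "on demand": a constrained device can stop after one or two loops, while a larger budget simply runs more steps of the same weights, with the monotonic loss having trained later steps to be no worse than earlier ones.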
Problem

Research questions and friction points this paper is trying to address.

Large Multimodal Models
parameter underutilization
vision-language tasks
model efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recursive Transformer
Large Multimodal Models
Recursive Connector
Monotonic Recursion Loss
Parameter Reuse
Ruihan Xu
State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Yuting Gao
Ant Group
Lan Wang
Ant Group
Jianing Li
Ant Group
Weihao Chen
Ant Group
Qingpei Guo
Ant Group
Multimodal LLMs · Vision-Language Models
Ming Yang
Ant Group
Shiliang Zhang
Department of Computer Science, School of EECS, Peking University
Multimedia Information Retrieval · Multimedia Systems · Visual Search