Tiny-R1V: Lightweight Multimodal Unified Reasoning Model via Model Merging

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low inference efficiency, weak cross-task generalization, and accuracy degradation of lightweight multimodal large language models (MLLMs), this paper proposes a two-stage unified reasoning framework. In Stage I, Length-Informed Relative Policy Optimization (LIPO), a reinforcement learning method that dynamically adjusts response advantages within groups to favor concise, high-quality outputs, is used to train each specialist reasoning model. In Stage II, a training-free Adaptive Model Merging (AMM) mechanism aggregates the specialists via adaptively weighted task vectors, using a gradient projection regularization loss to mitigate redundant conflicts between them. Evaluated on ten mainstream multimodal reasoning benchmarks covering mathematical reasoning, chart understanding, document analysis, and OCR, the method achieves notable gains in both accuracy and inference speed while reducing token consumption. The core contribution is the first joint application of length-informed policy optimization and training-free model merging to lightweight MLLMs, simultaneously improving inference efficiency, model compactness, and cross-task consistency.

📝 Abstract
Although Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities across diverse tasks, they face numerous reasoning-efficiency challenges, such as large model size, overthinking, and compromised accuracy in lightweight scenarios. Yet research on the reasoning capabilities of lightweight MLLMs remains scarce. To this end, we propose Tiny-R1V, a novel lightweight 3B model that achieves faster inference and higher accuracy via a two-stage optimization, while unifying multimodal reasoning across multiple tasks and using fewer tokens. In the first stage, Tiny-R1V introduces Length-Informed Relative Policy Optimization (LIPO), a novel reinforcement learning method, to train each reasoning model. LIPO dynamically adjusts the advantages of responses within a group, prioritizing concise yet high-quality responses to encourage the generation of shorter and more accurate outputs. In the second stage, we propose Adaptive Model Merging (AMM), a training-free method that merges multiple specialist models into a unified architecture. Specifically, AMM adaptively adjusts the weights of task vectors and robustly optimizes the merged vectors via a novel gradient projection regularization loss, thus mitigating redundant conflicts between them. Extensive evaluations on ten widely used reasoning benchmarks covering mathematics, structured data (charts, tables, documents), OCR, and general capabilities showcase the superior performance of Tiny-R1V, enabling lightweight models to excel in diverse multimodal reasoning tasks.
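The abstract describes LIPO as adjusting response advantages within a group so that concise, high-quality responses are favored. The paper's exact formula is not given here, but the idea can be sketched as a GRPO-style group-normalized advantage with a hypothetical length-based rescaling (the `alpha` coefficient and the linear length term are illustrative assumptions, not the authors' definition):

```python
import statistics

def lipo_advantages(rewards, lengths, alpha=0.5):
    """Length-informed group-relative advantages (illustrative sketch).

    Starts from GRPO-style normalized advantages, then boosts short
    high-reward responses and damps long ones. The length adjustment
    below is a hypothetical choice, not the paper's exact formula.
    """
    mean_r = statistics.mean(rewards)
    std_r = statistics.pstdev(rewards) or 1.0
    # Group-relative baseline: how much each response beats the group mean.
    base_adv = [(r - mean_r) / std_r for r in rewards]

    mean_len = statistics.mean(lengths)
    # Shorter-than-average responses with positive advantage get a bonus;
    # longer ones are penalized, encouraging concise yet accurate outputs.
    return [
        a * (1.0 + alpha * (mean_len - l) / mean_len) if a > 0 else a
        for a, l in zip(base_adv, lengths)
    ]
```

Under this sketch, two equally rewarded responses are separated by length alone: the shorter one receives the larger advantage, so the policy gradient pushes toward concise reasoning.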
Problem

Research questions and friction points this paper is trying to address.

Enhancing reasoning efficiency in lightweight multimodal large language models
Unifying diverse multimodal reasoning tasks within a compact 3B parameter model
Addressing accuracy and token efficiency challenges in lightweight reasoning scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage optimization for faster, more accurate inference
LIPO: reinforcement learning that rewards concise, accurate responses
AMM: training-free merging of specialist models via adaptively weighted task vectors and gradient projection regularization
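The merging step above builds on task vectors: each specialist's weight delta from the shared base model, combined into a single unified model. A minimal stdlib-only sketch of that combination follows; the adaptive weight selection and the gradient projection regularization from the paper are not reproduced, so the `weights` argument is assumed to be given:

```python
def merge_task_vectors(base, specialists, weights):
    """Training-free task-vector merge (illustrative sketch of AMM's core).

    base        -- dict mapping parameter names to lists of floats
    specialists -- list of dicts with the same keys/shapes as `base`
    weights     -- per-specialist merge coefficients (assumed given here;
                   AMM would choose these adaptively)
    """
    merged = {}
    for name, base_param in base.items():
        # Task vector: how each specialist deviates from the base model.
        task_vecs = [
            [s - b for s, b in zip(spec[name], base_param)]
            for spec in specialists
        ]
        # Add the weighted sum of deviations back onto the base weights.
        merged[name] = [
            b + sum(w * tv[i] for w, tv in zip(weights, task_vecs))
            for i, b in enumerate(base_param)
        ]
    return merged
```

For example, merging two specialists that each moved one coordinate away from the base with equal weights of 0.5 yields a model halfway toward both, which is why conflicting task vectors need the regularization the paper adds.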
Qixiang Yin
Beijing University of Posts and Telecommunications
Huanjin Yao
Tsinghua University
Jianghao Chen
Institute of Automation, Chinese Academy of Sciences
Jiaxing Huang
Nanyang Technological University
Zhicheng Zhao
Associate Professor at the School of Artificial Intelligence, Anhui University
Fei Su
Beijing University of Posts and Telecommunications; Beijing Key Laboratory of Network System and Network Culture; Key Laboratory of Interactive Technology and Experience System, Ministry of Culture and Tourism