Reinforcement Fine-Tuning Powers Reasoning Capability of Multimodal Large Language Models

📅 2025-05-24
🤖 AI Summary
Reinforcement fine-tuning (RFT) remains underexplored for systematically enhancing complex reasoning in multimodal large language models (MLLMs), hindering progress toward artificial general intelligence (AGI). Method: We propose the first five-dimensional analytical framework for RFT-enabled MLLM reasoning—encompassing multimodal fusion, cross-task generalization, algorithmic optimization, benchmark construction, and modular engineering. Our approach integrates PPO/GRPO, chain-of-thought distillation, dynamic reward modeling, and multimodal alignment training to develop open-source frameworks including LLaVA-RL and Qwen-VL-RFT. Contribution/Results: We achieve significant performance gains on MMMU, MME-Reasoning, and V*GQA benchmarks. This work provides the first systematic survey of RL-based reasoning in MLLMs, releases the field’s inaugural curated resource repository, and establishes a novel paradigm for AGI-oriented multimodal reasoning.
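The summary names GRPO among the RFT algorithms surveyed. As a rough illustration of the core idea, the sketch below computes GRPO-style advantages by normalizing each sampled response's reward against its group's statistics instead of a learned PPO value baseline; the function name and the binary reward values are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of GRPO-style advantage computation (an assumption-laden
# illustration, not the paper's code).
from statistics import mean, pstdev

def grpo_advantages(rewards):
    """Normalize each sampled response's reward by its group statistics.

    GRPO replaces PPO's learned value baseline with the mean reward of a
    group of responses sampled for the same prompt.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # avoid division by zero when all rewards match
    return [(r - mu) / sigma for r in rewards]

# Example: four sampled answers to one multimodal prompt, scored by a
# verifiable reward (e.g. 1.0 if the answer matches the ground truth).
advantages = grpo_advantages([1.0, 0.0, 0.0, 1.0])
```

Correct answers receive positive advantages and incorrect ones negative, so the subsequent policy-gradient update pushes the model toward the rewarded reasoning traces.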

📝 Abstract
Standing in 2025 at a critical juncture in the pursuit of Artificial General Intelligence (AGI), reinforcement fine-tuning (RFT) has demonstrated significant potential in enhancing the reasoning capability of large language models (LLMs) and has led to the development of cutting-edge AI models such as OpenAI-o1 and DeepSeek-R1. Moreover, the efficient application of RFT to enhance the reasoning capability of multimodal large language models (MLLMs) has attracted widespread attention from the community. In this position paper, we argue that reinforcement fine-tuning powers the reasoning capability of multimodal large language models. To begin with, we provide a detailed introduction to the fundamental background knowledge that researchers interested in this field should be familiar with. Furthermore, we meticulously summarize the improvements of RFT in powering the reasoning capability of MLLMs into five key points: diverse modalities, diverse tasks and domains, better training algorithms, abundant benchmarks, and thriving engineering frameworks. Finally, we propose five promising directions for future research that the community might consider. We hope that this position paper will provide valuable insights to the community at this pivotal stage in the advancement toward AGI. A summary of work on RFT for MLLMs is available at https://github.com/Sun-Haoyuan23/Awesome-RL-based-Reasoning-MLLMs.
Problem

Research questions and friction points this paper is trying to address.

Enhancing reasoning in multimodal LLMs via reinforcement fine-tuning
Improving MLLMs across diverse tasks and modalities
Advancing AGI through optimized RFT training algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement fine-tuning enhances MLLM reasoning
Diverse modalities and tasks improve RFT
Better algorithms and benchmarks boost performance
Haoyuan Sun
Tsinghua Shenzhen International Graduate School, Tsinghua University
Jiaqi Wu
Tsinghua Shenzhen International Graduate School, Tsinghua University
Bo Xia
Professor of Construction Management, Queensland University of Technology
Construction Management, Sustainable Building, Design-build, Housing for older people
Yifu Luo
Tsinghua Shenzhen International Graduate School, Tsinghua University
Yifei Zhao
ShanghaiTech University
Kai Qin
Tsinghua Shenzhen International Graduate School, Tsinghua University
Xufei Lv
Tsinghua Shenzhen International Graduate School, Tsinghua University
Tiantian Zhang
Tsinghua University
Reinforcement Learning, Clustering, Data Mining
Yongzhe Chang
UNSW/Data 61 PhD, Tsinghua postdoc
Machine Learning, Reinforcement Learning
Xueqian Wang
Tsinghua University
Information Fusion, Target Detection, Radar Imaging, Image Processing