🤖 AI Summary
Reinforcement fine-tuning (RFT) remains underexplored for systematically enhancing complex reasoning in multimodal large language models (MLLMs), hindering progress toward artificial general intelligence (AGI).
Method: This position paper organizes the ways RFT powers MLLM reasoning into five key points: diverse modalities, diverse tasks and domains, better training algorithms (e.g., PPO- and GRPO-style reinforcement fine-tuning), abundant benchmarks, and thriving engineering frameworks.
Contribution/Results: The paper provides a systematic survey of RL-based reasoning in MLLMs, introduces the necessary background for newcomers to the field, proposes five promising directions for future research, and maintains a curated repository of related work.
📝 Abstract
Standing in 2025 at a critical juncture in the pursuit of Artificial General Intelligence (AGI), reinforcement fine-tuning (RFT) has demonstrated significant potential in enhancing the reasoning capability of large language models (LLMs) and has led to cutting-edge models such as OpenAI-o1 and DeepSeek-R1. Consequently, the efficient application of RFT to enhance the reasoning capability of multimodal large language models (MLLMs) has attracted widespread attention from the community. In this position paper, we argue that reinforcement fine-tuning powers the reasoning capability of multimodal large language models. To begin with, we provide a detailed introduction to the background knowledge that researchers entering this field should be familiar with. Furthermore, we summarize the improvements RFT brings to the reasoning capability of MLLMs into five key points: diverse modalities, diverse tasks and domains, better training algorithms, abundant benchmarks, and thriving engineering frameworks. Finally, we propose five promising directions for future research that the community might consider. We hope that this position paper will provide valuable insights to the community at this pivotal stage in the advancement toward AGI. A summary of works on RFT for MLLMs is available at https://github.com/Sun-Haoyuan23/Awesome-RL-based-Reasoning-MLLMs.
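To make the "better training algorithms" point concrete, the core idea behind GRPO-style reinforcement fine-tuning (one of the RFT algorithms discussed above) is to score a group of sampled responses for the same prompt and normalize each reward against the group, avoiding a learned value function. Below is a minimal sketch of that group-relative advantage estimate; the function name and the example rewards are illustrative, not from the paper.

```python
# Minimal sketch of GRPO's group-relative advantage estimate:
# A_i = (r_i - mean(r)) / std(r) over a group of sampled responses.
import statistics

def group_relative_advantages(rewards):
    """Normalize each sampled response's reward against its own group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Hypothetical binary rewards for G = 4 responses to one prompt:
# correct answers get 1.0, incorrect get 0.0.
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
print(advs)  # correct responses get positive advantage, incorrect negative
```

These per-response advantages then weight the policy-gradient update in place of a critic's value estimates, which is what makes GRPO cheaper to run than PPO at MLLM scale.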