🤖 AI Summary
This work addresses the performance degradation and training instability commonly observed in multimodal large language models during multi-turn reasoning, which often stem from misalignment between textual reasoning and visual actions. To bridge this semantic gap, the paper proposes Multimodal Agentic Policy Optimization (MAPO), which, for the first time, integrates the semantic alignment between visual actions and textual reasoning into the advantage estimation of reinforcement learning. MAPO requires the model to generate explicit textual descriptions of the visual content obtained through tool invocations. This design reduces the variance of policy gradients while enhancing coherence between perception and reasoning. Extensive experiments across multiple visual reasoning benchmarks show that MAPO significantly outperforms existing methods, validating its effectiveness in improving multimodal reasoning.
📝 Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have incentivized models to "think with images" by actively invoking visual tools during multi-turn reasoning. The common Reinforcement Learning (RL) practice of relying on outcome-based rewards overlooks the fact that textual plausibility often masks execution failure: a model may produce plausible textual reasoning while executing imprecise or irrelevant visual actions within its agentic reasoning trajectories. This reasoning-action discrepancy introduces noise that accumulates over the multi-turn reasoning process, severely degrading the model's multimodal reasoning capability and potentially leading to training collapse. In this paper, we introduce Multimodal Agentic Policy Optimization (MAPO), which bridges the gap between the textual reasoning and the visual actions generated within the model's Multimodal Chain-of-Thought (MCoT). Specifically, MAPO requires the model to generate explicit textual descriptions of the visual content obtained via tool use. We then employ a novel advantage estimation that couples the semantic alignment between these descriptions and the actual observations with the task reward. We provide theoretical analysis showing that MAPO inherently reduces gradient variance, and extensive experiments demonstrate that our method achieves superior performance across multiple visual reasoning benchmarks.
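The coupling of alignment and reward in the advantage estimate can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's exact formulation: the function name `mapo_advantages`, the weighting coefficient `alpha`, and the GRPO-style group normalization are all illustrative choices; the paper's actual advantage estimator may differ.

```python
import math

def mapo_advantages(task_rewards, alignment_scores, alpha=0.5):
    """Hypothetical sketch of a MAPO-style advantage estimate.

    task_rewards: outcome-based reward per rollout in a group.
    alignment_scores: semantic-alignment score (e.g., similarity between the
        model's textual description and the actual tool observation) per rollout.
    alpha: illustrative weight coupling alignment with the task reward.
    """
    # Shape each rollout's reward by how well its textual descriptions
    # match the visual observations returned by tool calls.
    shaped = [r + alpha * a for r, a in zip(task_rewards, alignment_scores)]
    # Group-normalize (GRPO-style) to obtain zero-mean advantages, which is
    # one standard way to keep policy-gradient variance in check.
    mean = sum(shaped) / len(shaped)
    var = sum((s - mean) ** 2 for s in shaped) / len(shaped)
    std = math.sqrt(var) + 1e-8
    return [(s - mean) / std for s in shaped]
```

With this shaping, a trajectory that is both correct and well-aligned receives a larger advantage than one that reaches the right answer with descriptions that contradict its observations, which is the intuition behind penalizing the reasoning-action discrepancy.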