Learning Deliberately, Acting Intuitively: Unlocking Test-Time Reasoning in Multimodal LLMs

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) face dual challenges: insufficient cross-modal alignment and weak reasoning generalization, coupled with high training costs due to reliance on human annotations and complex reward modeling. To address these, we propose the Deliberate-to-Intuitive (D2I) reasoning framework, the first to decouple deep reasoning during training from flexible output generation at inference, using only a lightweight, rule-based format reward without additional annotations or explicit reward modeling. D2I integrates multi-stage reasoning strategy switching and implicit capability transfer, enabling models to internalize transferable multimodal reasoning skills. Extensive evaluation across diverse cross-domain benchmarks demonstrates significant improvements over state-of-the-art methods, validating D2I's effectiveness in modality alignment, zero-shot transfer, and reasoning generalization, while ensuring scalability and training efficiency.

📝 Abstract
Reasoning is a key capability for large language models (LLMs), particularly in complex tasks such as mathematical problem solving. However, multimodal reasoning research still requires further exploration of modality alignment and training cost. Many existing approaches rely on additional data annotation and task-specific rule-based rewards to enhance understanding and reasoning ability, which significantly increases training costs and limits scalability. To address these challenges, we propose the Deliberate-to-Intuitive reasoning framework (D2I), which improves the understanding and reasoning ability of multimodal LLMs (MLLMs) without extra annotations or complex rewards. Specifically, our method applies deliberate reasoning strategies to enhance modality alignment during training, guided only by a rule-based format reward. At evaluation time, the reasoning style shifts to intuitive: the deliberate reasoning strategies used during training are removed, and the model's acquired abilities are reflected implicitly in its responses. D2I outperforms baselines across both in-domain and out-of-domain benchmarks. Our findings highlight the role of format reward in fostering transferable reasoning skills in MLLMs and suggest directions for decoupling training-time reasoning depth from test-time response flexibility.
Problem

Research questions and friction points this paper is trying to address.

Enhancing multimodal LLM reasoning without extra annotations
Reducing training costs for modality alignment in MLLMs
Decoupling training reasoning depth from test-time flexibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

D2I framework enhances MLLMs without extra annotations
Uses rule-based format reward for modality alignment
Shifts from deliberate to intuitive reasoning during evaluation
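The abstract states that D2I relies only on a rule-based format reward, without reward modeling or extra annotations, but does not spell out the reward itself. A minimal sketch of what such a reward could look like is given below; the `<think>`/`<answer>` tag schema is an assumption for illustration, not the paper's actual format.

```python
import re

# Assumed format: deliberate reasoning inside <think>...</think>,
# followed by the final answer inside <answer>...</answer>.
# This tag schema is hypothetical -- the paper does not specify it.
THINK_ANSWER = re.compile(
    r"^<think>.+?</think>\s*<answer>.+?</answer>$", re.DOTALL
)

def format_reward(response: str) -> float:
    """Return 1.0 if the response follows the deliberate reasoning
    format, else 0.0. Only the structure is checked, not answer
    correctness, so no annotations or reward model are required."""
    return 1.0 if THINK_ANSWER.match(response.strip()) else 0.0

# During training, structured responses are rewarded; at test time
# the tags can simply be dropped from the prompt template, letting
# the model answer intuitively.
good = "<think>Compare the two plots.</think>\n<answer>Plot B</answer>"
bad = "Plot B"
```

Because the reward is purely structural, it can be computed by a regular expression rather than a learned reward model, which is what keeps training cheap and scalable.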