Learning When to Look: A Disentangled Curriculum for Strategic Perception in Multimodal Reasoning

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) suffer degraded robustness in long-chain visual reasoning due to "visual forgetting," which the authors attribute to the premature coupling, in current training paradigms, of abstract logical reasoning ("how to think") with strategic visual perception ("when to look"), producing both a foundational cold-start deficiency and a strategic perception deficit. Method: a decoupled, stage-wise training framework: (1) text-prior supervised fine-tuning to solidify logical reasoning; (2) Perception-Grounded Chain-of-Thought (PG-CoT) to explicitly model the temporal alignment between visual grounding and reasoning steps; and (3) a reinforcement learning stage whose Pivotal Perception Reward, guided by linguistic uncertainty signals, lets the model autonomously learn when to sample the image. Results: experiments demonstrate substantial improvements in long-chain reasoning robustness, advancing MLLMs from heuristic observers to strategic, vision-grounded reasoners capable of adaptive, temporally aware perception.
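The PG-CoT stage described above interleaves explicit perception steps with reasoning steps. A minimal sketch of what such a trace and a simple grounding check could look like (the `<perceive>` tag, the trace contents, and the `grounding_ratio` helper are illustrative assumptions, not the paper's actual notation):

```python
# Hedged sketch of a PG-CoT-style trace: reasoning steps interleaved with
# explicit perception anchors. The <perceive>...</perceive> tag is a
# hypothetical marker chosen for illustration.

trace = [
    "Step 1: The question asks which region grew fastest.",
    "<perceive>Read the legend: blue = Europe, red = Asia.</perceive>",
    "Step 2: Compare the slopes of the two curves.",
    "<perceive>The red curve rises more steeply after 2015.</perceive>",
    "Step 3: Therefore Asia grew fastest.",
]

def grounding_ratio(trace: list[str]) -> float:
    """Fraction of steps that are explicit perception anchors.

    A supervision signal in this spirit could be used during SFT to
    verify that reasoning stays visually grounded as chains grow.
    """
    anchors = sum(s.startswith("<perceive>") for s in trace)
    return anchors / len(trace)
```

Here `grounding_ratio(trace)` returns 0.4, i.e. two of the five steps are perception anchors; a trace whose ratio collapses toward zero late in the chain would exhibit exactly the "think longer, see less" failure.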

📝 Abstract
Multimodal Large Language Models (MLLMs) demonstrate significant potential but remain brittle in complex, long-chain visual reasoning tasks. A critical failure mode is "visual forgetting", where models progressively lose visual grounding as reasoning extends, a phenomenon aptly described as "think longer, see less". We posit this failure stems from current training paradigms prematurely entangling two distinct cognitive skills: (1) abstract logical reasoning ("how to think") and (2) strategic visual perception ("when to look"). This creates a foundational cold-start deficiency, which weakens abstract reasoning, and a strategic perception deficit, as models lack a policy for when to perceive. In this paper, we propose a novel curriculum-based framework to disentangle these skills. First, we introduce a disentangled Supervised Fine-Tuning (SFT) curriculum that builds a robust abstract reasoning backbone on text-only data before anchoring it to vision with a novel Perception-Grounded Chain-of-Thought (PG-CoT) paradigm. Second, we resolve the strategic perception deficit by formulating timing as a reinforcement learning problem. We design a Pivotal Perception Reward that teaches the model when to look by coupling perceptual actions to linguistic markers of cognitive uncertainty (e.g., "wait", "verify"), thereby learning an autonomous grounding policy. Our contributions include the formalization of these two deficiencies and the development of a principled, two-stage framework to address them, transforming the model from a heuristic-driven observer to a strategic, grounded reasoner. Code: https://github.com/gaozilve-max/learning-when-to-look
Problem

Research questions and friction points this paper is trying to address.

Addresses visual forgetting in multimodal reasoning, where models lose visual grounding as reasoning chains grow longer.
Separates abstract logical reasoning ("how to think") from strategic visual perception timing ("when to look") during training.
Teaches models when to look via reinforcement learning coupled to linguistic uncertainty markers.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled curriculum separates reasoning and perception training
Perception-Grounded Chain-of-Thought anchors vision to abstract reasoning
Reinforcement learning with Pivotal Perception Reward teaches when to look
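The third bullet can be pictured as a reward-shaping term that pays the model for grounding itself visually at moments of expressed uncertainty. Below is a minimal sketch in that spirit; the marker set, the `<look>` action token, the window size, and the reward magnitudes are all assumptions for illustration, not the paper's actual formulation:

```python
# Hedged sketch of a Pivotal-Perception-style reward term. It rewards a
# perception action taken at or just after a linguistic uncertainty
# marker, and lightly penalizes uncertainty left visually ungrounded.

UNCERTAINTY_MARKERS = {"wait", "verify", "let me check"}  # assumed markers
PERCEPTION_ACTION = "<look>"  # hypothetical token marking a grounding step

def pivotal_perception_reward(steps: list[str], window: int = 1) -> float:
    """Score a reasoning trace (list of step strings).

    For each step containing an uncertainty marker, check whether a
    perception action occurs within `window` subsequent steps.
    """
    reward = 0.0
    for i, step in enumerate(steps):
        text = step.lower()
        if any(marker in text for marker in UNCERTAINTY_MARKERS):
            nearby = steps[i : i + window + 1]
            if any(PERCEPTION_ACTION in s for s in nearby):
                reward += 1.0   # looked when uncertain: pivotal perception
            else:
                reward -= 0.5   # expressed uncertainty but stayed blind
    return reward
```

For example, the trace `["Wait, <look> the third bar is taller."]` scores +1.0, while `["Wait, I should reconsider.", "Answer: A."]` scores -0.5; such a term would be added to the task reward inside a standard policy-gradient loop rather than used on its own.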