🤖 AI Summary
This survey addresses core challenges facing multimodal AI systems in open, uncertain environments: weak perception, shallow reasoning, deficient planning, and poor generalization. It organizes the field around a four-stage developmental paradigm, "Perception → Reasoning → Reflection → Planning," that systematically maps the technical trajectory of multimodal reasoning in the large-model era. Along this trajectory it covers key techniques, including Multimodal Chain-of-Thought (MCoT), cross-modal alignment, instruction tuning, and multimodal reinforcement learning, and identifies three fundamental bottlenecks: omni-modal generalization, reasoning depth, and agent-level behavioral intelligence. In response, it introduces the concept of Native Large Multimodal Reasoning Models (N-LMRMs), which emphasize scalability, embodiment, and autonomous planning, and sketches a conceptual framework for them. Case studies of state-of-the-art systems, including OpenAI's O3 and O4-mini, on challenging benchmarks ground this prospective direction.
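To make the "Perception → Reasoning → Reflection → Planning" paradigm concrete, the sketch below shows one way such a four-stage agentic loop could be wired together. It is a minimal illustration under assumed interfaces: the function names (`perceive`, `reason`, `reflect`, `plan`) and the stubbed model calls are hypothetical, not APIs from the paper or from any real system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four-stage loop described in the summary:
# Perception -> Reasoning -> Reflection -> Planning. Model calls are
# stubbed with canned strings; a real system would query an LMRM here.

@dataclass
class AgentState:
    observations: list = field(default_factory=list)
    thoughts: list = field(default_factory=list)
    plan: list = field(default_factory=list)

def perceive(state: AgentState, inputs: dict) -> None:
    # Stage 1: fuse raw multimodal inputs (text, image, audio) into a
    # shared observation record. Here we simply store them verbatim.
    state.observations.append(inputs)

def reason(state: AgentState) -> str:
    # Stage 2: produce a reasoning step over the observations.
    # A real implementation would call a multimodal LLM here.
    thought = f"Derived hypothesis from {len(state.observations)} observation(s)."
    state.thoughts.append(thought)
    return thought

def reflect(state: AgentState, thought: str) -> bool:
    # Stage 3: self-check the reasoning; return True if it looks consistent.
    # Stubbed: accept any non-empty thought.
    return bool(thought)

def plan(state: AgentState) -> list:
    # Stage 4: turn validated reasoning into an executable action plan.
    state.plan = ["act_on: " + t for t in state.thoughts]
    return state.plan

def run_episode(inputs: dict) -> list:
    state = AgentState()
    perceive(state, inputs)
    thought = reason(state)
    if not reflect(state, thought):
        thought = reason(state)  # one retry on a failed self-check
    return plan(state)

if __name__ == "__main__":
    print(run_episode({"text": "Where is the red block?", "image": "frame_001.png"}))
```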
📝 Abstract
Reasoning lies at the heart of intelligence, shaping the ability to make decisions, draw conclusions, and generalize across domains. In artificial intelligence, as systems increasingly operate in open, uncertain, and multimodal environments, reasoning becomes essential for enabling robust and adaptive behavior. Large Multimodal Reasoning Models (LMRMs) have emerged as a promising paradigm, integrating modalities such as text, images, audio, and video to support complex reasoning capabilities and aiming to achieve comprehensive perception, precise understanding, and deep reasoning. As research advances, multimodal reasoning has rapidly evolved from modular, perception-driven pipelines to unified, language-centric frameworks that offer more coherent cross-modal understanding. While instruction tuning and reinforcement learning have improved model reasoning, significant challenges remain in omni-modal generalization, reasoning depth, and agentic behavior. To address these issues, we present a comprehensive and structured survey of multimodal reasoning research, organized around a four-stage developmental roadmap that reflects the field's shifting design philosophies and emerging capabilities. First, we review early efforts based on task-specific modules, where reasoning was implicitly embedded across stages of representation, alignment, and fusion. Next, we examine recent approaches that unify reasoning into multimodal LLMs, with advances such as Multimodal Chain-of-Thought (MCoT) and multimodal reinforcement learning enabling richer and more structured reasoning chains. Finally, drawing on empirical insights from challenging benchmarks and experimental cases of OpenAI O3 and O4-mini, we discuss the conceptual direction of native large multimodal reasoning models (N-LMRMs), which aim to support scalable, agentic, and adaptive reasoning and planning in complex, real-world environments.
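As a rough illustration of the Multimodal Chain-of-Thought (MCoT) prompting the abstract refers to, the sketch below interleaves an image reference with an explicit step-by-step instruction and separates the elicited rationale from the final answer. The `query_lmm` function and the message schema are hypothetical stand-ins, not the survey's method or any vendor's API.

```python
# Minimal, hypothetical sketch of Multimodal Chain-of-Thought (MCoT) prompting.
# query_lmm stands in for a real multimodal LLM call and returns a canned
# response here so the script runs end to end.

def query_lmm(messages: list[dict]) -> str:
    # Placeholder: a real implementation would send `messages` (mixed text
    # and image parts) to a multimodal model and return its completion.
    return ("Rationale: The image shows two apples and one pear, "
            "so there are two apples.\n"
            "Answer: 2")

def mcot_ask(image_path: str, question: str) -> tuple[str, str]:
    # Build a prompt that interleaves the image with a step-by-step
    # instruction, eliciting an explicit reasoning chain before the answer.
    messages = [
        {"role": "user", "content": [
            {"type": "image", "path": image_path},
            {"type": "text", "text": question},
            {"type": "text", "text": "Think step by step. Give your reasoning "
                                     "as 'Rationale: ...' then 'Answer: ...'."},
        ]}
    ]
    reply = query_lmm(messages)
    # Split the structured reply into its rationale and final answer.
    rationale, _, answer = reply.partition("Answer:")
    return rationale.removeprefix("Rationale:").strip(), answer.strip()

if __name__ == "__main__":
    rationale, answer = mcot_ask("fruit.png", "How many apples are in the image?")
    print("rationale:", rationale)
    print("answer:", answer)
```

The design choice worth noting is the explicit "Rationale: ... / Answer: ..." format: forcing the model to emit its reasoning chain before committing to an answer is what distinguishes MCoT from direct answer prediction, and the structured format makes the chain easy to inspect or verify downstream.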