MMhops-R1: Multimodal Multi-hop Reasoning

📅 2025-12-15
🤖 AI Summary
Current multimodal large language models (MLLMs) are largely limited to single-step reasoning, in part because existing benchmarks cannot rigorously evaluate multi-hop capabilities. To address this gap, we introduce MMhops, a large-scale multimodal multi-hop reasoning benchmark featuring two task categories: *Bridging*, which requires cross-modal coordination and external-knowledge chaining, and *Comparison*, which demands nuanced relational inference across modalities. To tackle these tasks, we propose MMhops-R1, a multimodal Retrieval-Augmented Generation (RAG) framework for dynamic reasoning. It uses reinforcement learning (Proximal Policy Optimization, PPO) to train the model to autonomously plan reasoning paths, formulate targeted queries, and fuse information hierarchically, while incorporating cross-modal alignment and dynamic knowledge retrieval and synthesis. On MMhops, our method significantly outperforms strong baselines; it also generalizes well to fixed-hop tasks, validating the robustness and transferability of our dynamic planning approach.

📝 Abstract
The ability to perform multi-modal multi-hop reasoning by iteratively integrating information across various modalities and external knowledge is critical for addressing complex real-world challenges. However, existing Multi-modal Large Language Models (MLLMs) are predominantly limited to single-step reasoning, as existing benchmarks lack the complexity needed to evaluate and drive multi-hop abilities. To bridge this gap, we introduce MMhops, a novel, large-scale benchmark designed to systematically evaluate and foster multi-modal multi-hop reasoning. The MMhops dataset comprises two challenging task formats, Bridging and Comparison, which require models to dynamically construct complex reasoning chains by integrating external knowledge. To tackle the challenges posed by MMhops, we propose MMhops-R1, a novel multi-modal Retrieval-Augmented Generation (mRAG) framework for dynamic reasoning. Our framework utilizes reinforcement learning to optimize the model for autonomously planning reasoning paths, formulating targeted queries, and synthesizing multi-level information. Comprehensive experiments demonstrate that MMhops-R1 significantly outperforms strong baselines on MMhops, highlighting that dynamic planning and multi-modal knowledge integration are crucial for complex reasoning. Moreover, MMhops-R1 demonstrates strong generalization to tasks requiring fixed-hop reasoning, underscoring the robustness of our dynamic planning approach. In conclusion, our work contributes a challenging new benchmark and a powerful baseline model, and we will release the associated code, data, and weights to catalyze future research in this critical area.
Problem

Research questions and friction points this paper is trying to address.

Existing MLLMs lack multi-modal multi-hop reasoning capabilities
Current benchmarks cannot evaluate complex multi-hop reasoning
Models need dynamic planning for multi-modal knowledge integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal retrieval-augmented generation framework for dynamic reasoning
Reinforcement learning optimizes autonomous reasoning path planning
Dynamic planning integrates multi-level multimodal knowledge
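The innovations above describe an iterative loop: a learned policy decides whether to retrieve more evidence or answer, formulates the next query, and accumulates multi-level knowledge across hops. The control flow can be sketched roughly as follows; all names (`toy_policy`, `toy_retriever`, `multi_hop_rag`) are hypothetical placeholders for illustration, not the paper's actual API, and the stubs stand in for the PPO-trained planner and the multimodal retriever.

```python
# Illustrative sketch of a dynamic multi-hop mRAG loop (assumed structure,
# not the paper's implementation): a policy plans the next hop, a retriever
# fetches evidence, and the loop stops when the policy decides to answer.

from dataclasses import dataclass, field

@dataclass
class State:
    question: str
    evidence: list = field(default_factory=list)  # facts gathered so far

def toy_policy(state: State):
    """Decide the next action: issue a follow-up query or stop and answer.

    A learned policy (e.g. PPO-trained, as in the paper) would score
    actions here; this stub simply stops after two hops.
    """
    if len(state.evidence) >= 2:
        return ("answer", None)
    return ("retrieve", f"{state.question} | hop {len(state.evidence) + 1}")

def toy_retriever(query: str) -> str:
    """Stand-in for a multimodal retriever over an image/text corpus."""
    return f"fact for <{query}>"

def multi_hop_rag(question: str, policy, retriever, max_hops: int = 5) -> dict:
    state = State(question)
    for _ in range(max_hops):
        action, query = policy(state)
        if action == "answer":
            break
        state.evidence.append(retriever(query))
    # A real system would now synthesize the final answer with an MLLM
    # conditioned on the accumulated evidence chain.
    return {"hops": len(state.evidence), "evidence": state.evidence}

result = multi_hop_rag("Who designed the bridge in the photo?",
                       toy_policy, toy_retriever)
print(result["hops"])  # 2
```

The key design point mirrored here is that the number of hops is not fixed in advance: the policy terminates the loop itself, which is what allows the same model to generalize to fixed-hop tasks.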
Authors
Tao Zhang (State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA)
Ziqi Zhang (State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA)
Zongyang Ma (State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA)
Yuxin Chen (Tencent Inc.)
Bing Li (State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA)
Chunfeng Yuan (National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences)
Guangting Wang (University of Science and Technology of China)
Fengyun Rao (Tencent Inc.)
Ying Shan (Distinguished Scientist at Tencent, Director of ARC Lab & AI Lab CVC)
Weiming Hu (State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA)