DeepEyesV2: Toward Agentic Multimodal Model

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of constructing multimodal agents capable of actively invoking external tools—such as code execution and web search—to close the perception–reasoning–action loop. Methodologically, it introduces a two-stage training paradigm: first, supervised cold-start learning to establish foundational tool-calling capabilities; second, reinforcement learning to optimize task-adaptive tool selection and composition. The authors also propose RealX-Bench, a benchmark for real-world multimodal reasoning that requires integrating perception, search, and reasoning. Key contributions include: (1) an open-source agentic multimodal model that tightly integrates image understanding with dynamic, context-aware tool invocation; (2) a stable and controllable two-stage training mechanism; and (3) strong performance on RealX-Bench and other benchmarks, demonstrating robust support for complex, context-sensitive decision-making and multi-step tool coordination.

📝 Abstract
Agentic multimodal models should not only comprehend text and images, but also actively invoke external tools, such as code execution environments and web search, and integrate these operations into reasoning. In this work, we introduce DeepEyesV2 and explore how to build an agentic multimodal model from the perspectives of data construction, training methods, and model evaluation. We observe that direct reinforcement learning alone fails to induce robust tool-use behavior. This phenomenon motivates a two-stage training pipeline: a cold-start stage to establish tool-use patterns, and a reinforcement learning stage to further refine tool invocation. We curate a diverse, moderately challenging training dataset, specifically including examples where tool use is beneficial. We further introduce RealX-Bench, a comprehensive benchmark designed to evaluate real-world multimodal reasoning, which inherently requires the integration of multiple capabilities, including perception, search, and reasoning. We evaluate DeepEyesV2 on RealX-Bench and other representative benchmarks, demonstrating its effectiveness across real-world understanding, mathematical reasoning, and search-intensive tasks. Moreover, DeepEyesV2 exhibits task-adaptive tool invocation, tending to use image operations for perception tasks and numerical computations for reasoning tasks. Reinforcement learning further enables complex tool combinations and allows the model to selectively invoke tools based on context. We hope our study can provide guidance for the community in developing agentic multimodal models.
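The tool-integrated reasoning the abstract describes follows a common agentic pattern: the model emits a turn that either contains a tool call (e.g., code to execute or a search query) or a final answer; tool results are fed back as observations for the next turn. The sketch below is a minimal illustration of that loop, not the paper's implementation — the `<tool …>` tag format, the `run_tool` dispatcher, and the stub tool behaviors are all illustrative assumptions.

```python
import re

def run_tool(tool_name, payload):
    # Hypothetical dispatcher. In a real agent these would be a sandboxed
    # code interpreter and a web-search API; here we stub them with toys.
    if tool_name == "code":
        # Toy arithmetic evaluator -- NOT a real sandbox; a production
        # agent would execute code in an isolated environment.
        return str(eval(payload, {"__builtins__": {}}))
    if tool_name == "search":
        return f"[search results for: {payload}]"
    return "[unknown tool]"

def agent_step(model_output):
    """Process one model turn: if it contains a tool call such as
    <tool name="code">2 + 3</tool>, execute the tool and return the
    observation to append to the context; otherwise the turn is the
    final answer."""
    match = re.search(r'<tool name="(\w+)">(.*?)</tool>', model_output, re.S)
    if match is None:
        return ("answer", model_output)
    tool, payload = match.group(1), match.group(2).strip()
    return ("observation", run_tool(tool, payload))

kind, result = agent_step('Let me compute. <tool name="code">17 * 3</tool>')
print(kind, result)  # -> observation 51
```

In a full system this step runs inside a generation loop: observations are appended to the prompt and the model generates again, until an answer turn terminates the episode.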
Problem

Research questions and friction points this paper is trying to address.

Developing agentic multimodal models that actively invoke external tools
Overcoming limitations of reinforcement learning for robust tool-use behavior
Creating comprehensive evaluation benchmarks for real-world multimodal reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage training pipeline with cold-start and reinforcement learning
Diverse dataset curation emphasizing beneficial tool use
RealX-Bench benchmark for real-world multimodal reasoning evaluation
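The two-stage recipe — supervised cold start to instill tool-use patterns, then reinforcement learning to refine tool selection — can be illustrated on a deliberately tiny toy problem. The sketch below is a bandit-style analogy, not the paper's training code: "actions" stand in for tool choices, the cold start sets preferences from demonstrations, and a simple reward-weighted update plays the role of RL refinement.

```python
import random

def cold_start(demos, n_actions):
    """Stage 1 (toy): initialize action preferences from supervised
    demonstrations, mimicking the cold start that establishes basic
    tool-use patterns before RL."""
    prefs = [1.0] * n_actions  # Laplace-smoothed demonstration counts
    for action in demos:
        prefs[action] += 1.0
    return prefs

def rl_refine(prefs, reward_fn, steps=500, lr=0.1, seed=0):
    """Stage 2 (toy): refine preferences with a simple reward-weighted
    update so the action (tool) choice adapts to observed reward."""
    rng = random.Random(seed)
    for _ in range(steps):
        total = sum(prefs)
        probs = [p / total for p in prefs]
        a = rng.choices(range(len(prefs)), weights=probs)[0]
        r = reward_fn(a)
        prefs[a] = max(1e-3, prefs[a] + lr * r)  # keep preferences positive
    return prefs

# Toy setup: action 0 = "image operation", action 1 = "code execution";
# suppose this task rewards code execution.
demos = [0, 0, 1]  # mixed demonstrations favoring action 0
reward = lambda a: 1.0 if a == 1 else -0.5
prefs = rl_refine(cold_start(demos, 2), reward)
print(prefs[1] > prefs[0])  # RL shifts preference toward the rewarded tool
```

The design point mirrors the paper's observation: the RL stage only refines behaviors the cold start has already made reachable — with no demonstrations of action 1, the refinement stage would rarely sample it.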
👥 Authors
Jack Hong — Xiaohongshu Inc.
Chenxiao Zhao — Tencent
ChengLin Zhu — Xiaohongshu Inc.
Weiheng Lu — Xiaohongshu Inc.
Guohai Xu — Xiaohongshu Inc., Alibaba DAMO Academy
Xing Yu — Xiaohongshu Inc.