MMAT-1M: A Large Reasoning Dataset for Multimodal Agent Tuning

📅 2025-07-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from a scarcity of large-scale, high-quality agent tuning datasets. To address this, we introduce MMAT-1M, the first million-scale multimodal agent tuning dataset, and propose a novel four-stage data engine that integrates GPT-4o, retrieval-augmented generation (RAG), dynamic API invocation, and multi-turn reflection to synthesize high-quality reasoning data featuring chain-of-thought reasoning, self-correction, and tool use. Crucially, we compress the multi-turn dialogues into a highly consistent single-turn format, enabling scalable data distillation and efficient training. Fine-tuned on this data and evaluated on eight public benchmarks, our approach achieves an average improvement of 2.7%, with a substantial 8.8% gain on the Dyn-VQA benchmark, demonstrating significant advances in multi-step reasoning and tool-augmented multimodal reasoning.

📝 Abstract
Large Language Models (LLMs), enhanced through agent tuning, have demonstrated remarkable capabilities in Chain-of-Thought (CoT) and tool utilization, significantly surpassing the performance of standalone models. However, the multimodal domain still lacks a large-scale, high-quality agent tuning dataset to unlock the full potential of multimodal large language models. To bridge this gap, we introduce MMAT-1M, the first million-scale multimodal agent tuning dataset designed to support CoT, reflection, and dynamic tool usage. Our dataset is constructed through a novel four-stage data engine: 1) We first curate publicly available multimodal datasets containing question-answer pairs; 2) Then, leveraging GPT-4o, we generate rationales for the original question-answer pairs and dynamically integrate API calls and Retrieval Augmented Generation (RAG) information through a multi-turn paradigm; 3) Furthermore, we refine the rationales through reflection to ensure logical consistency and accuracy, creating a multi-turn dialogue dataset with both Rationale and Reflection (RR); 4) Finally, to enhance efficiency, we optionally compress multi-turn dialogues into a One-turn Rationale and Reflection (ORR) format. By fine-tuning open-source multimodal models on the MMAT-1M, we observe significant performance gains. For instance, the InternVL2.5-8B-RR model achieves an average improvement of 2.7% across eight public benchmarks and 8.8% on the RAG benchmark Dyn-VQA, demonstrating the dataset's effectiveness in enhancing multimodal reasoning and tool-based capabilities. The dataset is publicly available at https://github.com/VIS-MPU-Agent/MMAT-1M.
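To make the pipeline concrete, here is a minimal Python sketch of the four-stage data engine described in the abstract: stage 2 generates a rationale with dynamic tool calls, stage 3 appends a reflection pass, and stage 4 flattens the multi-turn trace into the single-turn ORR format. All function names, tag conventions (e.g. the `<call_api>` marker), and data structures below are illustrative assumptions, not the authors' released code; the stub `call_gpt4o` stands in for the GPT-4o API calls used in stages 2 and 3.

```python
# Hypothetical sketch of the MMAT-1M four-stage data engine.
# All names here are assumptions for illustration, not the paper's code.
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str      # "assistant" or "tool"
    content: str

@dataclass
class Sample:
    image: str     # path or URL of the image (stage 1: curated QA pair)
    question: str
    answer: str
    dialogue: list = field(default_factory=list)  # multi-turn RR trace

def call_gpt4o(prompt: str) -> str:
    # Placeholder for a real GPT-4o API call.
    return f"<model output for: {prompt[:40]}...>"

def generate_rr(sample: Sample, max_tool_turns: int = 3) -> Sample:
    """Stages 2-3: multi-turn rationale generation with dynamic API/RAG
    calls, followed by a reflection pass (Rationale and Reflection, RR)."""
    prompt = f"Q: {sample.question} (image: {sample.image})"
    for _ in range(max_tool_turns):
        step = call_gpt4o(prompt)
        sample.dialogue.append(Turn("assistant", step))
        if "<call_api>" not in step and "<rag>" not in step:
            break  # final rationale produced; no further tool use needed
        tool_result = call_gpt4o(f"simulate tool for: {step}")  # API/RAG stub
        sample.dialogue.append(Turn("tool", tool_result))
        prompt += f"\nTool result: {tool_result}"
    reflection = call_gpt4o(
        f"Check this rationale against the answer '{sample.answer}' and "
        f"fix any logical errors: {sample.dialogue[-1].content}"
    )
    sample.dialogue.append(Turn("assistant", reflection))
    return sample

def compress_orr(sample: Sample) -> dict:
    """Stage 4: flatten the multi-turn RR dialogue into a single-turn
    One-turn Rationale and Reflection (ORR) training example."""
    trace = "\n".join(f"[{t.role}] {t.content}" for t in sample.dialogue)
    return {"question": sample.question, "image": sample.image,
            "target": f"{trace}\nFinal answer: {sample.answer}"}
```

The ORR flattening is what makes the distillation scalable: a single supervised target per example keeps the full reasoning-and-reflection trace visible to the student model while avoiding multi-turn training overhead.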
Problem

Research questions and friction points this paper is trying to address.

Lack of a large-scale, high-quality multimodal agent tuning dataset
Need to enhance multimodal reasoning and dynamic tool usage
Ensuring logical consistency and accuracy in generated rationales
Innovation

Methods, ideas, or system contributions that make the work stand out.

MMAT-1M: million-scale multimodal agent tuning dataset
Four-stage data engine for dataset construction
Enhances reasoning via CoT, reflection, and tool usage
👥 Authors
Tianhong Gao, Baidu Inc.
Yannian Fu, Baidu Inc.
Weiqun Wu, Baidu Inc.
Haixiao Yue, Baidu Inc.
Shanshan Liu, Baidu Inc.
Gang Zhang, Tsinghua University