ATP-Bench: Towards Agentic Tool Planning for MLLM Interleaved Generation

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that existing multimodal large language models (MLLMs) struggle to balance factual accuracy with creativity in interleaved text-and-image generation and lack a unified, agent-based tool-planning mechanism. To bridge this gap, the authors propose the Agentic Tool Planning (ATP) paradigm, which enables MLLMs to autonomously decide when, where, and how to invoke tools for generating coherent multimodal responses. Key contributions include the first agent-centric tool-planning framework tailored to interleaved text-and-image generation, the release of ATP-Bench — a high-quality, multi-intent, human-verified benchmark — and the design of MAM, a reference-free multi-agent evaluation system. Experiments across ten state-of-the-art MLLMs reveal significant deficiencies in coherent planning and tool coordination, offering clear directions and actionable insights for future research.
📝 Abstract
Interleaved text-and-image generation represents a significant frontier for Multimodal Large Language Models (MLLMs), offering a more intuitive way to convey complex information. Current paradigms rely on either image generation or retrieval augmentation, yet they typically treat the two as mutually exclusive paths, failing to unify factuality with creativity. We argue that the next milestone in this field is Agentic Tool Planning, where the model serves as a central controller that autonomously determines when, where, and which tools to invoke to produce interleaved responses for visual-critical queries. To systematically evaluate this paradigm, we introduce ATP-Bench, a novel benchmark comprising 7,702 QA pairs (including 1,592 VQA pairs) across eight categories and 25 visual-critical intents, featuring human-verified queries and ground truths. Furthermore, to evaluate agentic planning independent of end-to-end execution and changing tool backends, we propose a Multi-Agent MLLM-as-a-Judge (MAM) system. MAM evaluates tool-call precision, identifies missed opportunities for tool use, and assesses overall response quality without requiring ground-truth references. Our extensive experiments on 10 state-of-the-art MLLMs reveal that models struggle with coherent interleaved planning and exhibit significant variations in tool-use behavior, highlighting substantial room for improvement and providing actionable guidance for advancing interleaved generation. Dataset and code are available at https://github.com/Qwen-Applications/ATP-Bench.
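The abstract states that MAM judges tool-call precision and missed opportunities for tool use without ground-truth references. As a minimal illustration of how such judge verdicts could be aggregated into scores (a hypothetical sketch — `mam_scores` and its inputs are assumptions, not the paper's actual implementation):

```python
def mam_scores(call_verdicts, missed_opportunities, total_steps):
    """Aggregate per-step judge verdicts into MAM-style metrics.

    call_verdicts: list of bools, True where a judge deemed an emitted
        tool call appropriate for that step.
    missed_opportunities: count of steps where a judge concluded a tool
        should have been invoked but was not.
    total_steps: total planning steps in the interleaved response.
    """
    # Precision over the tool calls the model actually made.
    precision = sum(call_verdicts) / len(call_verdicts) if call_verdicts else 0.0
    # Fraction of steps where a useful tool invocation was skipped.
    miss_rate = missed_opportunities / total_steps if total_steps else 0.0
    return {"tool_call_precision": precision,
            "missed_opportunity_rate": miss_rate}

print(mam_scores([True, True, False], missed_opportunities=1, total_steps=5))
```

The verdicts themselves would come from MLLM judges; this sketch only shows the reference-free aggregation step, which needs no ground-truth answers.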
Problem

Research questions and friction points this paper is trying to address.

Interleaved Generation
Multimodal Large Language Models
Agentic Tool Planning
Visual-Critical Queries
Tool Use
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic Tool Planning
Interleaved Generation
Multimodal Large Language Models
ATP-Bench
MLLM-as-a-Judge
Yinuo Liu
Huazhong University of Science and Technology
AI security, Multimodal LLM
Zi Qian
Qwen Large Model Application Team, Alibaba
Heng Zhou
Jiangnan University
Multi-modal Learning, Image Processing, Computer Vision, Remote Sensing
Jiahao Zhang
Qwen Large Model Application Team, Alibaba
Yajie Zhang
Qwen Large Model Application Team, Alibaba
Zhihang Li
Kwai Inc
Computer Vision, Generative model, video/image generation, LLM
Mengyu Zhou
Microsoft Research
Data analytics, Natural Language Processing, Network Science, Human Behaviors, Mobile & Ubiquitous Computing
Erchao Zhao
Qwen Large Model Application Team, Alibaba
Xiaoxi Jiang
Qwen Large Model Application Team, Alibaba
Guanjun Jiang
Qwen Large Model Application Team, Alibaba