🤖 AI Summary
Existing multimodal large language models (MLLMs) struggle to balance factual accuracy with creativity in interleaved text-and-image generation, and they lack a unified agent-based tool-planning mechanism. To bridge this gap, we propose the Agentic Tool Planning (ATP) paradigm, which enables MLLMs to autonomously decide when, where, and how to invoke tools when generating coherent multimodal responses. Our key contributions are the first agent-centric tool-planning framework tailored to interleaved text-and-image generation, the release of ATP-Bench—a high-quality, multi-intent, human-verified benchmark—and the design of MAM, a reference-free multi-agent evaluation system. Experiments across ten state-of-the-art MLLMs reveal significant deficiencies in coherent planning and tool coordination, offering clear directions and actionable insights for future research.
📝 Abstract
Interleaved text-and-image generation represents a significant frontier for Multimodal Large Language Models (MLLMs), offering a more intuitive way to convey complex information. Current paradigms rely on either image generation or retrieval augmentation, yet they typically treat the two as mutually exclusive paths, failing to unify factuality with creativity. We argue that the next milestone in this field is Agentic Tool Planning, in which the model serves as a central controller that autonomously determines when, where, and which tools to invoke to produce interleaved responses for visual-critical queries. To systematically evaluate this paradigm, we introduce ATP-Bench, a novel benchmark comprising 7,702 QA pairs (including 1,592 VQA pairs) across eight categories and 25 visual-critical intents, featuring human-verified queries and ground truths. Furthermore, to evaluate agentic planning independently of end-to-end execution and changing tool backends, we propose a Multi-Agent MLLM-as-a-Judge (MAM) system. MAM evaluates tool-call precision, identifies missed opportunities for tool use, and assesses overall response quality without requiring ground-truth references. Our extensive experiments on 10 state-of-the-art MLLMs reveal that models struggle with coherent interleaved planning and exhibit significant variation in tool-use behavior, highlighting substantial room for improvement and providing actionable guidance for advancing interleaved generation. Dataset and code are available at https://github.com/Qwen-Applications/ATP-Bench.