Multimodal DeepResearcher: Generating Text-Chart Interleaved Reports From Scratch with Agentic Framework

📅 2025-06-03
🤖 AI Summary
Existing deep research frameworks are limited to pure text generation and lack systematic exploration of joint text–visualization generation. This paper introduces the first end-to-end, interleaved text-and-chart deep report generation task, emphasizing information-rich chart design and semantic alignment between charts and the accompanying text. Methodologically, the authors propose Formal Description of Visualization (FDV), a novel structured textual representation of charts, and develop an LLM-based multi-stage agentic framework integrating retrieval-augmented reasoning, task decomposition, and co-generative mechanisms. The contributions are threefold: (1) the first agent architecture for joint text–chart generation; (2) the open-source multimodal evaluation benchmark MultimodalReportBench; and (3) state-of-the-art performance on this benchmark, achieving an 82% overall win rate with Claude 3.7 Sonnet and significantly outperforming existing baselines.

📝 Abstract
Visualizations play a crucial role in the effective communication of concepts and information. Recent advances in reasoning and retrieval-augmented generation have enabled Large Language Models (LLMs) to perform deep research and generate comprehensive reports. Despite this progress, existing deep research frameworks primarily focus on generating text-only content, leaving the automated generation of interleaved texts and visualizations underexplored. This novel task poses key challenges in designing informative visualizations and effectively integrating them with text reports. To address these challenges, we propose Formal Description of Visualization (FDV), a structured textual representation of charts that enables LLMs to learn from and generate diverse, high-quality visualizations. Building on this representation, we introduce Multimodal DeepResearcher, an agentic framework that decomposes the task into four stages: (1) researching, (2) exemplar report textualization, (3) planning, and (4) multimodal report generation. For the evaluation of generated multimodal reports, we develop MultimodalReportBench, which contains 100 diverse topics as inputs along with 5 dedicated metrics. Extensive experiments across models and evaluation methods demonstrate the effectiveness of Multimodal DeepResearcher. Notably, utilizing the same Claude 3.7 Sonnet model, Multimodal DeepResearcher achieves an 82% overall win rate over the baseline method.
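The four-stage decomposition described in the abstract can be sketched as a simple pipeline. This is a minimal illustration, not the paper's implementation: every function name, the stubbed logic, and the interleaved-report format are assumptions; the actual agents rely on LLM calls and retrieval that are stubbed out here.

```python
# Illustrative sketch of the four-stage agentic pipeline:
# researching -> exemplar report textualization -> planning -> generation.
# All names and logic below are hypothetical stand-ins for LLM-driven agents.

def research(topic: str) -> list[str]:
    """Stage 1: gather findings on the topic (stubbed; the paper uses
    retrieval-augmented reasoning here)."""
    return [f"Key finding about {topic}"]

def textualize_exemplars(exemplars: list[dict]) -> list[str]:
    """Stage 2: turn exemplar multimodal reports into pure text, with each
    chart replaced by an FDV-style textual description."""
    return [f"{e['text']} [chart: {e['chart_fdv']}]" for e in exemplars]

def plan(findings: list[str], exemplar_texts: list[str]) -> list[dict]:
    """Stage 3: draft an outline that interleaves text sections and charts."""
    outline = [{"kind": "text", "content": f} for f in findings]
    outline.append({"kind": "chart", "content": "FDV spec placeholder"})
    return outline

def generate_report(outline: list[dict]) -> str:
    """Stage 4: realize the outline as an interleaved text-chart report."""
    parts = []
    for item in outline:
        tag = "CHART" if item["kind"] == "chart" else "TEXT"
        parts.append(f"[{tag}] {item['content']}")
    return "\n".join(parts)

exemplars = [{"text": "Intro section", "chart_fdv": "bar chart of capacity"}]
report = generate_report(plan(research("solar energy"),
                              textualize_exemplars(exemplars)))
```

The point of the sketch is the data flow: charts never appear as images inside the pipeline; they travel as textual descriptions until the final generation stage.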
Problem

Research questions and friction points this paper is trying to address.

Automating text-chart interleaved report generation
Designing informative visualizations integrated with text
Developing structured chart representations for LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured textual representation for chart generation
Agentic framework with four-stage task decomposition
Multimodal report benchmark with diverse metrics
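The "structured textual representation for chart generation" (FDV) can be pictured as a machine-readable chart description that an LLM can read and emit. The exact FDV schema is not reproduced on this page, so every field below is an illustrative assumption about what such a representation might carry.

```python
# Hypothetical FDV-style chart description. The field names are assumptions,
# not the paper's published schema.
fdv_example = {
    "chart_type": "grouped_bar",
    "title": "Renewable capacity by region, 2020-2024",
    "x": {"field": "region", "label": "Region"},
    "y": {"field": "capacity_gw", "label": "Capacity (GW)"},
    "group": "year",
    "annotations": ["Asia leads growth after 2022"],
}

def fdv_to_text(fdv: dict) -> str:
    """Flatten the structured description into one line of text that an
    LLM could consume in place of the rendered chart."""
    return (f"{fdv['chart_type']}: {fdv['title']} | "
            f"x={fdv['x']['label']}, y={fdv['y']['label']}, "
            f"grouped by {fdv['group']}")
```

Representing charts this way lets a text-only model both learn from exemplar visualizations and specify new ones, which is the role the abstract assigns to FDV.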
👥 Authors
Zhaorui Yang · State Key Lab of CAD&CG, Zhejiang University
Bo Pan · State Key Lab of CAD&CG, Zhejiang University
Han Wang · State Key Lab of CAD&CG, Zhejiang University
Yiyao Wang · State Key Lab of CAD&CG, Zhejiang University
Xingyu Liu · State Key Lab of CAD&CG, Zhejiang University
Minfeng Zhu · Zhejiang University
Bo Zhang · State Key Lab of CAD&CG, Zhejiang University
Wei Chen · State Key Lab of CAD&CG, Zhejiang University