🤖 AI Summary
Existing Deep Research systems struggle to process multimodal documents—those containing figures, tables, charts, and mathematical formulas—because they lack visual-semantic preservation, structure-aware chunking, and adaptive cross-modal retrieval. This paper introduces Doc-Researcher, a unified multimodal framework tailored for deep research, integrating layout-aware parsing, joint image-text embedding, multi-granularity dynamic retrieval, and multi-agent collaborative reasoning to answer complex, multi-document queries. The contributions are two-fold: (1) M4DocBench, the first multimodal deep research benchmark covering multi-hop reasoning, multi-document grounding, and multi-turn interaction; (2) on M4DocBench, the proposed framework achieves 50.6% accuracy—3.4× better than current state-of-the-art methods—demonstrating the efficacy of deep multimodal parsing and cross-modal collaborative reasoning.
📝 Abstract
Deep Research systems have revolutionized how LLMs solve complex questions through iterative reasoning and evidence gathering. However, current systems remain fundamentally constrained to textual web data, overlooking the vast knowledge embedded in multimodal documents. Processing such documents demands sophisticated parsing to preserve visual semantics (figures, tables, charts, and equations), intelligent chunking to maintain structural coherence, and adaptive retrieval across modalities—capabilities absent in existing systems. In response, we present Doc-Researcher, a unified system that bridges this gap through three integrated components: (i) deep multimodal parsing that preserves layout structure and visual semantics while creating multi-granular representations from chunk to document level, (ii) a systematic retrieval architecture supporting text-only, vision-only, and hybrid paradigms with dynamic granularity selection, and (iii) iterative multi-agent workflows that decompose complex queries, progressively accumulate evidence, and synthesize comprehensive answers across documents and modalities. To enable rigorous evaluation, we introduce M4DocBench, the first benchmark for Multi-modal, Multi-hop, Multi-document, and Multi-turn deep research. Featuring 158 expert-annotated questions with complete evidence chains across 304 documents, M4DocBench tests capabilities that existing benchmarks cannot assess. Experiments demonstrate that Doc-Researcher achieves 50.6% accuracy, 3.4× better than state-of-the-art baselines, validating that effective document research requires not just better retrieval but fundamentally deeper parsing that preserves multimodal integrity and supports iterative research. Our work establishes a new paradigm for conducting deep research on multimodal document collections.