Doc-Researcher: A Unified System for Multimodal Document Parsing and Deep Research

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Deep Research systems struggle to process multimodal documents—those containing figures, tables, and mathematical formulas—because they lack visual-semantic preservation, structure-aware chunking, and cross-modal adaptive retrieval. This paper introduces the first unified multimodal framework tailored for deep research, integrating layout-aware parsing, joint image-text embedding, multi-granularity dynamic retrieval, and multi-agent collaborative reasoning to enable precise responses to complex, multi-document queries. The contributions are two-fold: (1) M4DocBench, the first multimodal deep research benchmark supporting multi-hop reasoning, multi-document grounding, and multi-turn interaction; (2) on M4DocBench, the framework achieves 50.6% accuracy—3.4× higher than current state-of-the-art methods—demonstrating the efficacy of deep multimodal parsing and cross-modal collaborative reasoning.

📝 Abstract
Deep Research systems have revolutionized how LLMs solve complex questions through iterative reasoning and evidence gathering. However, current systems remain fundamentally constrained to textual web data, overlooking the vast knowledge embedded in multimodal documents. Processing such documents demands sophisticated parsing to preserve visual semantics (figures, tables, charts, and equations), intelligent chunking to maintain structural coherence, and adaptive retrieval across modalities—capabilities absent in existing systems. In response, we present Doc-Researcher, a unified system that bridges this gap through three integrated components: (i) deep multimodal parsing that preserves layout structure and visual semantics while creating multi-granular representations from chunk to document level, (ii) a systematic retrieval architecture supporting text-only, vision-only, and hybrid paradigms with dynamic granularity selection, and (iii) iterative multi-agent workflows that decompose complex queries, progressively accumulate evidence, and synthesize comprehensive answers across documents and modalities. To enable rigorous evaluation, we introduce M4DocBench, the first benchmark for Multi-modal, Multi-hop, Multi-document, and Multi-turn deep research. Featuring 158 expert-annotated questions with complete evidence chains across 304 documents, M4DocBench tests capabilities that existing benchmarks cannot assess. Experiments demonstrate that Doc-Researcher achieves 50.6% accuracy, 3.4× better than state-of-the-art baselines, validating that effective document research requires not just better retrieval, but fundamentally deep parsing that preserves multimodal integrity and supports iterative research. Our work establishes a new paradigm for conducting deep research on multimodal document collections.
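The second component—hybrid retrieval with dynamic granularity selection—can be sketched as a score-fusion step over candidates indexed at multiple granularities. This is a minimal illustration, not Doc-Researcher's actual implementation: the class names, the linear score fusion, and the modality weighting are all assumptions.

```python
# Hypothetical sketch of hybrid retrieval with dynamic granularity selection.
# Each candidate carries both a text-side and a vision-side similarity score;
# fusion weights shift toward the vision score for visually grounded queries,
# and the best-scoring granularity per document is kept.
from dataclasses import dataclass


@dataclass
class Chunk:
    doc_id: str
    granularity: str           # "chunk", "section", or "document"
    text_score: float = 0.0    # e.g. text-embedding similarity (assumed)
    vision_score: float = 0.0  # e.g. page-image embedding similarity (assumed)


def hybrid_retrieve(candidates, query_is_visual, top_k=3):
    """Fuse text and vision scores, weighted by query modality, then keep
    the single best-scoring granularity per document (dynamic selection)."""
    w_vision = 0.7 if query_is_visual else 0.3   # assumed weighting
    ranked = sorted(
        candidates,
        key=lambda c: (1 - w_vision) * c.text_score + w_vision * c.vision_score,
        reverse=True,
    )
    seen, results = set(), []
    for c in ranked:             # first (best) hit per document wins,
        if c.doc_id not in seen:  # whatever its granularity level
            seen.add(c.doc_id)
            results.append(c)
        if len(results) == top_k:
            break
    return results
```

For a visual query, a document-level page representation can outrank a fine-grained text chunk from the same document, which is the behavior dynamic granularity selection is meant to allow.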
Problem

Research questions and friction points this paper is trying to address.

Processing multimodal documents with visual semantics preservation
Enabling adaptive retrieval across text and visual modalities
Supporting iterative evidence gathering across multi-document collections
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep multimodal parsing preserves layout structure and visual semantics
Systematic retrieval architecture supports hybrid and dynamic granularity
Iterative multi-agent workflows decompose queries and synthesize answers
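The iterative multi-agent workflow in the last bullet can be sketched as a plan-search-synthesize loop in which retrieval for each sub-question is conditioned on the evidence gathered so far. The agent interfaces below (`planner`, `searcher`, `synthesizer`) are hypothetical stand-ins for illustration, not the paper's actual agent design.

```python
# Minimal, hypothetical sketch of the iterative multi-agent workflow:
# a planner decomposes the query, a searcher accumulates evidence per
# sub-question, and a synthesizer composes the final answer.
def deep_research(query, planner, searcher, synthesizer, max_rounds=5):
    """Iteratively decompose the query, retrieve evidence, and re-plan."""
    evidence = []
    pending = planner(query, evidence)       # initial sub-questions
    rounds = 0
    while pending and rounds < max_rounds:
        sub_q = pending.pop(0)
        evidence.extend(searcher(sub_q))     # progressive evidence accumulation
        pending += planner(query, evidence)  # re-plan in light of new evidence
        rounds += 1
    return synthesizer(query, evidence)
```

Re-planning after each retrieval step is what distinguishes this loop from single-shot RAG: later sub-questions can depend on what earlier retrievals found, enabling multi-hop reasoning across documents.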
Kuicai Dong
Huawei Noah's Ark Lab, Nanyang Technological University
Natural Language Processing · Information Extraction · Information Retrieval · RAG · Recommendation
Shurui Huang
Huawei Technologies Co., Ltd.
Fangda Ye
Huawei Technologies Co., Ltd.
Wei Han
Huawei Technologies Co., Ltd.
Zhi Zhang
Huawei Technologies Co., Ltd.
Dexun Li
Singapore Management University
Reinforcement Learning · Resource Optimisation · Recommendation System
Wenjun Li
Huawei Technologies Co., Ltd.
Qu Yang
National University of Singapore
Deep Learning · Spiking Neural Network · Neuromorphic Computing
Gang Wang
Huawei Technologies Co., Ltd.
Yichao Wang
Huawei Technologies Co., Ltd.
Chen Zhang
Huawei Technologies Co., Ltd.
Yong Liu
Huawei Technologies Co., Ltd.