SimpleDoc: Multi-Modal Document Understanding with Dual-Cue Page Retrieval and Iterative Refinement

📅 2025-06-16
🤖 AI Summary
DocVQA requires precise evidence localization and answer generation across multi-page, multimodal documents (text, images, tables), yet existing RAG methods retrieve inefficiently and pull in many redundant pages. This paper proposes a lightweight, efficient framework: first, coarse-grained candidate-page filtering using Vision-Language Model (VLM) embeddings; second, fine-grained filtering and re-ranking that incorporates page-level summaries, forming an "embedding + summary" dual-cue retrieval mechanism. A single VLM-based reasoner agent then iteratively invokes this retriever, dynamically expanding its working memory until it can generate a high-confidence answer. The core innovations are the dual-cue retrieval design and the iterative working-memory update strategy. Evaluated on four mainstream DocVQA benchmarks, the method improves answer accuracy by 3.2% on average while retrieving significantly fewer pages.
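The two-stage retrieval described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the embeddings are fixed vectors scored by cosine similarity, and the summary-based re-ranker is a simple keyword-overlap stand-in for the VLM-based filtering SimpleDoc actually uses. All names (`dual_cue_retrieve`, the `pages` records) are hypothetical.

```python
# Hedged sketch of "embedding + summary" dual-cue retrieval.
# Stage 1: coarse top-k by embedding similarity (toy cosine over fixed vectors).
# Stage 2: re-rank survivors using page-level summaries (toy keyword overlap,
# standing in for the VLM re-ranker described in the paper).
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def dual_cue_retrieve(query_vec, query_terms, pages, k_coarse=3, k_final=2):
    # Stage 1: coarse candidate filtering by embedding similarity.
    coarse = sorted(pages, key=lambda p: cosine(query_vec, p["emb"]),
                    reverse=True)[:k_coarse]
    # Stage 2: fine-grained re-ranking by summary relevance.
    reranked = sorted(
        coarse,
        key=lambda p: len(query_terms & set(p["summary"].lower().split())),
        reverse=True,
    )
    return [p["id"] for p in reranked[:k_final]]

pages = [
    {"id": 1, "emb": [0.9, 0.1], "summary": "Table of quarterly revenue figures"},
    {"id": 2, "emb": [0.8, 0.2], "summary": "Chart comparing revenue by region"},
    {"id": 3, "emb": [0.1, 0.9], "summary": "Appendix with legal disclaimers"},
]
print(dual_cue_retrieve([1.0, 0.0], {"revenue", "figures"}, pages))  # → [1, 2]
```

The point of the second stage is that embedding similarity alone ranks pages 1 and 2 almost identically, while the summary cue separates them by topical relevance.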

📝 Abstract
Document Visual Question Answering (DocVQA) is a practical yet challenging task, which is to ask questions based on documents while referring to multiple pages and different modalities of information, e.g., images and tables. To handle multi-modality, recent methods follow a similar Retrieval-Augmented Generation (RAG) pipeline, but use a Vision-Language Model (VLM)-based embedding model to embed and retrieve relevant pages as images, and generate answers with VLMs that can accept images as input. In this paper, we introduce SimpleDoc, a lightweight yet powerful retrieval-augmented framework for DocVQA. It boosts evidence-page gathering by first retrieving candidates through embedding similarity and then filtering and re-ranking these candidates based on page summaries. A single VLM-based reasoner agent repeatedly invokes this dual-cue retriever, iteratively pulling fresh pages into a working memory until the question is confidently answered. SimpleDoc outperforms previous baselines by 3.2% on average across 4 DocVQA datasets while retrieving far fewer pages. Our code is available at https://github.com/ag2ai/SimpleDoc.
Problem

Research questions and friction points this paper is trying to address.

Handles multi-modal document understanding with dual-cue retrieval
Improves accuracy in Document Visual Question Answering (DocVQA)
Reduces retrieved pages while boosting performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-cue page retrieval with embedding and summaries
Iterative refinement using VLM-based reasoner agent
Lightweight framework for multi-modal document understanding
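The iterative-refinement loop listed above can be sketched as follows. This is a hedged toy version, assuming a retriever and a reasoner passed in as callables: here the reasoner's "confidence" is a simple threshold on how many evidence pages are in memory, whereas SimpleDoc relies on a VLM reasoner's own judgment. All names (`iterative_answer`, the lambdas) are hypothetical.

```python
# Hedged sketch of the iterative working-memory loop: the agent repeatedly
# invokes the retriever, adds unseen pages to its working memory, and stops
# once the reasoner is confident or retrieval yields nothing new.
def iterative_answer(question, retrieve, reason, max_rounds=4):
    memory = []  # working memory of page ids gathered so far
    answer = None
    for _ in range(max_rounds):
        fresh = [p for p in retrieve(question, exclude=memory) if p not in memory]
        memory.extend(fresh)
        answer, confident = reason(question, memory)
        if confident or not fresh:  # stop when sure, or when retrieval is exhausted
            break
    return answer, memory

# Toy collaborators: retrieval returns up to two unseen pages per round;
# the reasoner declares confidence once three evidence pages are in memory.
evidence = [4, 7, 9, 12]
retrieve = lambda q, exclude: [p for p in evidence if p not in exclude][:2]
reason = lambda q, mem: (f"answer from pages {mem}", len(mem) >= 3)

ans, mem = iterative_answer("What is Q3 revenue?", retrieve, reason)
print(mem)  # → [4, 7, 9, 12]
```

Excluding already-seen pages each round is what lets the loop "pull fresh pages" into memory instead of re-retrieving the same top candidates.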
Chelsi Jain
Oregon State University
Yiran Wu
Pennsylvania State University
Yifan Zeng
PhD Student, Oregon State University
Large Language Model · Agentic AI · Reinforcement Learning · Deep Learning
Jiale Liu
Pennsylvania State University
Shengyu Dai
Johnson & Johnson
Zhenwen Shao
Johnson & Johnson
Qingyun Wu
The Pennsylvania State University
Agentic AI
Huazheng Wang
Oregon State University, AG2AI, Inc.