WildDoc: How Far Are We from Achieving Comprehensive and Robust Document Understanding in the Wild?

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing document understanding benchmarks (e.g., DocVQA, ChartQA) rely predominantly on idealized scanned or digital documents, failing to assess model robustness under real-world degradations such as illumination variation, physical distortion, and motion blur. Method: We introduce WildDoc—the first benchmark for *in-the-wild* document understanding—built from manually collected, multi-source real-world document images. For each document, we systematically capture four controlled variants: illumination, viewing angle, capture distance, and sharpness. We propose a novel evaluation paradigm featuring a multi-condition repeated-capture protocol and a cross-modal robustness assessment framework. Contribution/Results: Evaluating state-of-the-art multimodal large language models (MLLMs) on WildDoc reveals an average performance drop exceeding 40% compared to traditional benchmarks, starkly exposing their vulnerability to realistic distortions. WildDoc thus establishes a new, rigorous standard for developing and evaluating robust document understanding models.
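The repeated-capture protocol described above can be illustrated with a small scoring sketch. This is not the authors' code: the per-question correctness flags and the "consistency" rule (credit only when the model answers correctly under all four capture conditions) are illustrative assumptions, one plausible way to quantify robustness on such a benchmark.

```python
def average_accuracy(results):
    """Mean accuracy over all (question, capture) pairs."""
    flags = [ok for captures in results.values() for ok in captures]
    return sum(flags) / len(flags)

def consistency_score(results):
    """Fraction of questions answered correctly under ALL four captures."""
    return sum(all(captures) for captures in results.values()) / len(results)

# Toy example: two questions, four captures each
# (illumination, viewing angle, capture distance, sharpness).
results = {
    "q1": [True, True, True, True],    # robust across all conditions
    "q2": [True, False, True, False],  # fails under some conditions
}
print(average_accuracy(results))   # 0.75
print(consistency_score(results))  # 0.5
```

The gap between average accuracy and the consistency score is what a robustness benchmark like WildDoc is designed to expose: a model can look strong on aggregate accuracy while answering few questions reliably across all capture conditions.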

📝 Abstract
The rapid advancements in Multimodal Large Language Models (MLLMs) have significantly enhanced capabilities in Document Understanding. However, prevailing benchmarks like DocVQA and ChartQA predominantly comprise *scanned or digital* documents, inadequately reflecting the intricate challenges posed by diverse real-world scenarios, such as variable illumination and physical distortions. This paper introduces WildDoc, the inaugural benchmark designed specifically for assessing document understanding in natural environments. WildDoc incorporates a diverse set of manually captured document images reflecting real-world conditions and leverages document sources from established benchmarks to facilitate comprehensive comparisons with digital or scanned documents. Further, to rigorously evaluate model robustness, each document is captured four times under different conditions. Evaluations of state-of-the-art MLLMs on WildDoc expose substantial performance declines and underscore the models' inadequate robustness compared to traditional benchmarks, highlighting the unique challenges posed by real-world document understanding. Our project homepage is available at https://bytedance.github.io/WildDoc.
Problem

Research questions and friction points this paper is trying to address.

Assessing document understanding in natural environments
Evaluating model robustness under diverse real-world conditions
Addressing performance gaps in real-world document comprehension
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces WildDoc benchmark for real-world documents
Uses manually captured diverse document images
Evaluates models under varied conditions for robustness
An-Lan Wang
Student, Sun Yat-Sen University
Computer Vision
Jingqun Tang
ByteDance Inc.
Computer Vision, Document Intelligence, MLLM, Multimodal Generative Models
Liao Lei
ByteDance, China
Hao Feng
ByteDance, China
Qi Liu
ByteDance, China
Xiang Fei
ByteDance, China
Jinghui Lu
ByteDance Inc., School of Computer Science, University College Dublin
Natural Language Processing, Multi-Modality, LLM, Human-in-the-loop Learning
Han Wang
ByteDance, China
Weiwei Liu
ByteDance, China
Hao Liu
ByteDance, China
Yuliang Liu
Huazhong University of Science and Technology, China
Xiang Bai
Huazhong University of Science and Technology (HUST)
Computer Vision, OCR
Can Huang
ByteDance, China