PP-DocBee: Improving Multimodal Document Understanding Through a Bag of Tricks

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the growing need for fast and accurate parsing of document images, this paper introduces PP-DocBee, a multimodal large language model for end-to-end document image understanding. Methodologically: (1) a data synthesis strategy tailored to document scenarios builds a diverse dataset, improving generalization and mitigating the scarcity of annotated real data; (2) training combines dynamic proportional sampling, data preprocessing, and OCR post-processing strategies. In extensive evaluations, PP-DocBee achieves state-of-the-art results on English document understanding benchmarks and outperforms existing open-source and commercial models on Chinese document understanding.

Technology Category

Application Category

📝 Abstract
With the rapid advancement of digitalization, various document images are being applied more extensively in production and daily life, and there is an increasingly urgent need for fast and accurate parsing of the content in document images. Therefore, this report presents PP-DocBee, a novel multimodal large language model designed for end-to-end document image understanding. First, we develop a data synthesis strategy tailored to document scenarios in which we build a diverse dataset to improve the model generalization. Then, we apply a few training techniques, including dynamic proportional sampling, data preprocessing, and OCR postprocessing strategies. Extensive evaluations demonstrate the superior performance of PP-DocBee, achieving state-of-the-art results on English document understanding benchmarks and even outperforming existing open source and commercial models in Chinese document understanding. The source code and pre-trained models are publicly available at https://github.com/PaddlePaddle/PaddleMIX.
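The abstract lists OCR post-processing among the training tricks but gives no details here. Purely as an illustrative assumption (not the authors' implementation), one common post-processing pattern for document VQA is to snap a generated answer back to the nearest OCR-extracted token when the two differ by only a small edit distance:

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def snap_to_ocr(answer: str, ocr_tokens: list[str], max_dist: int = 1) -> str:
    """Replace the model's answer with the closest OCR token when they are
    within `max_dist` edits; otherwise keep the answer unchanged.
    `snap_to_ocr` is a hypothetical helper, not from the paper."""
    best = min(ocr_tokens, key=lambda t: edit_distance(answer.lower(), t.lower()))
    return best if edit_distance(answer.lower(), best.lower()) <= max_dist else answer

# A near-miss answer is corrected to the OCR token; unrelated answers pass through.
print(snap_to_ocr("lnvoice", ["Invoice", "Total", "2023"]))  # -> Invoice
print(snap_to_ocr("hello", ["Invoice", "Total", "2023"]))    # -> hello
```

This kind of rule leverages the fact that document answers are usually spans of recognized text, so small generation errors can be repaired against the OCR output.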
Problem

Research questions and friction points this paper is trying to address.

Document images are used increasingly widely, creating an urgent need for fast, accurate parsing of their content.
Scarcity of diverse document-scenario training data limits model generalization.
Existing open-source and commercial models fall short on Chinese document understanding.
Innovation

Methods, ideas, or system contributions that make the work stand out.

PP-DocBee, a multimodal large language model for end-to-end document image understanding
Document-tailored data synthesis strategy for improved generalization
Dynamic proportional sampling, data preprocessing, and OCR post-processing techniques
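The page does not describe how dynamic proportional sampling is scheduled, so the sketch below is a minimal assumption of the general idea: each training step draws from one of several data pools (e.g., real vs. synthetic documents) with configurable probabilities. The pool names and the 30/70 split are illustrative, not the authors' settings.

```python
import random

def dynamic_proportional_sampler(datasets, ratios, steps, seed=0):
    """Yield (pool_name, example) pairs for `steps` draws, choosing the pool
    each step with probability proportional to `ratios` (renormalized)."""
    rng = random.Random(seed)
    names = list(datasets)
    total = sum(ratios[n] for n in names)
    weights = [ratios[n] / total for n in names]
    for _ in range(steps):
        name = rng.choices(names, weights=weights, k=1)[0]
        pool = datasets[name]
        yield name, pool[rng.randrange(len(pool))]

# Toy pools standing in for real and synthesized document-QA examples.
datasets = {
    "real_docs": [f"real-{i}" for i in range(100)],
    "synthetic_docs": [f"synth-{i}" for i in range(100)],
}
ratios = {"real_docs": 0.3, "synthetic_docs": 0.7}

counts = {"real_docs": 0, "synthetic_docs": 0}
for name, _ in dynamic_proportional_sampler(datasets, ratios, steps=1000):
    counts[name] += 1
print(counts)  # empirical split is close to the 30/70 target
```

Mixing pools by ratio rather than concatenating them lets a small set of real documents keep a fixed share of each batch even when synthetic data dominates in volume.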
👥 Authors

Feng Ni (PaddlePaddle Team, Baidu Inc.)
Kui Huang (Baidu Inc.)
Yao Lu (PaddlePaddle Team, Baidu Inc.)
Wenyu Lv (PaddlePaddle Team, Baidu Inc.)
Guanzhong Wang (PaddlePaddle Team, Baidu Inc.)
Zeyu Chen (Peking University, School of Basic Medical Sciences)
Yi Liu (PaddlePaddle Team, Baidu Inc.)