Efficient Document Parsing via Parallel Token Prediction

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of autoregressive decoding in vision-language models for document parsing by introducing, for the first time, a parallel token prediction mechanism. The authors propose a plug-and-play, model-agnostic approach that inserts learnable tokens into the input sequence and employs a tailored training objective to enable parallel multi-token generation. To support effective training, they also construct a high-quality, large-scale data generation pipeline for document parsing. Evaluated on OmniDocBench and olmOCR-bench, the method achieves a 1.6×–2.2× decoding speedup, substantially improves sample efficiency, effectively mitigates hallucination, and generalizes well across diverse document parsing tasks.

📝 Abstract
Document parsing, as a fundamental yet crucial vision task, is being revolutionized by vision-language models (VLMs). However, the autoregressive (AR) decoding inherent to VLMs creates a significant bottleneck, severely limiting parsing speed. In this paper, we propose Parallel-Token Prediction (PTP), a pluggable, model-agnostic, and simple-yet-effective method that enables VLMs to generate multiple future tokens in parallel with improved sample efficiency. Specifically, we insert learnable tokens into the input sequence and design corresponding training objectives to equip the model with parallel decoding capabilities for document parsing. Furthermore, to support effective training, we develop a comprehensive data generation pipeline that efficiently produces large-scale, high-quality document parsing training data for VLMs. Extensive experiments on OmniDocBench and olmOCR-bench demonstrate that our method not only significantly improves decoding speed (1.6×–2.2×) but also reduces model hallucinations and exhibits strong generalization abilities.
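The core idea of the abstract can be sketched in a few lines: instead of one forward pass per generated token, a trained model appends K learnable placeholder tokens so that a single pass yields K future tokens at once. The toy "model", the placeholder id, and K below are illustrative assumptions, not the paper's actual implementation; the point is only the reduced pass count.

```python
import numpy as np

VOCAB = 16
PAD = 0   # assumed id of the learnable placeholder token
K = 4     # tokens predicted per forward pass (assumption)

rng = np.random.default_rng(0)
W = rng.normal(size=(VOCAB, VOCAB))  # toy per-position projection

def forward(seq):
    """Toy stand-in for a VLM decoder: one-hot embed, project to logits."""
    onehot = np.eye(VOCAB)[np.array(seq)]   # (len, VOCAB)
    return onehot @ W                        # (len, VOCAB) logits

def decode_ar(prompt, n_tokens):
    """Standard autoregressive decoding: one token per forward pass."""
    seq, passes = list(prompt), 0
    for _ in range(n_tokens):
        logits = forward(seq); passes += 1
        seq.append(int(logits[-1].argmax()))
    return seq[len(prompt):], passes

def decode_ptp(prompt, n_tokens):
    """PTP-style decoding: append K placeholders, read K tokens per pass."""
    seq, passes = list(prompt), 0
    while len(seq) - len(prompt) < n_tokens:
        logits = forward(seq + [PAD] * K); passes += 1
        seq.extend(int(t) for t in logits[-K:].argmax(axis=-1))
    return seq[len(prompt):len(prompt) + n_tokens], passes

ar_out, ar_passes = decode_ar([1, 2, 3], 8)
ptp_out, ptp_passes = decode_ptp([1, 2, 3], 8)
print(ar_passes, ptp_passes)  # 8 forward passes vs. 2
```

The speedup in the paper comes from exactly this ratio: decoding latency scales with forward passes, so predicting K tokens per pass cuts passes by roughly K (the real method trains objectives so the parallel predictions stay accurate).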
Problem

Research questions and friction points this paper is trying to address.

document parsing
vision-language models
autoregressive decoding
decoding speed
token generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel-Token Prediction
vision-language models
document parsing
parallel decoding
data generation pipeline
👥 Authors
Lei Li (Platform and Content Group, Tencent)
Ze Zhao (Shanghai Jiao Tong University)
Meng Li (China University of Mining and Technology, Mining Engineering)
Zhongwang Lun (Platform and Content Group, Tencent)
Yi Yuan (NetEase Fuxi AI Lab; deep learning, computer vision)
Xingjing Lu (Platform and Content Group, Tencent)
Zheng Wei (Platform and Content Group, Tencent)
Jiang Bian (Platform and Content Group, Tencent)
Zang Li (Platform and Content Group, Tencent)