🤖 AI Summary
This work addresses the high computational cost and inefficiency of high-resolution document image parsing, which is largely caused by visual redundancy. To this end, we propose PaddleOCR-VL, a coarse-to-fine vision-language architecture featuring a lightweight Valid Region Focus Module (VRFM). The VRFM dynamically directs the model's attention to semantically critical regions by jointly performing spatial localization and contextual relationship prediction, thereby effectively suppressing background redundancy. Built upon a compact 0.9B-parameter vision-language model (PaddleOCR-VL-0.9B), our approach significantly reduces the number of visual tokens while streamlining the inference pipeline. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on both page-level and element-level document parsing tasks, delivering high accuracy with fewer parameters and lower computational overhead.
📝 Abstract
Document parsing is a fine-grained task where image resolution significantly impacts performance. While advanced research leveraging vision-language models benefits from high-resolution input to boost model performance, this often leads to a quadratic increase in the number of vision tokens and significantly raises computational costs. We attribute this inefficiency to substantial redundancy in the visual regions of document images, such as backgrounds. To tackle this, we propose PaddleOCR-VL, a novel coarse-to-fine architecture that focuses on semantically relevant regions while suppressing redundant ones, thereby improving both efficiency and performance. Specifically, we introduce a lightweight Valid Region Focus Module (VRFM), which leverages localization and contextual relationship prediction capabilities to identify valid vision tokens. Subsequently, we design and train a compact yet powerful 0.9B-parameter vision-language model (PaddleOCR-VL-0.9B) to perform detailed recognition, guided by VRFM outputs so that it avoids directly processing the entire large image. Extensive experiments demonstrate that PaddleOCR-VL achieves state-of-the-art performance in both page-level parsing and element-level recognition. It significantly outperforms existing solutions, exhibits strong competitiveness against top-tier VLMs, and delivers fast inference while utilizing substantially fewer vision tokens and parameters, highlighting the effectiveness of targeted coarse-to-fine parsing for accurate and efficient document understanding. The source code and models are publicly available at https://github.com/PaddlePaddle/PaddleOCR.
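The coarse-to-fine flow described above can be sketched as follows. This is a minimal illustrative mock-up under assumptions, not PaddleOCR-VL's actual API: the names `Region`, `vrfm`, `recognize`, and `parse_page`, and the dictionary-style page representation, are all hypothetical stand-ins. The coarse stage filters out redundant (background) regions and orders the rest by predicted contextual relationships; the fine stage then recognizes only the retained crops, so the full high-resolution page is never tokenized wholesale.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Region:
    bbox: Tuple[int, int, int, int]  # (x0, y0, x1, y1) in page coordinates
    label: str                       # e.g. "text", "table", "background"
    order: int                       # reading-order index from relationship prediction

def vrfm(regions: List[Region]) -> List[Region]:
    """Coarse stage (hypothetical): keep only semantically valid regions,
    suppressing background, and sort them by predicted contextual order."""
    valid = [r for r in regions if r.label != "background"]
    return sorted(valid, key=lambda r: r.order)

def recognize(region: Region) -> str:
    """Fine stage stand-in: the compact 0.9B VLM would recognize the
    cropped region here; we return a placeholder string instead."""
    return f"<{region.label}@{region.bbox}>"

def parse_page(regions: List[Region]) -> List[str]:
    # Only valid crops reach the recognizer, keeping the vision-token
    # count small relative to processing the entire page.
    return [recognize(r) for r in vrfm(regions)]

page = [
    Region((0, 0, 800, 1000), "background", -1),
    Region((50, 400, 750, 600), "table", 1),
    Region((50, 100, 750, 300), "text", 0),
]
print(parse_page(page))  # background dropped, text before table
```

In this toy example the background region is discarded and the remaining regions are emitted in reading order, mirroring how the VRFM's outputs guide the recognizer.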