🤖 AI Summary
To address the low efficiency and high computational cost of recognizing complex layout elements (e.g., text, tables, formulas, charts) in multilingual document parsing, this paper proposes a highly compact vision-language model. The method integrates a NaViT-style dynamic-resolution visual encoder with the lightweight ERNIE-4.5-0.3B language model, and augments training with multi-task learning and knowledge distillation to achieve efficient multimodal understanding under strict parameter constraints. The model supports 109 languages and achieves state-of-the-art performance on both page-level parsing and layout element recognition. It significantly outperforms existing methods on public and internal benchmarks, while exhibiting fast inference speed and low GPU memory consumption—demonstrating strong suitability for large-scale deployment in real-world scenarios.
📝 Abstract
In this report, we propose PaddleOCR-VL, a state-of-the-art (SOTA) and resource-efficient model tailored for document parsing. Its core component is PaddleOCR-VL-0.9B, a compact yet powerful vision-language model (VLM) that integrates a NaViT-style dynamic-resolution visual encoder with the ERNIE-4.5-0.3B language model to enable accurate element recognition. This model efficiently supports 109 languages and excels in recognizing complex elements (e.g., text, tables, formulas, and charts), while maintaining minimal resource consumption. In comprehensive evaluations on widely used public benchmarks and in-house benchmarks, PaddleOCR-VL achieves SOTA performance in both page-level document parsing and element-level recognition. It significantly outperforms existing solutions, exhibits strong competitiveness against top-tier VLMs, and delivers fast inference speeds. These strengths make it highly suitable for practical deployment in real-world scenarios.
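The key architectural idea above is a NaViT-style dynamic-resolution encoder: instead of resizing every page image to a fixed square, the image is tokenized at (near) native resolution and aspect ratio, with the patch grid scaled down only when it exceeds a token budget. A minimal sketch of that patch-budgeting logic, assuming a hypothetical patch size and token budget (the actual PaddleOCR-VL implementation is not public in this abstract, so all names and numbers here are illustrative):

```python
import math

def dynamic_patch_grid(width: int, height: int,
                       patch: int = 14, max_patches: int = 1024):
    """NaViT-style grid sizing: preserve aspect ratio, cap total patches.

    Returns (rows, cols) of the patch grid for an image of the given
    pixel size. If the native grid exceeds the budget, both dimensions
    are scaled down uniformly so rows * cols <= max_patches.
    """
    cols = math.ceil(width / patch)
    rows = math.ceil(height / patch)
    if rows * cols > max_patches:
        # Uniform down-scaling keeps the aspect ratio of the page.
        scale = math.sqrt(max_patches / (rows * cols))
        cols = max(1, math.floor(cols * scale))
        rows = max(1, math.floor(rows * scale))
    return rows, cols

# A small scanned receipt fits at native resolution...
print(dynamic_patch_grid(280, 140))    # grid for a 280x140 image
# ...while a full A4 page scan is scaled to fit the token budget.
rows, cols = dynamic_patch_grid(2480, 3508)
print(rows, cols, rows * cols)
```

The resulting variable-length patch sequence is what the language model consumes; this is what lets a sub-1B model handle dense document layouts without the distortion a fixed square resize would introduce.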