🤖 AI Summary
High-resolution document parsing faces a fundamental trade-off between fine-grained recognition (e.g., dense text, mathematical formulas, tables) and computational efficiency. Method: We propose a coarse-to-fine, two-stage decoupled vision-language model: Stage 1 performs global layout analysis on downsampled low-resolution images; Stage 2 executes fine-grained content recognition—text, formulas, tables—within original-resolution local regions identified by the layout. Crucially, layout and content modeling are disentangled, enabling downsampling for acceleration while preserving high-fidelity details in targeted regions. The framework integrates vision-language joint modeling, synthetic data augmentation, and adaptive region cropping. Results: Our method achieves Pareto-optimal accuracy–efficiency trade-offs, outperforming both general-purpose and domain-specific models on multiple standard benchmarks at significantly lower inference cost, and generalizes well across diverse document types and layouts.
📝 Abstract
We introduce MinerU2.5, a 1.2B-parameter document parsing vision-language model that achieves state-of-the-art recognition accuracy while maintaining exceptional computational efficiency. Our approach employs a coarse-to-fine, two-stage parsing strategy that decouples global layout analysis from local content recognition. In the first stage, the model performs efficient layout analysis on downsampled images to identify structural elements, circumventing the computational overhead of processing high-resolution inputs. In the second stage, guided by the global layout, it performs targeted content recognition on native-resolution crops extracted from the original image, preserving fine-grained details in dense text, complex formulas, and tables. To support this strategy, we developed a comprehensive data engine that generates diverse, large-scale training corpora for both pretraining and fine-tuning. MinerU2.5 achieves state-of-the-art performance on multiple benchmarks, surpassing both general-purpose and domain-specific models across a range of recognition tasks while incurring significantly lower computational overhead.
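The coarse-to-fine, two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the actual MinerU2.5 implementation: `layout_model` and `recognition_model` are hypothetical placeholders standing in for the paper's two VLM stages, and images are represented by simple width/height dicts to keep the sketch self-contained.

```python
from dataclasses import dataclass

@dataclass
class Region:
    label: str   # e.g. "text", "formula", "table"
    bbox: tuple  # (x0, y0, x1, y1) in downsampled-image coordinates

def layout_model(low_res_page):
    """Stage 1 (placeholder): global layout analysis on a downsampled page."""
    return [Region("text", (0, 0, 50, 20)), Region("table", (0, 30, 50, 60))]

def recognition_model(crop, label):
    """Stage 2 (placeholder): content recognition on a native-resolution crop."""
    return f"<{label} content from crop of size {crop['w']}x{crop['h']}>"

def parse_page(page, downsample=4):
    # Stage 1: analyze layout on a low-resolution view, avoiding the cost
    # of running the full model over the high-resolution page.
    low_res = {"w": page["w"] // downsample, "h": page["h"] // downsample}
    regions = layout_model(low_res)

    results = []
    for r in regions:
        # Map layout boxes back to native resolution and crop the original
        # image, so fine-grained detail (dense text, formulas, tables)
        # is preserved for recognition.
        x0, y0, x1, y1 = (c * downsample for c in r.bbox)
        crop = {"w": x1 - x0, "h": y1 - y0}
        results.append((r.label, recognition_model(crop, r.label)))
    return results

print(parse_page({"w": 2000, "h": 2800}))
```

The key design point mirrored here is the decoupling: only small, layout-selected regions are ever processed at native resolution, which is why accuracy on dense content can be preserved while overall inference cost stays low.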