🤖 AI Summary
Existing document parsing methods rely on axis-aligned bounding boxes, struggling with distorted or camera-captured documents and suffering from fragmented pipelines. This work proposes a unified two-stage parsing framework: the first stage jointly classifies the document type and analyzes the layout structure, while the second stage either parses camera-captured pages holistically, so full-page understanding absorbs geometric distortions, or, for digital-born documents, parses elements in parallel guided by layout anchors. The approach integrates vision-language models, layout-aware guidance, and a hybrid parsing strategy to support detection of 21 fine-grained element categories, semantic attribute extraction, and indentation-preserving code block recognition. Evaluated on OmniDocBench, the method achieves a 14.78-point overall performance gain and reduces error rates on camera-captured documents by 91%, all while maintaining efficient inference.
📝 Abstract
Document parsing has garnered widespread attention as vision-language models (VLMs) advance OCR capabilities. However, the field remains fragmented across dozens of specialized models with varying strengths, forcing users to navigate complex model selection and limiting system scalability. Moreover, existing two-stage approaches depend on axis-aligned bounding boxes for layout detection, failing to handle distorted or photographed documents effectively. To this end, we present Dolphin-v2, a two-stage document image parsing model that substantially improves upon the original Dolphin. In the first stage, Dolphin-v2 jointly performs document type classification (digital-born versus photographed) alongside layout analysis. For digital-born documents, it conducts finer-grained element detection with reading order prediction. In the second stage, we employ a hybrid parsing strategy: photographed documents are parsed holistically as complete pages to handle geometric distortions, while digital-born documents undergo element-wise parallel parsing guided by the detected layout anchors, enabling efficient content extraction. Compared with the original Dolphin, Dolphin-v2 introduces several crucial enhancements: (1) robust parsing of photographed documents via holistic page-level understanding, (2) finer-grained element detection (21 categories) with semantic attribute extraction such as author information and document metadata, and (3) code block recognition with indentation preservation, which existing systems typically lack. Comprehensive evaluations are conducted on DocPTBench, OmniDocBench, and our self-constructed RealDoc-160 benchmark. The results demonstrate substantial improvements: +14.78 points overall on the challenging OmniDocBench and 91% error reduction on photographed documents, while maintaining efficient inference through parallel processing.
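The hybrid second-stage strategy described above can be sketched as a simple dispatcher: holistic full-page parsing for photographed documents, and anchor-guided parallel element parsing for digital-born ones. This is a minimal illustrative sketch, not the paper's implementation; the `stage1`/`stage2` callables, `LayoutElement` fields, `crop` helper, and prompt strings are all hypothetical stand-ins.

```python
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor

@dataclass
class LayoutElement:
    category: str       # one of the fine-grained classes, e.g. "title", "code"
    bbox: tuple         # (x0, y0, x1, y1) layout anchor from stage 1
    reading_order: int  # predicted reading-order index

def crop(image, bbox):
    """Hypothetical crop; images are plain strings here for illustration."""
    return f"{image}[{bbox}]"

def parse_document(page_image, stage1, stage2):
    # Stage 1: joint document-type classification and layout analysis.
    doc_type, elements = stage1(page_image)

    if doc_type == "photographed":
        # Photographed pages are parsed holistically as complete pages,
        # so geometric distortion is handled by full-page understanding.
        return stage2(page_image, prompt="parse_full_page")

    # Digital-born pages: element-wise parallel parsing guided by the
    # detected layout anchors, stitched back in predicted reading order.
    ordered = sorted(elements, key=lambda el: el.reading_order)
    with ThreadPoolExecutor() as pool:
        parts = pool.map(
            lambda el: stage2(crop(page_image, el.bbox), prompt=el.category),
            ordered,
        )
    return "\n".join(parts)
```

Keeping the branch decision in stage 1 lets the expensive stage-2 model run once per page for distorted inputs but fan out across elements for clean ones, which is where the parallel-inference efficiency claim comes from.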