🤖 AI Summary
Existing document layout analysis methods rely heavily on OCR, which incurs substantial computational overhead and suffers from recognition errors that compromise robustness and practicality. To address these limitations, this work proposes a purely visual framework that abandons multimodal fusion and instead leverages documents' intrinsic visual structure, spatial sensitivity, and inter-element relationships for efficient layout analysis. The approach introduces a Bidirectional Spatial Position-Guided Deformable Attention module and a Graph Refinement Classifier (GRC) built on a dynamically constructed layout graph to explicitly capture positional dependencies and contextual associations. The method sets a new state of the art among purely visual approaches on DocLayNet and, on M6Doc, outperforms a multimodal model with roughly four times more parameters (256M vs. 65M), demonstrating both high efficiency and strong robustness.
📝 Abstract
Document layout analysis aims to detect and categorize structural elements (e.g., titles, tables, figures) in scanned or digital documents. Popular methods often rely on high-quality Optical Character Recognition (OCR) to merge visual features with extracted text. This dependency introduces two major drawbacks: propagation of text recognition errors and substantial computational overhead, limiting the robustness and practical applicability of multimodal approaches. In contrast to the prevailing multimodal trend, we argue that effective layout analysis depends not on text-visual fusion, but on a deep understanding of documents' intrinsic visual structure. To this end, we propose PARL (Position-Aware Relation Learning Network), a novel OCR-free, vision-only framework that models layout through positional sensitivity and relational structure. Specifically, we first introduce a Bidirectional Spatial Position-Guided Deformable Attention module to embed explicit positional dependencies among layout elements directly into visual features. Second, we design a Graph Refinement Classifier (GRC) to refine predictions by modeling contextual relationships through a dynamically constructed layout graph. Extensive experiments show PARL achieves state-of-the-art results. It establishes a new benchmark for vision-only methods on DocLayNet and, notably, surpasses even strong multimodal models on M6Doc. Crucially, PARL (65M) is highly efficient, using roughly four times fewer parameters than large multimodal models (256M), demonstrating that sophisticated visual structure modeling can be both more efficient and robust than multimodal fusion.
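The abstract does not detail how the GRC constructs its layout graph or refines predictions; the following is a minimal illustrative sketch only, under assumed details. It builds a graph by connecting layout elements whose box centers fall within a distance threshold, then performs one message-passing step that blends each element's class logits with the mean of its neighbors' logits. The function names (`build_layout_graph`, `refine_logits`), the center-distance edge criterion, and the blending weight `alpha` are all assumptions, not the paper's actual design.

```python
import math

def build_layout_graph(boxes, radius):
    """Connect layout elements whose bounding-box centers are within `radius`.

    boxes: list of (x0, y0, x1, y1) tuples. Returns an adjacency list.
    (Assumed edge criterion for illustration; the paper's graph is dynamic.)
    """
    centers = [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in boxes]
    adj = [[] for _ in boxes]
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            if math.hypot(dx, dy) <= radius:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def refine_logits(logits, adj, alpha=0.7):
    """One message-passing step: keep `alpha` of each element's own logits
    and mix in (1 - alpha) of the mean logits of its graph neighbors.
    Isolated elements are left unchanged.
    """
    refined = []
    for i, own in enumerate(logits):
        if not adj[i]:
            refined.append(list(own))
            continue
        neigh_mean = [
            sum(logits[j][k] for j in adj[i]) / len(adj[i])
            for k in range(len(own))
        ]
        refined.append([alpha * a + (1 - alpha) * b
                        for a, b in zip(own, neigh_mean)])
    return refined
```

For example, two adjacent elements (say, a caption under a figure) pull each other's class scores toward agreement, while a distant, unconnected element keeps its original prediction untouched.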