PARL: Position-Aware Relation Learning Network for Document Layout Analysis

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing document layout analysis methods rely heavily on OCR, which not only incurs substantial computational overhead but also suffers from recognition errors that compromise robustness and practicality. To address these limitations, this work proposes the first purely visual framework that abandons multimodal fusion paradigms and instead leverages the intrinsic visual structure, spatial sensitivity, and inter-element relationships within documents for efficient layout analysis. The approach introduces a bidirectional spatially guided deformable attention module and a dynamic Graph Refinement Classifier (GRC) to explicitly capture positional dependencies and contextual associations. The method sets a new state-of-the-art among purely visual approaches on DocLayNet and outperforms a multimodal model with four times more parameters (256M vs. 65M) on M6Doc, demonstrating both high efficiency and strong robustness.
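The summary's "bidirectional spatially guided" attention can be illustrated with a minimal sketch: pairwise attention over element features that is biased by the elements' spatial offsets, so nearby layout elements attend to each other more strongly. This is a simplified, assumption-laden stand-in (function names `spatial_bias` and `position_guided_attention`, the squared-distance bias, and the temperature `tau` are all illustrative); the paper's actual module is a deformable attention mechanism, not plain softmax attention.

```python
import numpy as np

def spatial_bias(boxes):
    """Pairwise spatial bias from box centers: negative squared distance,
    so closer elements receive a larger attention score (illustrative
    heuristic, not the paper's deformable sampling)."""
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0     # (N, 2) box centers
    diff = centers[:, None, :] - centers[None, :, :]  # (N, N, 2) pairwise offsets
    return -np.sum(diff ** 2, axis=-1)                # (N, N) bias matrix

def position_guided_attention(feats, boxes, tau=1.0):
    """Content attention (scaled dot product) plus spatial bias,
    softmax-normalized over all elements."""
    scores = feats @ feats.T / np.sqrt(feats.shape[1]) + spatial_bias(boxes) / tau
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ feats                            # (N, D) position-aware features
```

With this bias, a caption box next to a figure pools features mostly from that figure rather than from a distant footer, which is the kind of positional dependency the summary describes.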

📝 Abstract
Document layout analysis aims to detect and categorize structural elements (e.g., titles, tables, figures) in scanned or digital documents. Popular methods often rely on high-quality Optical Character Recognition (OCR) to merge visual features with extracted text. This dependency introduces two major drawbacks: propagation of text recognition errors and substantial computational overhead, limiting the robustness and practical applicability of multimodal approaches. In contrast to the prevailing multimodal trend, we argue that effective layout analysis depends not on text-visual fusion, but on a deep understanding of documents' intrinsic visual structure. To this end, we propose PARL (Position-Aware Relation Learning Network), a novel OCR-free, vision-only framework that models layout through positional sensitivity and relational structure. Specifically, we first introduce a Bidirectional Spatial Position-Guided Deformable Attention module to embed explicit positional dependencies among layout elements directly into visual features. Second, we design a Graph Refinement Classifier (GRC) to refine predictions by modeling contextual relationships through a dynamically constructed layout graph. Extensive experiments show PARL achieves state-of-the-art results. It establishes a new benchmark for vision-only methods on DocLayNet and, notably, surpasses even strong multimodal models on M6Doc. Crucially, PARL (65M) is highly efficient, using roughly four times fewer parameters than large multimodal models (256M), demonstrating that sophisticated visual structure modeling can be both more efficient and robust than multimodal fusion.
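The Graph Refinement Classifier idea — refining per-element predictions through a dynamically constructed layout graph — can be sketched as follows. This is a minimal illustration under stated assumptions: the k-nearest-neighbor graph construction, the mean-neighbor message passing, and the blend weight `alpha` are hypothetical simplifications, not the paper's actual GRC architecture.

```python
import numpy as np

def build_layout_graph(boxes, k=2):
    """Dynamically connect each layout element to its k nearest neighbors
    by box-center distance (hypothetical construction; the paper's graph
    may use different edges)."""
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0
    dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)          # no self-edges
    adj = np.zeros_like(dist)
    for i in range(len(boxes)):
        adj[i, np.argsort(dist[i])[:k]] = 1.0
    return adj                              # (N, N) adjacency matrix

def refine_logits(logits, adj, alpha=0.5):
    """One round of mean-neighbor message passing: blend each element's
    class logits with the average logits of its graph neighbors."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    neighbor_mean = adj @ logits / deg
    return (1 - alpha) * logits + alpha * neighbor_mean
```

The intuition matches the abstract: an element whose initial classification disagrees with its spatial context (say, a lone "title" prediction surrounded by table cells) is pulled toward a contextually consistent label by its neighbors.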
Problem

Research questions and friction points this paper is trying to address.

Document Layout Analysis
OCR Dependency
Multimodal Fusion
Visual Structure Understanding
Computational Overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

OCR-free
Position-Aware Attention
Graph Refinement
Document Layout Analysis
Vision-only
Fuyuan Liu
Unisound AI Technology Co.Ltd
Dianyu Yu
Unisound AI Technology Co.Ltd, Beihang University
He Ren
Applied Materials, Inc
Nayu Liu
School of Computer Science and Technology, Tiangong University
Xiaomian Kang
MAIS, Institute of Automation, CAS
Delai Qiu
Unisound AI Technology Co.Ltd
Fa Zhang
Professor, Beijing Institute of Technology
Bioinformatics; Bio-Medical Image Processing; High Performance Computing
Genpeng Zhen
Unisound AI Technology Co.Ltd
Shengping Liu
Unisound AI Technology Co.Ltd
Jiaen Liang
Unisound AI Technology Co.Ltd
Wei Huang
Unisound AI Technology Co.Ltd
Yining Wang
NLP Researcher, Unisound
Natural Language Processing; Machine Translation
Junnan Zhu
Institute of Automation Chinese Academy of Sciences
Natural Language Processing