🤖 AI Summary
Historical maps exhibit highly variable text orientations and irregular layouts, making it challenging for existing methods to accurately link fragmented textual entities (e.g., multi-word place names); this limitation stems primarily from over-reliance on linguistic features while neglecting geometric layout cues. To address this, we propose a geometry-aware multimodal text linking method built upon LayoutLMv3. Our approach introduces a geometry-aware embedding module that explicitly encodes polygonal text coordinates and spatial relationships, and incorporates a bidirectional reading-order modeling mechanism to deeply fuse visual, linguistic, and geometric features. To our knowledge, this is the first work to achieve effective synergy between spatial layout and semantic information for historical map text linking. Evaluated on the ICDAR 2024/2025 MapText competition datasets, our method significantly outperforms prior state-of-the-art approaches, demonstrating the value of integrating geometric priors into multimodal modeling for structured understanding of historical map text.
📝 Abstract
Text on historical maps provides valuable information for studies in history, economics, geography, and other related fields. Unlike structured or semi-structured documents, text on maps varies significantly in orientation, reading order, shape, and placement. Many modern methods can detect and transcribe text regions, but they struggle to effectively "link" the recognized text fragments, e.g., to determine a multi-word place name. Existing layout analysis methods model word relationships to improve text understanding in structured documents, but they primarily rely on linguistic features and neglect geometric information, which is essential for handling map text. To address these challenges, we propose LIGHT, a novel multi-modal approach that integrates linguistic, image, and geometric features for linking text on historical maps. In particular, LIGHT includes a geometry-aware embedding module that encodes the polygonal coordinates of text regions to capture polygon shapes and their relative spatial positions on an image. LIGHT unifies this geometric information with the visual and linguistic token embeddings from LayoutLMv3, a pretrained layout analysis model. LIGHT uses the fused cross-modal features to directly predict the reading-order successor of each text instance, with a bidirectional learning strategy that improves sequence robustness. Experimental results show that LIGHT outperforms existing methods on the ICDAR 2024/2025 MapText Competition data, demonstrating the effectiveness of multi-modal learning for historical map text linking.
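The geometry-aware embedding idea above can be illustrated with a minimal sketch: normalize a text region's polygon vertices to the image size (so relative position and shape are preserved), flatten them, and project with a small MLP into the same dimension as the model's token embeddings. All names, point counts, and layer sizes here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def geometry_embedding(polygon, image_size, w1, w2):
    """Toy sketch of a geometry-aware embedding (hypothetical, not LIGHT's
    exact module): normalize polygon vertex coordinates to [0, 1] relative
    to the image, flatten them, and project with a 2-layer MLP so the
    result could be added to a transformer's token embeddings."""
    pts = np.asarray(polygon, dtype=np.float64)       # (num_points, 2)
    norm = pts / np.asarray(image_size, dtype=np.float64)  # relative position
    x = norm.reshape(-1)                              # flatten (x1, y1, ..., xN, yN)
    h = np.maximum(w1 @ x, 0.0)                       # hidden layer with ReLU
    return w2 @ h                                     # geometry embedding vector

# Assumed sizes for illustration only.
rng = np.random.default_rng(0)
num_points, hidden, dim = 16, 32, 8
w1 = rng.normal(size=(hidden, 2 * num_points))
w2 = rng.normal(size=(dim, hidden))
poly = rng.uniform(0, 1000, size=(num_points, 2))     # a 16-vertex text polygon
emb = geometry_embedding(poly, (1000, 1000), w1, w2)
print(emb.shape)  # (8,)
```

In the full model, such a vector would be fused with LayoutLMv3's visual and linguistic token embeddings before the successor-prediction head.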