Hyper-Local Deformable Transformers for Text Spotting on Historical Maps

📅 2024-08-24
🏛️ Knowledge Discovery and Data Mining
📈 Citations: 3
✨ Influential: 0
🤖 AI Summary
Historical map text contains critical spatiotemporal and cultural information, yet its extraction suffers from poor method generalizability and severe scarcity of annotated training data—particularly for text with complex backgrounds, long sequences, or high-angle rotations. To address these challenges, we propose PALETTE, an end-to-end framework featuring: (i) a novel hyper-local sampling module and hyper-local positional embeddings to explicitly model boundary points and character-level features while capturing cross-instance spatial interactions; and (ii) SYNTHMAP+, a synthetic data generation pipeline that mitigates the shortage of real-world annotations. PALETTE integrates these innovations via a deformable Transformer architecture. Evaluated on two newly established historical map benchmarks, it significantly outperforms state-of-the-art methods, especially in recognizing long and highly rotated text. Deployed on over 60,000 scanned historical maps, PALETTE has generated more than 100 million high-quality text labels.

๐Ÿ“ Abstract
Text on historical maps contains valuable information providing georeferenced historical, political, and cultural contexts. However, text extraction from historical maps has been challenging due to the lack of (1) effective methods and (2) training data. Previous approaches use ad-hoc steps tailored to only specific map styles. Recent machine learning-based text spotters (e.g., for scene images) have the potential to solve these challenges because of their flexibility in supporting various types of text instances. However, these methods still face challenges in extracting precise image features for predicting every sub-component (boundary points and characters) in a text instance. This is critical because map text can be lengthy and highly rotated with complex backgrounds, posing difficulties in detecting relevant image features from a rough text region. This paper proposes PALETTE, an end-to-end text spotter for scanned historical maps of a wide variety. PALETTE introduces a novel hyper-local sampling module to explicitly learn localized image features around the target boundary points and characters of a text instance for detection and recognition. PALETTE also enables hyper-local positional embeddings to learn spatial interactions between boundary points and characters within and across text instances. In addition, this paper presents a novel approach to automatically generate synthetic map images, SYNTHMAP+, for training text spotters for historical maps. The experiment shows that PALETTE with SYNTHMAP+ outperforms SOTA text spotters on two new benchmark datasets of historical maps, particularly for long and angled text. We have deployed PALETTE with SYNTHMAP+ to process over 60,000 maps in the David Rumsey Historical Map collection and generated over 100 million text labels to support map searching.
Problem

Research questions and friction points this paper is trying to address.

Extracting text from historical maps lacks effective methods and training data.
Existing methods struggle with precise feature extraction for complex map text.
Need for robust text spotting on diverse, rotated, lengthy historical map texts.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hyper-local sampling module for precise feature learning.
Hyper-local positional embeddings for spatial interactions.
SynthMap+ for automatic synthetic map generation.
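The core idea behind hyper-local sampling is to gather image features at the exact predicted boundary and character points of a text instance, rather than pooling one coarse feature from a rough text region. A minimal NumPy sketch of that point-wise feature gathering is below; the function name, bilinear interpolation scheme, and array layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sample_local_features(feature_map, points):
    """Bilinearly sample a (C, H, W) feature map at fractional (x, y) points.

    Each point (e.g., a predicted boundary or character location) gets its
    own C-dimensional feature vector, so long or rotated text instances are
    described by many localized samples instead of one region-level feature.
    """
    C, H, W = feature_map.shape
    feats = np.zeros((len(points), C))
    for i, (x, y) in enumerate(points):
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
        dx, dy = x - x0, y - y0  # fractional offsets inside the cell
        feats[i] = ((1 - dx) * (1 - dy) * feature_map[:, y0, x0]
                    + dx * (1 - dy) * feature_map[:, y0, x1]
                    + (1 - dx) * dy * feature_map[:, y1, x0]
                    + dx * dy * feature_map[:, y1, x1])
    return feats
```

In a deformable-Transformer setting, the sampled per-point features would then serve as queries or keys, with the hyper-local positional embeddings encoding where each point sits within and across text instances.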