Efficient and Accurate Scene Text Recognition with Cascaded-Transformers

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive computational and memory overhead of visual Transformer encoders in scene text recognition (STR), this paper proposes a cascaded visual Transformer architecture that progressively shortens the visual token sequence via token downsampling, significantly reducing redundant computation while preserving long-range contextual modeling capability. The method integrates seamlessly into end-to-end ViT-encoder/decoder STR frameworks without requiring additional post-processing or pretraining adjustments. Evaluated on standard benchmarks, it achieves accuracy comparable to the baseline (92.68% vs. 92.77%) while reducing overall computational complexity by 48% and accelerating inference by 1.9×. Its core innovation is the introduction of a cascaded encoder structure coupled with a learnable token-compression mechanism for STR, effectively balancing efficiency and representational capacity. This yields a practical, high-accuracy, low-overhead solution suitable for resource-constrained deployment scenarios.

📝 Abstract
In recent years, vision transformers with a text decoder have demonstrated remarkable performance on Scene Text Recognition (STR) due to their ability to capture long-range dependencies and contextual relationships with high learning capacity. However, the computational and memory demands of these models are significant, limiting their deployment in resource-constrained applications. To address this challenge, we propose an efficient and accurate STR system. Specifically, we focus on improving the efficiency of encoder models by introducing a cascaded-transformers structure. This structure progressively reduces the vision token size during the encoding step, effectively eliminating redundant tokens and reducing computational cost. Our experimental results confirm that our STR system achieves comparable performance to state-of-the-art baselines while substantially decreasing computational requirements. In particular, for large models, accuracy remains nearly the same (92.77 vs. 92.68) while computational complexity is almost halved with our structure.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost in Scene Text Recognition models
Improving efficiency of encoder models with cascaded-transformers
Maintaining accuracy while halving computational complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cascaded-transformers reduce token size progressively
Eliminates redundant tokens for efficiency
Halves computational cost while maintaining accuracy
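The savings claimed above follow from self-attention cost growing roughly quadratically with token count, so dropping tokens between encoder stages cheapens every later layer. A minimal back-of-the-envelope sketch in Python, assuming an illustrative 12-layer encoder split into three 4-layer stages that halve the token sequence between stages (the paper's actual stage depths and reduction ratios may differ):

```python
# Back-of-the-envelope cost model for a cascaded encoder (illustrative only;
# stage depths and the 0.5 keep ratio are assumptions, not the paper's values).

def attention_cost(tokens: int, layers: int) -> int:
    """Per-layer self-attention cost grows ~quadratically with token count."""
    return layers * tokens * tokens

def cascaded_cost(tokens: int, stage_depths, keep_ratio: float = 0.5) -> int:
    """Total attention cost when tokens are downsampled between stages."""
    total = 0
    for depth in stage_depths:
        total += attention_cost(tokens, depth)
        tokens = int(tokens * keep_ratio)  # progressive token downsampling
    return total

baseline = attention_cost(256, 12)        # plain ViT: all 12 layers at 256 tokens
cascaded = cascaded_cost(256, [4, 4, 4])  # 3 stages, halving tokens after each

print(f"baseline: {baseline}, cascaded: {cascaded}, "
      f"saving: {1 - cascaded / baseline:.0%}")
```

Under these assumed settings, halving the tokens after a stage cuts each remaining layer's attention cost to a quarter, so the cascade saves roughly half the baseline's total cost — consistent in spirit with the ~48% complexity reduction reported above, while the early stages still attend over the full sequence for long-range context.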
Savas Ozkan
Samsung Research, United Kingdom
Andrea Maracani
Samsung Research UK
Machine Learning · Computer Vision · NLP
Hyowon Kim
Samsung Electronics, South Korea
Sijun Cho
Samsung Electronics, South Korea
Eunchung Noh
Seoul National University
Natural Language Processing · Neuroscience
Jeongwon Min
Samsung Electronics, South Korea
Jung Min Cho
Samsung Electronics, South Korea
Mete Ozay
Samsung Research, United Kingdom