🤖 AI Summary
To address the slow inference speed of autoregressive decoding, the low accuracy of non-autoregressive models, and the computational overhead and modality misalignment caused by explicit language model integration in scene text recognition (STR), this paper proposes an *Easy-First Iterative Parallel Decoding* architecture. The method is the first to introduce discrete diffusion mechanisms into STR, enabling bidirectional contextual awareness for image-conditioned text generation. An easy-first strategy guides iterative parallel decoding, progressively refining the entire token sequence, with each refinement iteration requiring only a single forward pass. By deeply fusing visual and linguistic knowledge in a unified modeling framework, the approach achieves state-of-the-art accuracy on multilingual (Chinese and English) and multi-scale STR benchmarks. It significantly accelerates inference compared to autoregressive methods while outperforming mainstream non-autoregressive alternatives in both accuracy and efficiency.
📝 Abstract
Scene text recognition has attracted increasing attention due to its diverse applications. Most state-of-the-art methods adopt an encoder-decoder framework with the attention mechanism, autoregressively generating text from left to right. Despite their convincing performance, this sequential decoding strategy constrains the inference speed. Conversely, non-autoregressive models provide faster, simultaneous predictions but often sacrifice accuracy. Although utilizing an explicit language model can improve performance, it adds computational overhead. Moreover, decoupling linguistic knowledge from visual information may harm the final prediction. In this paper, we propose an alternative solution that uses a parallel and iterative decoder with an easy-first decoding strategy. Furthermore, we regard text recognition as an image-conditioned text generation task and utilize a discrete diffusion strategy, ensuring exhaustive exploration of bidirectional contextual information. Extensive experiments demonstrate that the proposed approach achieves superior results on the benchmark datasets, including both Chinese and English text images.
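The easy-first iterative parallel decoding described above can be sketched as follows: the decoder predicts all positions in parallel at each iteration, commits the "easiest" (highest-confidence) positions, and re-masks the rest for the next round. This is a minimal illustration under stated assumptions, not the paper's implementation; the `toy_decoder`, the mask-token id, the toy vocabulary, and the linear commitment schedule are all hypothetical stand-ins.

```python
import numpy as np

MASK_ID = 0      # hypothetical id of the [MASK] token (assumption)
VOCAB = 6        # toy vocabulary size; id 0 is reserved for [MASK]
LENGTH = 5       # fixed output sequence length
STEPS = 3        # number of refinement iterations

rng = np.random.default_rng(42)

def toy_decoder(tokens: np.ndarray) -> np.ndarray:
    """Stand-in for the image-conditioned parallel decoder:
    returns per-position logits over the vocabulary (random here)."""
    return rng.normal(size=(tokens.size, VOCAB))

def easy_first_decode(length: int = LENGTH, steps: int = STEPS) -> np.ndarray:
    tokens = np.full(length, MASK_ID)        # start fully masked
    fixed = np.zeros(length, dtype=bool)     # positions already committed
    for t in range(steps):
        logits = toy_decoder(tokens)         # one parallel forward pass
        logits[:, MASK_ID] = -np.inf         # never emit the mask token
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        preds, conf = probs.argmax(-1), probs.max(-1)
        # linear schedule: commit ceil(length * (t+1) / steps) tokens in total
        target = int(np.ceil(length * (t + 1) / steps))
        open_conf = np.where(fixed, -np.inf, conf)   # ignore committed slots
        newly = np.argsort(-open_conf)[: target - int(fixed.sum())]
        tokens[newly], fixed[newly] = preds[newly], True
        tokens[~fixed] = MASK_ID             # re-mask the "hard" positions
    return tokens
```

Each iteration costs one forward pass over the whole sequence, so total latency scales with the (small, fixed) number of refinement steps rather than with sequence length as in autoregressive decoding.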