🤖 AI Summary
To address the challenges of cross-script generalization (e.g., Latin, Chinese, and cipher scripts) and the high cost of character-level annotation in text-line recognition (OCR/HTR), this paper proposes DTLR, an end-to-end detection-based text-line recognition framework. DTLR reformulates text-line recognition as a parallel character detection task, abandoning autoregressive decoding, and is trained with line-level supervision only, eliminating the need for costly character-level annotations. Key technical contributions include: (1) a Transformer-based multi-instance detector that localizes and classifies characters simultaneously; (2) pre-training on diverse synthetic data; (3) a masking strategy that exploits consistency between detections to improve robustness; and (4) a cross-script transfer strategy that fine-tunes the pre-trained detector on real data, even with a different alphabet. DTLR achieves new state-of-the-art results on CASIA v2 (Chinese) and on the Borg and Copiale cipher datasets, demonstrating significantly improved generalization across multilingual and low-resource scripts while drastically reducing annotation dependency.
📝 Abstract
We introduce a general detection-based approach to text line recognition, be it printed (OCR) or handwritten (HTR), with Latin, Chinese, or ciphered characters. Detection-based approaches have until now been largely discarded for HTR because reading characters separately is often challenging, and character-level annotation is difficult and expensive. We overcome these challenges thanks to three main insights: (i) synthetic pre-training with sufficiently diverse data enables learning reasonable character localization for any script; (ii) modern transformer-based detectors can jointly detect a large number of instances, and, if trained with an adequate masking strategy, leverage consistency between the different detections; (iii) once a pre-trained detection model with approximate character localization is available, it is possible to fine-tune it with line-level annotation on real data, even with a different alphabet. Our approach, dubbed DTLR, builds on a completely different paradigm than state-of-the-art HTR methods, which rely on autoregressive decoding, predicting character values one by one, while we treat a complete line in parallel. Remarkably, we demonstrate good performance on a large range of scripts, usually tackled with specialized approaches. In particular, we improve state-of-the-art performances for Chinese script recognition on the CASIA v2 dataset, and for cipher recognition on the Borg and Copiale datasets. Our code and models are available at https://github.com/raphael-baena/DTLR.
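To make the paradigm shift concrete, here is a minimal, hypothetical sketch (not the authors' code) of the read-out step in a detection-based recognizer: a transformer detector's object queries predict confidence, position, and character class for all instances in parallel, and the transcription is obtained by keeping confident detections and ordering them left to right, rather than emitting characters one by one as in autoregressive decoding. The function name, threshold, and toy scores below are illustrative assumptions.

```python
def detections_to_line(detections, score_threshold=0.5):
    """Turn parallel character detections into a line transcription.

    Each detection is (confidence, x_center, character), as might be
    produced jointly by a transformer detector's object queries.
    Low-confidence queries are treated as "no object" and discarded;
    the rest are read in left-to-right order.
    """
    kept = [d for d in detections if d[0] >= score_threshold]
    kept.sort(key=lambda d: d[1])  # order by horizontal position
    return "".join(ch for _, _, ch in kept)

# Toy detector output for a line image containing "cat":
queries = [
    (0.91, 12.0, "c"),
    (0.07, 40.0, "x"),   # low-confidence query -> discarded
    (0.88, 55.0, "t"),
    (0.95, 30.0, "a"),
]
print(detections_to_line(queries))  # -> "cat"
```

Note that the whole line is assembled in a single pass over the detections; no prediction is conditioned on a previously decoded character, which is what allows the model to process a complete line in parallel.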