🤖 AI Summary
Scene text detection achieves strong performance on academic benchmarks but generalizes poorly in real-world scenarios. This work identifies two root causes: (1) a fine-tuning gap, where domain-specific optimization (DSO) degrades cross-domain robustness; and (2) failure to detect sparse, complex text instances (e.g., stylized, occluded, or overlapping text) due to long-tailed class distributions. To address these issues, we propose Joint-Dataset Learning (JDL), a paradigm that replaces DSO with multi-dataset co-training. We further introduce LTB, the first fine-grained benchmark explicitly designed for long-tailed scene text detection, covering three major categories and 13 error subtypes. Additionally, we design MAEDet, a strong baseline built on self-supervised learning. Experiments demonstrate that MAEDet consistently outperforms existing methods on LTB, with especially significant gains on tail-class samples. Code and the LTB benchmark are publicly released.
📝 Abstract
Scene text detection has seen the emergence of high-performing methods that excel on academic benchmarks. However, these detectors often fail to replicate such success in real-world scenarios. Through extensive experiments, we uncover two key factors contributing to this discrepancy. First, a *Fine-tuning Gap*: models that follow the *Dataset-Specific Optimization* (DSO) paradigm gain performance on one domain at the cost of reduced effectiveness in others, leading to inflated results on academic benchmarks. Second, the suboptimal performance in practical settings is primarily attributable to the long-tailed distribution of text, where detectors struggle with rare and complex categories such as artistic or overlapping text. Given that the DSO paradigm can undermine a model's generalization ability, we advocate a *Joint-Dataset Learning* (JDL) protocol to alleviate the Fine-tuning Gap. Additionally, we conduct an error analysis that identifies three major categories and 13 subcategories of challenges in long-tailed scene text, upon which we build a Long-Tailed Benchmark (LTB). LTB facilitates a comprehensive evaluation of a detector's ability to handle a diverse range of long-tailed challenges. We further introduce MAEDet, a self-supervised learning-based method, as a strong baseline for LTB. The code is available at https://github.com/pd162/LTB.