The Devil is in Fine-tuning and Long-tailed Problems: A New Benchmark for Scene Text Detection

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Scene text detection achieves strong performance on academic benchmarks but suffers from poor generalization in real-world scenarios. This work identifies two root causes: (1) the fine-tuning gap—domain-specific optimization (DSO) degrades cross-domain robustness; and (2) failure to detect sparse, complex text instances (e.g., stylized, occluded, or overlapping text) due to long-tailed class distributions. To address these issues, we propose Joint-Dataset Learning (JDL), a paradigm that replaces DSO with multi-dataset co-training. We further introduce LTB, the first fine-grained benchmark explicitly designed for long-tail scene text detection, covering three major categories and 13 error subtypes. Additionally, we design MAEDet, a self-supervised learning–based strong baseline. Experiments demonstrate that MAEDet consistently outperforms existing methods on LTB, achieving significant gains—especially on tail-class samples. Code and the LTB benchmark are publicly released.

📝 Abstract
Scene text detection has seen the emergence of high-performing methods that excel on academic benchmarks. However, these detectors often fail to replicate such success in real-world scenarios. Through extensive experiments, we uncover two key factors contributing to this discrepancy. First, a Fine-tuning Gap, in which models follow the Dataset-Specific Optimization (DSO) paradigm to maximize performance in one domain at the cost of reduced effectiveness in others, leads to inflated performance on academic benchmarks. Second, the suboptimal performance in practical settings is primarily attributed to the long-tailed distribution of text, where detectors struggle with rare and complex categories such as artistic or overlapped text. Given that the DSO paradigm can undermine the generalization ability of models, we advocate a Joint-Dataset Learning (JDL) protocol to alleviate the Fine-tuning Gap. Additionally, we conduct an error analysis that identifies three major categories and 13 subcategories of challenges in long-tailed scene text, upon which we propose a Long-Tailed Benchmark (LTB). LTB facilitates a comprehensive evaluation of a detector's ability to handle a diverse range of long-tailed challenges. We further introduce MAEDet, a self-supervised learning-based method, as a strong baseline for LTB. The code is available at https://github.com/pd162/LTB.
Problem

Research questions and friction points this paper is trying to address.

Addressing the fine-tuning gap in scene text detection models
Improving performance on long-tailed text distributions in real-world scenarios
Proposing a joint-dataset learning protocol to enhance generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint-Dataset Learning protocol for generalization
Long-Tailed Benchmark for diverse challenges
Self-supervised MAEDet as baseline model
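The core idea behind the Joint-Dataset Learning protocol, as described in the abstract, is to replace dataset-specific fine-tuning with co-training over the union of several training sets. The following is a minimal, illustrative sketch of that sampling scheme in plain Python; the dataset names, helper function, and toy data are hypothetical stand-ins, not the authors' implementation:

```python
import random

def joint_dataset_batches(datasets, batch_size, num_batches, seed=0):
    """Illustrative joint-dataset (co-training) sampler.

    Instead of fine-tuning on a single benchmark (the DSO paradigm),
    each batch mixes samples drawn uniformly from the pooled union of
    all training sets, so no one domain dominates optimization.
    """
    rng = random.Random(seed)
    # Flatten every (dataset, index) pair into one shared sampling pool.
    pool = [(name, i) for name, data in datasets.items() for i in range(len(data))]
    for _ in range(num_batches):
        # Uniform sampling over the pooled indices yields mixed-domain batches.
        yield [datasets[name][i] for name, i in rng.sample(pool, batch_size)]

# Hypothetical toy "datasets" standing in for real scene-text corpora.
datasets = {
    "domain_a": [f"a{i}" for i in range(100)],
    "domain_b": [f"b{i}" for i in range(50)],
}
batches = list(joint_dataset_batches(datasets, batch_size=8, num_batches=10))
```

In a real training pipeline the same effect is typically achieved by concatenating datasets and shuffling at the data-loader level; this sketch only makes the uniform cross-domain mixing explicit.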
T
Tianjiao Cao
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
J
Jiahao Lyu
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
W
Weichao Zeng
Institute of Information Engineering, Chinese Academy of Sciences
W
Weimin Mu
Institute of Information Engineering, Chinese Academy of Sciences
Y
Yu Zhou
VCIP & TMCC & DISSec, College of Computer Science, Nankai University