A Large-scale Dataset for Robust Complex Anime Scene Text Detection

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text detection datasets are primarily designed for natural scenes or document images, rendering them inadequate for anime—where text exhibits high stylistic diversity, irregular layouts, frequent visual ambiguity with symbols/decorations, and pervasive use of handwritten and artistic fonts. To address this gap, we introduce AnimeText, the first large-scale, anime-specific text detection dataset, comprising 735K images and 4.2M annotated text instances. We propose a hierarchical annotation scheme and a hard-negative sampling strategy to comprehensively cover non-linear arrangements, multi-font variations, and severe visual interference, while ensuring compatibility with mainstream deep learning frameworks. Cross-dataset benchmarking demonstrates that models trained on AnimeText significantly outperform those trained on general-purpose datasets on anime text detection tasks, validating AnimeText’s effectiveness and robustness.

📝 Abstract
Current text detection datasets primarily target natural or document scenes, where text typically appears in regular fonts and shapes, monotonous colors, and orderly layouts, usually arranged along straight or curved lines. These characteristics differ significantly from anime scenes, where text is often diverse in style, irregularly arranged, and easily confused with complex visual elements such as symbols and decorative patterns. Text in anime scenes also includes a large number of handwritten and stylized fonts. Motivated by this gap, we introduce AnimeText, a large-scale dataset containing 735K images and 4.2M annotated text blocks. It features hierarchical annotations and hard negative samples tailored for anime scenarios. To evaluate the robustness of AnimeText in complex anime scenes, we conducted cross-dataset benchmarking using state-of-the-art text detection methods. Experimental results demonstrate that models trained on AnimeText outperform those trained on existing datasets in anime scene text detection tasks. AnimeText on HuggingFace: https://huggingface.co/datasets/deepghs/AnimeText
Problem

Research questions and friction points this paper addresses.

Detecting diverse stylized text in anime scenes
Addressing irregular text arrangements in complex anime visuals
Overcoming confusion between text and decorative anime elements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale anime text dataset with 735K images
Hierarchical annotations for complex anime scenarios
Hard negative samples for robust text detection
Ziyi Dong
SYSU
Image Generation, Adversarial Attack and Defense, Vision-Language Models
Yurui Zhang
DeepGHS (Deep Generative anime Hobbyist Syndicate)
Changmao Li
Nanchang Hangkong University
Naomi Rue Golding
DeepGHS (Deep Generative anime Hobbyist Syndicate)
Qing Long
DeepGHS (Deep Generative anime Hobbyist Syndicate)