STELLAR: Scene Text Editor for Low-Resource Languages and Real-World Data

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three key challenges in scene text editing (STE)—limited low-resource language support, domain shift between synthetic and real data, and the lack of quantitative evaluation for text style preservation—this paper proposes: (1) a language-adaptive glyph encoder with a multi-stage training strategy; (2) STIPLAR, the first real-world STE benchmark for low-resource languages; (3) a diffusion-based framework combining synthetic-data pretraining with real-image fine-tuning; and (4) Text Appearance Similarity (TAS), a novel metric unifying font, color, and background consistency. Experiments demonstrate a 2.2% average cross-lingual TAS improvement over baselines, with gains in both visual fidelity and OCR recognition accuracy. The approach offers a scalable, quantitatively measurable solution for multilingual STE.

📝 Abstract
Scene Text Editing (STE) is the task of modifying text content in an image while preserving its visual style, such as font, color, and background. While recent diffusion-based approaches have shown improvements in visual quality, key limitations remain: lack of support for low-resource languages, the domain gap between synthetic and real data, and the absence of appropriate metrics for evaluating text style preservation. To address these challenges, we propose STELLAR (Scene Text Editor for Low-resource LAnguages and Real-world data). STELLAR enables reliable multilingual editing through a language-adaptive glyph encoder and a multi-stage training strategy that first pre-trains on synthetic data and then fine-tunes on real images. We also construct a new dataset, STIPLAR (Scene Text Image Pairs of Low-resource lAnguages and Real-world data), for training and evaluation. Furthermore, we propose Text Appearance Similarity (TAS), a novel metric that assesses style preservation by independently measuring font, color, and background similarity, enabling robust evaluation even without ground truth. Experimental results demonstrate that STELLAR outperforms state-of-the-art models in visual consistency and recognition accuracy, achieving an average TAS improvement of 2.2% across languages over the baselines.
Problem

Research questions and friction points this paper is trying to address.

Lack of support for low-resource languages in text editing
Domain gap between synthetic and real-world data
Absence of metrics for evaluating text style preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language-adaptive glyph encoder for multilingual editing
Multi-stage training with synthetic and real data
Text Appearance Similarity metric for style evaluation
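The abstract describes TAS as independently measuring font, color, and background similarity and combining them into one score, but the paper's exact formulation is not given here. A minimal sketch of that idea, assuming a simple unweighted average of three hand-rolled proxies (mean-text-color distance, background histogram intersection, and glyph-mask IoU as a crude font-shape stand-in; all function names and choices below are hypothetical, not the authors' implementation):

```python
import numpy as np

def color_similarity(img_a, img_b, mask_a, mask_b):
    """Compare mean RGB of text pixels; 1.0 = identical, 0.0 = maximally different."""
    mean_a = img_a[mask_a].mean(axis=0)
    mean_b = img_b[mask_b].mean(axis=0)
    max_dist = np.linalg.norm([255.0, 255.0, 255.0])  # farthest two RGB colors can be
    return 1.0 - np.linalg.norm(mean_a - mean_b) / max_dist

def background_similarity(img_a, img_b, mask_a, mask_b, bins=16):
    """Per-channel histogram intersection over non-text (background) pixels."""
    sims = []
    for c in range(3):
        h_a, _ = np.histogram(img_a[~mask_a][:, c], bins=bins, range=(0, 256))
        h_b, _ = np.histogram(img_b[~mask_b][:, c], bins=bins, range=(0, 256))
        h_a = h_a / h_a.sum()
        h_b = h_b / h_b.sum()
        sims.append(np.minimum(h_a, h_b).sum())  # 1.0 when distributions match
    return float(np.mean(sims))

def font_similarity(mask_a, mask_b):
    """IoU of binary glyph masks, a crude shape proxy for font consistency."""
    union = np.logical_or(mask_a, mask_b).sum()
    if union == 0:
        return 1.0
    return np.logical_and(mask_a, mask_b).sum() / union

def tas(img_a, img_b, mask_a, mask_b):
    """Toy TAS: unweighted mean of font, color, and background components."""
    return float(np.mean([
        font_similarity(mask_a, mask_b),
        color_similarity(img_a, img_b, mask_a, mask_b),
        background_similarity(img_a, img_b, mask_a, mask_b),
    ]))
```

Because each component only compares the edited region's appearance statistics against the reference, a score like this can be computed between an output and the source image directly, which is consistent with the paper's claim that TAS works without ground truth.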
Yongdeuk Seo
Major in Industrial Data Science & Engineering, Department of Industrial and Data Engineering, Pukyong National University
Hyun-seok Min
Tomocube
Machine Learning · Deep Learning · Medical Image Analysis · Image Technology
Sungchul Choi
Pukyong National University
Machine Learning · Deep Learning · Technology Analysis · Patent Analysis