SceneTextStylizer: A Training-Free Scene Text Style Transfer Framework with Diffusion Model

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing scene text editing methods struggle to achieve flexible, localized free-style transfer and often compromise text legibility and style consistency. To address this, we propose a training-free diffusion-based framework. Our method first performs diffusion inversion to extract region-specific text features, then employs a self-attention-guided feature injection module to decouple content and style. A distance-aware dynamic mask ensures precise local editing, Fourier-domain style enhancement improves texture fidelity, and textual prompts steer the target style. Evaluated across diverse complex scenes, our approach outperforms state-of-the-art methods in three key aspects: visual quality, structural preservation of text geometry, and style fidelity.
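
The summary gives no implementation details, but Fourier-domain style enhancement is commonly realized as amplitude/phase mixing: the amplitude spectrum carries texture and style statistics while the phase carries structure, so blending amplitudes reinforces stylistic richness without disturbing text geometry. Below is a minimal PyTorch sketch of that generic pattern, assuming feature maps of shape (B, C, H, W); `fourier_style_enhance` and the `alpha` blend weight are hypothetical names, not the authors' API.

```python
import torch

def fourier_style_enhance(stylized: torch.Tensor,
                          style_ref: torch.Tensor,
                          alpha: float = 0.5) -> torch.Tensor:
    """Blend the amplitude spectrum of a style reference into the
    stylized features while keeping the stylized phase (hypothetical
    sketch, not the paper's implementation)."""
    f_sty = torch.fft.fft2(stylized, dim=(-2, -1))
    f_ref = torch.fft.fft2(style_ref, dim=(-2, -1))
    # Interpolate amplitudes; amplitude ~ texture/style statistics.
    amp = (1 - alpha) * f_sty.abs() + alpha * f_ref.abs()
    # Keep the phase of the stylized branch; phase ~ structure.
    enhanced = torch.fft.ifft2(torch.polar(amp, f_sty.angle()),
                               dim=(-2, -1))
    return enhanced.real
```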

📝 Abstract
With the rapid development of diffusion models, style transfer has made remarkable progress. However, flexible and localized style editing for scene text remains an unsolved challenge. Although existing scene text editing methods can edit text regions, they are typically limited to content replacement and simple styles, and lack the ability to perform free-style transfer. In this paper, we introduce SceneTextStylizer, a novel training-free diffusion-based framework for flexible and high-fidelity style transfer of text in scene images. Unlike prior approaches that either perform global style transfer or focus solely on textual content modification, our method enables prompt-guided style transformation specifically for text regions, while preserving both text readability and stylistic consistency. To achieve this, we design a feature injection module that leverages diffusion model inversion and self-attention to transfer style features effectively. Additionally, a region control mechanism is introduced by applying a distance-based changing mask at each denoising step, enabling precise spatial control. To further enhance visual quality, we incorporate a style enhancement module based on the Fourier transform to reinforce stylistic richness. Extensive experiments demonstrate that our method achieves superior performance on scene text style transfer, outperforming existing state-of-the-art methods in both visual fidelity and text preservation.
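
For readers unfamiliar with the inversion step the abstract builds on, the sketch below shows the standard deterministic DDIM inversion loop that training-free diffusion editors typically start from: a clean latent is mapped back toward noise, and intermediate activations from this pass can be cached for later feature injection. `eps_model` stands in for the diffusion UNet's noise predictor and `alphas_cumprod` for its noise schedule; this is generic background, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def ddim_invert(x0: torch.Tensor, eps_model,
                alphas_cumprod: torch.Tensor,
                num_steps: int = 50) -> torch.Tensor:
    """Run the DDIM update in reverse to map a clean latent x0 toward
    noise (standard recipe; eps_model is any eps_theta(x_t, t))."""
    T = len(alphas_cumprod)
    steps = torch.linspace(0, T - 1, num_steps).long()
    x = x0
    for t_prev, t in zip(steps[:-1], steps[1:]):
        a_prev, a_t = alphas_cumprod[t_prev], alphas_cumprod[t]
        eps = eps_model(x, t_prev)
        # Predicted clean latent at the current timestep.
        x0_pred = (x - (1 - a_prev).sqrt() * eps) / a_prev.sqrt()
        # Re-noise it to the next (higher-noise) timestep.
        x = a_t.sqrt() * x0_pred + (1 - a_t).sqrt() * eps
    return x  # approximate x_T; features can be cached along the way
```
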
Problem

Research questions and friction points this paper is trying to address.

Enabling flexible, free-style transfer for text in scene images
Preserving text readability while applying prompt-guided style transformations
Achieving precise spatial control through region-based denoising masks (see the sketch after this list)
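
As a concrete reading of the last point, a distance-based changing mask can be built from a binary text mask with a Euclidean distance transform: early, high-noise steps edit a dilated neighborhood of the text, and the mask tightens onto the glyphs as denoising proceeds. The sketch below is an assumption about how such a schedule might look; `timestep_mask`, `blend_latents`, and `max_dilate` are hypothetical names, not the authors' implementation.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def timestep_mask(text_mask: np.ndarray, t: int, T: int,
                  max_dilate: float = 8.0) -> torch.Tensor:
    """Mask that shrinks toward the text region as denoising proceeds
    (t = T at the start of sampling, t = 0 at the end)."""
    # Distance of every background pixel to the nearest text pixel.
    dist = distance_transform_edt(~text_mask.astype(bool))
    radius = max_dilate * (t / T)  # dilation radius decays to zero
    return torch.from_numpy((dist <= radius).astype(np.float32))

def blend_latents(z_edit: torch.Tensor, z_orig: torch.Tensor,
                  mask: torch.Tensor) -> torch.Tensor:
    """Composite edited and original latents at one denoising step."""
    m = mask[None, None]  # (H, W) -> (1, 1, H, W) for broadcasting
    return m * z_edit + (1 - m) * z_orig
```
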
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free diffusion framework for scene text style transfer
Feature injection module using diffusion model inversion and self-attention (see the sketch after this list)
Region control mechanism with a distance-based changing mask applied at each denoising step
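
Training-free editors commonly realize feature injection by swapping the keys and values of the UNet's self-attention with activations cached during inversion, so that queries from the generation branch attend over the reference's style statistics while the content's spatial layout is preserved. The sketch below shows that generic key/value-injection pattern; `injected_self_attention` is a hypothetical helper, not the paper's module.

```python
import torch
import torch.nn.functional as F

def injected_self_attention(q_content: torch.Tensor,
                            k_style: torch.Tensor,
                            v_style: torch.Tensor,
                            num_heads: int = 8) -> torch.Tensor:
    """Self-attention with key/value injection: queries come from the
    generation branch, keys/values from features cached during the
    inversion pass (hypothetical sketch)."""
    B, N, C = q_content.shape
    d = C // num_heads

    def split(x: torch.Tensor) -> torch.Tensor:
        # (B, N, C) -> (B, heads, N, d)
        return x.view(B, -1, num_heads, d).transpose(1, 2)

    out = F.scaled_dot_product_attention(split(q_content),
                                         split(k_style),
                                         split(v_style))
    return out.transpose(1, 2).reshape(B, N, C)
```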