🤖 AI Summary
Scene text editing requires modifying textual content while preserving stylistic attributes such as font, lighting, and perspective, yet existing explicit disentanglement methods suffer from complex pipelines and poor generalization. This paper proposes a recognition-editing co-modeling paradigm: a multimodal parallel Transformer decoder jointly predicts the target text and the edited image, and a cyclic self-supervised fine-tuning strategy achieves implicit style-content disentanglement without requiring paired training data. The method achieves state-of-the-art performance on both synthetic and real-world benchmarks. Moreover, the high-fidelity, challenging edited samples significantly enhance the robustness of downstream text recognition models. Key innovations include (i) joint modeling of text semantics and image generation, (ii) an implicit disentanglement mechanism that avoids explicit attribute decomposition, and (iii) a self-supervised optimization framework driven solely by unpaired data.
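The joint modeling idea in innovation (i) can be illustrated with a toy decoder that concatenates text tokens and (discretized) image tokens into one sequence and predicts both in parallel from shared representations. This is a minimal sketch, not the paper's actual architecture: all module names, dimensions, and the VQ-style image-token assumption are illustrative.

```python
# Illustrative sketch of a multimodal parallel Transformer decoder that
# jointly predicts text-content tokens and edited-image tokens from a
# shared backbone. Hyperparameters and module names are hypothetical.
import torch
import torch.nn as nn

class ParallelMultimodalDecoder(nn.Module):
    def __init__(self, vocab=100, img_vocab=512, d=64, n_text=8, n_img=16):
        super().__init__()
        self.n_text = n_text
        self.text_emb = nn.Embedding(vocab, d)      # text tokens (recognition branch)
        self.img_emb = nn.Embedding(img_vocab, d)   # image patches as discrete tokens
        self.pos = nn.Parameter(torch.zeros(1, n_text + n_img, d))
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.text_head = nn.Linear(d, vocab)        # predicts target text
        self.img_head = nn.Linear(d, img_vocab)     # predicts edited-image tokens

    def forward(self, text_tokens, img_tokens):
        # Concatenate the two modalities so recognition and editing are
        # decoded in parallel over one shared representation.
        x = torch.cat([self.text_emb(text_tokens), self.img_emb(img_tokens)], dim=1)
        h = self.backbone(x + self.pos)
        return self.text_head(h[:, :self.n_text]), self.img_head(h[:, self.n_text:])

model = ParallelMultimodalDecoder()
text = torch.randint(0, 100, (2, 8))    # batch of 2, 8 text tokens each
img = torch.randint(0, 512, (2, 16))    # batch of 2, 16 image tokens each
t_logits, i_logits = model(text, img)
print(t_logits.shape, i_logits.shape)
```

Sharing one backbone across both output heads is what lets the recognition signal shape the image-generation features, rather than relying on a separate pre-trained recognizer.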
📝 Abstract
Scene text editing aims to modify text content within scene images while maintaining style consistency. Traditional methods achieve this by explicitly disentangling style and content from the source image and then fusing the style with the target content, while ensuring content consistency using a pre-trained recognition model. Despite notable progress, these methods suffer from complex pipelines, leading to suboptimal performance in complex scenarios. In this work, we introduce Recognition-Synergistic Scene Text Editing (RS-STE), a novel approach that fully exploits the intrinsic synergy of text recognition for editing. Our model seamlessly integrates text recognition with text editing within a unified framework, and leverages the recognition model's ability to implicitly disentangle style and content while ensuring content consistency. Specifically, our approach employs a multi-modal parallel decoder based on transformer architecture, which predicts both text content and stylized images in parallel. Additionally, our cyclic self-supervised fine-tuning strategy enables effective training on unpaired real-world data without ground truth, enhancing style and content consistency through a twice-cyclic generation process. Built on a relatively simple architecture, RS-STE achieves state-of-the-art performance on both synthetic and real-world benchmarks, and further demonstrates the effectiveness of leveraging the generated hard cases to boost the performance of downstream recognition tasks. Code is available at https://github.com/ZhengyaoFang/RS-STE.
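The twice-cyclic generation idea can be sketched as a reconstruction objective: edit a real image toward a target text, then edit the result back toward the source text, and penalize deviation from the original image, yielding supervision without paired ground truth. The `toy_edit` function and the L2 loss below are placeholders, assumed for illustration only, not the paper's actual model or objective.

```python
# Minimal sketch of a twice-cyclic self-supervised loss for unpaired data.
# edit_fn stands in for the trained editor G(image, text).
import numpy as np

def cyclic_edit_loss(edit_fn, src_image, src_text, tgt_text):
    """Edit to the target text, edit back to the source text, and
    compare the reconstruction with the original image."""
    edited = edit_fn(src_image, tgt_text)       # source style + target content
    reconstructed = edit_fn(edited, src_text)   # cycle back: should match src_image
    return float(np.mean((reconstructed - src_image) ** 2))

# Toy editor: nudges pixel values by a text-dependent offset (hypothetical).
def toy_edit(image, text):
    return 0.5 * image + 0.01 * (len(text) % 5)

src = np.random.default_rng(0).random((32, 128))  # fake 32x128 text-line crop
loss = cyclic_edit_loss(toy_edit, src, "HELLO", "WORLD")
print(loss)
```

In the real pipeline this reconstruction term would be combined with a recognition loss on the predicted text, so the cycle enforces style consistency while the recognition branch enforces content consistency.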