🤖 AI Summary
Existing image-intrinsic machine translation (IIMT) methods are restricted to simplified scenarios—such as white-background, single-line text—and struggle with real-world challenges including complex backgrounds and nested multilingual subtitles. To address this, we propose a novel IIMT paradigm tailored for realistic scenes. First, we construct the first benchmark dataset for IIMT featuring diverse natural backgrounds. Second, we design DebackX, a three-stage co-optimized framework that decouples background and text, performs direct text-image translation, and reconstructs the translated image via fusion. DebackX comprises a U-Net-based separation module, an end-to-end translation network, and a multi-scale feature fusion rendering mechanism. Evaluated on a real-background subtitle test set, DebackX achieves +12.3 BLEU points, +4.1 dB PSNR, and +0.18 SSIM over prior methods, significantly improving both translation accuracy and visual fidelity.
📝 Abstract
In-Image Machine Translation (IIMT) aims to translate texts within images from one language to another. Previous research on IIMT was primarily conducted on simplified scenarios such as images of one-line text with black font in white backgrounds, which is far from reality and impractical for applications in the real world. To make IIMT research practically valuable, it is essential to consider a complex scenario where the text backgrounds are derived from real-world images. To facilitate research of complex scenario IIMT, we design an IIMT dataset that includes subtitle text with real-world background. However previous IIMT models perform inadequately in complex scenarios. To address the issue, we propose the DebackX model, which separates the background and text-image from the source image, performs translation on text-image directly, and fuses the translated text-image with the background, to generate the target image. Experimental results show that our model achieves improvements in both translation quality and visual effect.