Exploring In-Image Machine Translation with Real-World Background

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing in-image machine translation (IIMT) methods are restricted to simplified scenarios, such as single-line text on white backgrounds, and struggle with real-world challenges including complex backgrounds and nested multilingual subtitles. To address this, the authors propose an IIMT paradigm tailored to realistic scenes. First, they construct the first benchmark dataset for IIMT featuring diverse natural backgrounds. Second, they design DebackX, a three-stage co-optimized framework that decouples background and text, performs direct text-image translation, and reconstructs the translated image via fusion. DebackX comprises a U-Net-based separation module, an end-to-end translation network, and a multi-scale feature fusion rendering mechanism. Evaluated on a real-background subtitle test set, DebackX achieves +12.3 BLEU points, +4.1 dB PSNR, and +0.18 SSIM over prior methods, significantly improving both translation accuracy and visual fidelity.

📝 Abstract
In-Image Machine Translation (IIMT) aims to translate text within images from one language to another. Previous research on IIMT was primarily conducted on simplified scenarios, such as images of one-line text in black font on white backgrounds, which is far from reality and impractical for real-world applications. To make IIMT research practically valuable, it is essential to consider a complex scenario where the text backgrounds are derived from real-world images. To facilitate research on complex-scenario IIMT, we design an IIMT dataset that includes subtitle text with real-world backgrounds. However, previous IIMT models perform inadequately in complex scenarios. To address this issue, we propose the DebackX model, which separates the background and text-image from the source image, performs translation on the text-image directly, and fuses the translated text-image with the background to generate the target image. Experimental results show that our model achieves improvements in both translation quality and visual effect.
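The abstract describes a three-stage pipeline: separate the background from the text-image, translate the text-image directly, then fuse the result back onto the background. A minimal sketch of that data flow, assuming placeholder stage implementations (the real modules are learned networks; all function names and the toy pixel arithmetic here are illustrative assumptions, not the paper's implementation):

```python
# Illustrative sketch of the DebackX three-stage flow on a tiny 2D pixel grid.
# Each stage is a stand-in for a learned module in the actual model.

def separate(source_image):
    """Stage 1: decompose the source image into a background and a text-image."""
    background = [[px // 2 for px in row] for row in source_image]  # placeholder split
    text_image = [[px - bg for px, bg in zip(row, bg_row)]
                  for row, bg_row in zip(source_image, background)]
    return background, text_image

def translate_text_image(text_image):
    """Stage 2: translate the text-image directly (source -> target language)."""
    return [list(reversed(row)) for row in text_image]  # placeholder transform

def fuse(background, translated_text_image):
    """Stage 3: render the translated text-image back onto the background."""
    return [[bg + tx for bg, tx in zip(bg_row, tx_row)]
            for bg_row, tx_row in zip(background, translated_text_image)]

def debackx(source_image):
    background, text_image = separate(source_image)
    translated = translate_text_image(text_image)
    return fuse(background, translated)
```

The key design choice this sketch mirrors is that translation operates only on the isolated text-image, so the background survives untouched and is re-composited at the end.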
Problem

Research questions and friction points this paper is trying to address.

Translate text within real-world complex background images
Overcome limitations of simplified scenarios in IIMT research
Improve translation quality and visual effect in complex images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Designs dataset with real-world background subtitles
Proposes DebackX model for text-background separation
Fuses translated text with original background
Yanzhi Tian
Beijing Institute of Technology
Machine Translation · Large Language Models · Vision Language Models
Zeming Liu
School of Computer Science and Engineering, Beihang University
Zhengyang Liu
Royal Melbourne Hospital, Parkville, Australia
Ophthalmology · Biostatistics
Yuhang Guo
School of Computer Science and Technology, Beijing Institute of Technology