🤖 AI Summary
Existing benchmarks for in-image machine translation (IIMT) predominantly rely on synthetic data, which fails to capture the complexity of real-world scenarios, and employ unimodal evaluation metrics that overlook cross-modal consistency between the translated text and the underlying image. To address these limitations, this work proposes IMTBench, the first multilingual IIMT benchmark tailored for real-world settings, comprising 2,500 samples across four scene types and nine languages. IMTBench introduces a cross-modal alignment scoring mechanism that enables holistic evaluation along multiple dimensions: translation quality, contextual preservation, image fidelity, and cross-modal consistency. Comprehensive experiments with both unified multimodal models and cascaded systems reveal significant performance bottlenecks of current approaches in natural scenes and for low-resource languages, demonstrating the benchmark's diagnostic value and establishing a standardized framework with clear directions for future research.
📝 Abstract
End-to-end In-Image Machine Translation (IIMT) aims to convert text embedded within an image into a target language while preserving the original visual context, layout, and rendering style. However, existing IIMT benchmarks are largely synthetic and thus fail to reflect real-world complexity, while current evaluation protocols focus on single-modality metrics and overlook cross-modal faithfulness between the rendered text and the model's textual output. To address these shortcomings, we present the In-Image Machine Translation Benchmark (IMTBench), a new benchmark of 2,500 image translation samples covering four practical scenarios and nine languages. IMTBench supports multi-aspect evaluation, including translation quality, background preservation, overall image quality, and a cross-modal alignment score that measures the consistency between the translated text produced by the model and the text rendered in the translated image. We benchmark strong commercial cascade systems alongside closed- and open-source unified multimodal models, and observe large performance gaps across scenarios and languages, especially on natural scenes and low-resource languages, highlighting substantial headroom for end-to-end in-image translation. We hope IMTBench serves as a standardized benchmark that accelerates progress on this emerging task.
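The abstract does not spell out how the cross-modal alignment score is computed, but its intent can be illustrated with a minimal sketch: compare the translation the model emits as text against the text actually rendered in the output image (e.g., recovered by an OCR engine). The function names, the normalization, and the use of a character-level similarity below are illustrative assumptions, not IMTBench's actual scoring implementation.

```python
# Illustrative sketch of a cross-modal alignment check (NOT IMTBench's scoring code).
# Assumption: `rendered_text` has already been extracted from the translated image
# by some OCR engine; only the text-vs-text comparison is shown here.
from difflib import SequenceMatcher
import unicodedata


def normalize(text: str) -> str:
    """Lightly normalize text before comparison (illustrative choice)."""
    text = unicodedata.normalize("NFKC", text)
    return " ".join(text.lower().split())


def cross_modal_alignment(model_translation: str, rendered_text: str) -> float:
    """Score in [0, 1]: how consistent the model's textual translation is
    with the text rendered in the translated image."""
    a, b = normalize(model_translation), normalize(rendered_text)
    if not a and not b:
        return 1.0
    return SequenceMatcher(None, a, b).ratio()


if __name__ == "__main__":
    # Hypothetical example: target-language text vs. OCR of the rendered image.
    print(cross_modal_alignment("Bitte hier anstellen", "Bitte hier anstellen"))  # ~1.0
    print(cross_modal_alignment("Bitte hier anstellen", "Bitte hier"))            # partial match
```

In a full evaluation pipeline of this kind, such a text-consistency score would sit alongside the other reported dimensions (translation quality, background preservation, overall image quality), which require reference translations and image-level comparisons not shown in this sketch.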