AI Summary
Existing text-image machine translation (TIMT) methods neglect text positional information, hindering fine-grained, layout-preserving translation. To address this, the authors propose Position-Aware TIMT (PATIMT), which formalizes two complementary sub-tasks: region-specific translation and full-image translation with grounding. They establish PATIMT-Bench, a benchmark comprising ten diverse real-world scenarios built with an adaptive OCR refinement pipeline that selects appropriate OCR tools per scenario, and construct a human-annotated, high-precision test set for reliable evaluation. After fine-tuning on this data, compact large vision-language models achieve state-of-the-art performance on both sub-tasks, and cross-scenario results confirm the training data's scalability and strong generalization across domains.
Abstract
Text Image Machine Translation (TIMT) aims to translate texts embedded within an image into another language. Current TIMT studies primarily focus on translating all the text within an image, while neglecting to provide bounding boxes and covering only limited scenarios. In this work, we extend traditional TIMT into position-aware TIMT (PATIMT), aiming to support fine-grained and layout-preserving translation, which holds great practical value but remains largely unexplored. This task comprises two key sub-tasks: region-specific translation and full-image translation with grounding. To support existing models on PATIMT and conduct fair evaluation, we construct the PATIMT benchmark (PATIMT-Bench), which consists of 10 diverse real-world scenarios. Specifically, we introduce an Adaptive Image OCR Refinement Pipeline, which adaptively selects appropriate OCR tools based on the scenario and refines the results of text-rich images. To ensure evaluation reliability, we further construct a test set containing 1,200 high-quality instances manually annotated and reviewed by human experts. After fine-tuning on our data, compact Large Vision-Language Models (LVLMs) achieve state-of-the-art performance on both sub-tasks. Experimental results also highlight the scalability and generalizability of our training data.
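The adaptive pipeline described above could be sketched as follows. This is a minimal illustration only: the scenario labels, backend names, and confidence threshold are assumptions for exposition, not the authors' actual implementation or tool choices.

```python
# Hypothetical sketch of an adaptive OCR refinement pipeline: pick an OCR
# backend per scenario, then filter and reorder its detections. All names
# and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class OCRResult:
    text: str
    bbox: tuple          # (x1, y1, x2, y2) in pixel coordinates
    confidence: float

def select_ocr_tool(scenario: str) -> str:
    """Choose an OCR backend per scenario (backend names are placeholders)."""
    dense_scenarios = {"document", "webpage", "slide"}
    scene_scenarios = {"street_sign", "product", "poster"}
    if scenario in dense_scenarios:
        return "layout_ocr"   # assumed: stronger on text-rich layouts
    if scenario in scene_scenarios:
        return "scene_ocr"    # assumed: stronger on natural scene text
    return "general_ocr"      # fallback backend

def refine_results(results: list[OCRResult], min_conf: float = 0.5) -> list[OCRResult]:
    """Drop low-confidence boxes, deduplicate, and restore reading order."""
    kept, seen = [], set()
    for r in sorted(results, key=lambda r: -r.confidence):
        if r.confidence < min_conf or r.bbox in seen:
            continue
        seen.add(r.bbox)
        kept.append(r)
    # reading order: top-to-bottom, then left-to-right
    return sorted(kept, key=lambda r: (r.bbox[1], r.bbox[0]))
```

For example, a document image would route to the layout-oriented backend, and duplicate or low-confidence boxes from that backend would be pruned before translation.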