PATIMT-Bench: A Multi-Scenario Benchmark for Position-Aware Text Image Machine Translation in Large Vision-Language Models

๐Ÿ“… 2025-09-14
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing text image machine translation (TIMT) methods neglect text positional information, hindering fine-grained, layout-preserving translation. To address this, the authors propose Position-Aware TIMT (PATIMT), which formalizes two complementary sub-tasks: region-specific translation and full-image translation with grounding. They construct PATIMT-Bench, a benchmark spanning ten diverse real-world scenarios, built with an Adaptive Image OCR Refinement Pipeline that selects appropriate OCR tools per scenario and refines the results of text-rich images; a 1,200-instance test set manually annotated and reviewed by human experts ensures reliable evaluation. After fine-tuning on this data, compact large vision-language models achieve state-of-the-art performance on both sub-tasks, and further experiments highlight the scalability and generalizability of the training data across scenarios.

๐Ÿ“ Abstract
Text Image Machine Translation (TIMT) aims to translate texts embedded within an image into another language. Current TIMT studies primarily focus on providing translations for all the text within an image, while neglecting to provide bounding boxes and covering limited scenarios. In this work, we extend traditional TIMT into position-aware TIMT (PATIMT), aiming to support fine-grained and layout-preserving translation, which holds great practical value but remains largely unexplored. This task comprises two key sub-tasks: region-specific translation and full-image translation with grounding. To support existing models on PATIMT and conduct fair evaluation, we construct the PATIMT benchmark (PATIMT-Bench), which consists of 10 diverse real-world scenarios. Specifically, we introduce an Adaptive Image OCR Refinement Pipeline, which adaptively selects appropriate OCR tools based on scenario and refines the results of text-rich images. To ensure evaluation reliability, we further construct a test set, which contains 1,200 high-quality instances manually annotated and reviewed by human experts. After fine-tuning on our data, compact Large Vision-Language Models (LVLMs) achieve state-of-the-art performance on both sub-tasks. Experimental results also highlight the scalability and generalizability of our training data.
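The two sub-tasks imply different input/output shapes: region-specific translation takes a query box and returns one translation, while full-image translation with grounding returns every text region paired with its box. A minimal sketch of what such instances could look like; the field names and JSON-style schema are illustrative assumptions, not the paper's actual data format:

```python
# Hypothetical I/O schema for the two PATIMT sub-tasks
# (field names and values are illustrative, not from the paper).

# Sub-task 1: region-specific translation — the query supplies a
# bounding box, and the model translates only the text inside it.
region_query = {
    "image": "menu_zh.png",
    "bbox": [120, 40, 480, 90],  # x1, y1, x2, y2 in pixels
    "target_lang": "en",
}
region_output = {"translation": "Braised beef noodles"}

# Sub-task 2: full-image translation with grounding — the model
# returns every text region with its box and its translation,
# which is what enables layout-preserving rendering downstream.
full_image_output = [
    {"bbox": [120, 40, 480, 90], "translation": "Braised beef noodles"},
    {"bbox": [120, 110, 400, 150], "translation": "House special"},
]
```

Pairing each translation with a box is what distinguishes PATIMT from traditional TIMT, where only the concatenated translations would be returned.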
Problem

Research questions and friction points this paper is trying to address.

Extends traditional TIMT to position-aware, layout-preserving translation
Addresses the lack of bounding boxes and the limited scenario coverage of existing TIMT work
Introduces a benchmark for evaluating region-specific and grounded full-image translation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive OCR pipeline selects tools per scenario and refines results on text-rich images
Benchmark spans 10 diverse real-world scenarios
Fine-tuned compact LVLMs achieve state-of-the-art performance on both sub-tasks
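The adaptive pipeline described above can be pictured as a scenario-keyed dispatcher followed by a refinement pass over the raw OCR output. The sketch below is a minimal illustration under that assumption; the backend stubs and the refinement rule are hypothetical stand-ins, not the paper's implementation:

```python
from typing import Callable

# Hypothetical OCR backends keyed by scenario; in practice each would
# wrap a real engine suited to that image type (e.g. a document-OCR
# model vs. a scene-text detector). Stubs return (bbox, text) pairs.
def document_ocr(image):
    return [((10, 10, 200, 40), "Hello world")]

def scene_text_ocr(image):
    return [((5, 5, 60, 30), "EXIT")]

OCR_BACKENDS: dict[str, Callable] = {
    "document": document_ocr,
    "street_sign": scene_text_ocr,
}

def refine(regions, min_width=8):
    """Toy refinement pass: drop degenerate boxes narrower than min_width.

    A real pipeline would do much more here (merging fragments,
    reordering by layout, filtering noise on text-rich images).
    """
    return [(box, text) for (box, text) in regions if box[2] - box[0] >= min_width]

def adaptive_ocr(image, scenario: str):
    """Route the image to a scenario-appropriate backend, then refine."""
    backend = OCR_BACKENDS.get(scenario, document_ocr)  # assumed fallback
    return refine(backend(image))
```

The design point is the indirection: adding an eleventh scenario means registering one more backend, which is consistent with the benchmark's claim of scaling across diverse scenarios.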
๐Ÿ”Ž Similar Papers
No similar papers found.