🤖 AI Summary
This work addresses the challenging task of end-to-end cross-lingual translation of document images with complex layouts by establishing the first systematic benchmark that jointly models textual semantics and page layout. The study introduces a dual-track framework comprising OCR-free and OCR-based approaches, accommodating both small and large language models, and optionally integrates optical character recognition with multimodal natural language processing to form a unified paradigm for document image translation. The benchmark attracted 69 participating teams, yielding 27 valid submissions, and the empirical results show that large models hold significant advantages and strong potential for this task.
📝 Abstract
Document Image Machine Translation (DIMT) seeks to translate text embedded in document images from one language to another by jointly modeling textual content and page layout, bridging optical character recognition (OCR) and natural language processing (NLP). The DIMT 2025 Challenge advances research on end-to-end document image translation, a rapidly evolving area within multimodal document understanding. The competition features two tracks, OCR-free and OCR-based, each with two subtasks for small (fewer than 1B parameters) and large (more than 1B parameters) models. Participants submit a single unified DIMT system, with the option to incorporate the provided OCR transcripts. Running from December 10, 2024 to April 20, 2025, the competition attracted 69 teams and 27 valid submissions in total: Track 1 drew 34 teams and 13 valid submissions, while Track 2 drew 35 teams and 14 valid submissions. In this report, we present the challenge motivation, dataset construction, task definitions, evaluation protocol, and a summary of results. Our analysis shows that large-model approaches establish a promising new paradigm for translating complex-layout document images and highlights substantial opportunities for future research.