Improving MLLM's Document Image Machine Translation via Synchronously Self-reviewing Its OCR Proficiency

📅 2025-07-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses catastrophic forgetting of monolingual capabilities—particularly OCR—induced by supervised fine-tuning in multimodal large language models (MLLMs) for document image machine translation (DIMT). We propose Synchronous Self-Check Tuning (SSCT), a novel fine-tuning paradigm that explicitly incorporates OCR text generation as an intermediate step within the translation pipeline. SSCT leverages the model’s own OCR output as self-supervised signal to jointly optimize cross-modal understanding and cross-lingual translation. Its core innovation lies in emulating “bilingual cognitive advantage” via a structured prompting mechanism that synchronously activates and preserves OCR capability during translation training. Experiments on multiple DIMT benchmarks demonstrate that SSCT significantly improves translation quality (BLEU +3.2) while maintaining or even enhancing OCR accuracy (CER −1.8%), effectively mitigating multi-task interference and enabling synergistic generalization across modalities and tasks.

Technology Category

Application Category

📝 Abstract
Multimodal Large Language Models (MLLMs) have shown strong performance in document image tasks, especially Optical Character Recognition (OCR). However, they struggle with Document Image Machine Translation (DIMT), which requires handling both cross-modal and cross-lingual challenges. Previous efforts to enhance DIMT capability through Supervised Fine-Tuning (SFT) on the DIMT dataset often result in the forgetting of the model's existing monolingual abilities, such as OCR. To address these challenges, we introduce a novel fine-tuning paradigm, named Synchronously Self-Reviewing (SSR) its OCR proficiency, inspired by the concept "Bilingual Cognitive Advantage". Specifically, SSR prompts the model to generate OCR text before producing translation text, which allows the model to leverage its strong monolingual OCR ability while learning to translate text across languages. Comprehensive experiments demonstrate the proposed SSR learning helps mitigate catastrophic forgetting, improving the generalization ability of MLLMs on both OCR and DIMT tasks.
Problem

Research questions and friction points this paper is trying to address.

Enhancing MLLM's Document Image Machine Translation performance
Preventing OCR ability forgetting during fine-tuning
Addressing cross-modal and cross-lingual challenges in DIMT
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synchronously Self-Reviewing (SSR) fine-tuning paradigm
Generates OCR text before translation text
Mitigates catastrophic forgetting in MLLMs
🔎 Similar Papers
No similar papers found.
Y
Yupu Liang
State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Y
Yaping Zhang
State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Zhiyang Zhang
Zhiyang Zhang
Nanjing University
NLPLLMAgentAIOps
Z
Zhiyuan Chen
State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Y
Yang Zhao
State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Lu Xiang
Lu Xiang
Institute of Automation, Chinese Academy of Sciences
Dialogue SystemsNLP
C
Chengqing Zong
State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Y
Yu Zhou
State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China; Fanyu AI Laboratory, Zhongke Fanyu Technology Co., Ltd, Beijing, China