🤖 AI Summary
This work addresses the catastrophic forgetting of monolingual capabilities, particularly OCR, induced by supervised fine-tuning of multimodal large language models (MLLMs) for document image machine translation (DIMT). The authors propose Synchronously Self-Reviewing (SSR), a fine-tuning paradigm that prompts the model to generate OCR text as an intermediate step before producing the translation. SSR leverages the model's own strong OCR output as a self-supervised signal to jointly optimize cross-modal understanding and cross-lingual translation, emulating the "bilingual cognitive advantage" by keeping the OCR capability active during translation training. Experiments on multiple DIMT benchmarks show that SSR improves translation quality (BLEU +3.2) while maintaining or even enhancing OCR accuracy (CER −1.8%), effectively mitigating multi-task interference and enabling generalization across modalities and tasks.
📝 Abstract
Multimodal Large Language Models (MLLMs) have shown strong performance in document image tasks, especially Optical Character Recognition (OCR). However, they struggle with Document Image Machine Translation (DIMT), which requires handling both cross-modal and cross-lingual challenges. Previous efforts to enhance DIMT capability through Supervised Fine-Tuning (SFT) on DIMT datasets often cause the model to forget its existing monolingual abilities, such as OCR. To address these challenges, we introduce a novel fine-tuning paradigm named Synchronously Self-Reviewing (SSR), inspired by the concept of the "Bilingual Cognitive Advantage". Specifically, SSR prompts the model to generate OCR text before producing the translation, which allows the model to leverage its strong monolingual OCR ability while learning to translate text across languages. Comprehensive experiments demonstrate that the proposed SSR learning helps mitigate catastrophic forgetting, improving the generalization ability of MLLMs on both OCR and DIMT tasks.
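The abstract describes the core of SSR, prompting the model to emit OCR text before the translation, but does not give a concrete data format. The sketch below shows one plausible way such a supervised sample could be constructed; the function name `build_ssr_sample`, the prompt wording, and the `OCR:`/`Translation:` target layout are illustrative assumptions, not the paper's actual implementation.

```python
def build_ssr_sample(ocr_text: str, translation: str) -> dict:
    """Hypothetical SSR-style training sample: the target first repeats the
    source-language OCR transcription, then gives the translation, so the
    model's OCR ability is exercised on every DIMT fine-tuning step."""
    prompt = (
        "First transcribe the text in the document image, "
        "then translate it into the target language."
    )
    # Concatenating OCR output and translation into one target sequence
    # makes transcription an explicit intermediate step of translation.
    target = f"OCR: {ocr_text}\nTranslation: {translation}"
    return {"prompt": prompt, "target": target}


sample = build_ssr_sample("Guten Tag", "Good day")
print(sample["target"])
```

Under this sketch, the standard SFT loss over the whole target sequence supervises both the OCR and the translation spans at once, which is how the paradigm could counteract forgetting of the monolingual OCR skill.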