Scalable Multilingual Multimodal Machine Translation with Speech-Text Fusion

πŸ“… 2026-02-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work proposes a speech-guided machine translation (SMT) framework that addresses the limited scalability of existing image-guided multimodal machine translation due to the scarcity of multilingual image–text paired data. Departing from prior approaches, SMT leverages speech as the primary multimodal signal, integrating textual input with synthesized speech into a multimodal large language model. To reduce reliance on low-resource authentic speech data, the framework incorporates a self-evolution mechanism that iteratively refines the model through positive sample selection. Evaluated on the Multi30K benchmark, SMT achieves state-of-the-art performance, and further demonstrates superior average results across 108 translation directions in the FLORES-200 dataset. The findings confirm that the use of synthetic speech incurs negligible degradation in translation quality, enabling efficient and scalable multilingual machine translation.

πŸ“ Abstract
Multimodal Large Language Models (MLLMs) have achieved notable success in enhancing translation performance by integrating multimodal information. However, existing research primarily focuses on image-guided methods, whose applicability is constrained by the scarcity of multilingual image-text pairs. The speech modality overcomes this limitation due to its natural alignment with text and the abundance of existing speech datasets, which enable scalable language coverage. In this paper, we propose a Speech-guided Machine Translation (SMT) framework that integrates speech and text as fused inputs into an MLLM to improve translation quality. To mitigate reliance on low-resource data, we introduce a Self-Evolution Mechanism. The core components of this framework are a text-to-speech model, responsible for generating synthetic speech, and an MLLM capable of classifying synthetic speech samples and iteratively optimizing itself on the positive samples. Experimental results demonstrate that our framework surpasses all existing methods on the Multi30K multimodal machine translation benchmark, achieving new state-of-the-art results. Furthermore, on general machine translation datasets, particularly FLORES-200, it achieves average state-of-the-art performance across 108 translation directions. Ablation studies on CoVoST-2 confirm that differences between synthetic and authentic speech have negligible impact on translation quality. The code and models are released at https://github.com/yxduir/LLM-SRT.
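The abstract's Self-Evolution Mechanism can be sketched as an iterative loop: a TTS model synthesizes speech for source sentences, the MLLM classifies each speech-text pair, and only positive samples are retained for the next fine-tuning round. The sketch below is illustrative only; all function names and the toy scoring heuristic are stand-ins, not the authors' actual API.

```python
def synthesize_speech(text):
    """Stand-in for the text-to-speech model (returns a dummy waveform)."""
    return [float(ord(c)) for c in text]  # placeholder "audio"

def score_sample(speech, text):
    """Stand-in for the MLLM's sample classifier (returns a 0.0-1.0 score)."""
    return min(1.0, len(speech) / (len(text) * 2.0))  # toy heuristic

def self_evolve(corpus, rounds=2, threshold=0.4):
    """Iteratively collect positive synthetic-speech samples for training."""
    train_set = []
    for _ in range(rounds):
        # 1) Generate synthetic speech for every source sentence.
        candidates = [(synthesize_speech(t), t) for t in corpus]
        # 2) Keep only samples the classifier marks as positive.
        positives = [(s, t) for s, t in candidates
                     if score_sample(s, t) >= threshold]
        # 3) In the real framework, the MLLM would be fine-tuned on
        #    `positives` here before the next round.
        train_set.extend(positives)
    return train_set
```

With this toy scorer, `self_evolve(["hello world"], rounds=1)` retains the single sample, since the placeholder score is 0.5. The design point is that selection quality, not speech authenticity, drives the iterative improvement.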
Problem

Research questions and friction points this paper is trying to address.

Multimodal Machine Translation
Speech-Text Fusion
Multilingual Translation
Low-Resource Data
Scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speech-guided Machine Translation
Multimodal Large Language Model
Self-Evolution Mechanism
Speech-Text Fusion
Synthetic Speech
πŸ”Ž Similar Papers
No similar papers found.