🤖 AI Summary
Current speech translation (ST) typically relies on cascaded ASR+MT pipelines, which incur high latency and cannot exploit multimodal contextual cues (e.g., visual information) for disambiguation; meanwhile, multimodal foundation models (MMFMs) lack translation-specific capabilities, while large translation models lack multimodal perception. To address these limitations, we propose OmniFusion, an end-to-end multilingual multimodal translation framework. Our approach employs a modular fusion architecture that joins Omni 2.5-7B (the MMFM) and SeedX PPO-7B (the translation LLM) through a novel cross-modal hidden-state fusion mechanism, enabling unified modeling of audio, image, and text inputs. In simultaneous ST (SimulST), our method reduces latency by approximately one second relative to cascaded pipelines, improves semantic disambiguation and translation quality, and supports diverse translation scenarios, including speech-to-text, speech-and-image-to-text, and text-and-image-to-text.
📝 Abstract
There has been significant progress in open-source text-only translation large language models (LLMs), with better language coverage and quality. However, these models can only be used in cascaded pipelines for speech translation (ST), performing automatic speech recognition first, followed by translation. This introduces additional latency, which is particularly critical in simultaneous ST (SimulST), and prevents the model from exploiting multimodal context, such as images, which can aid disambiguation. Pretrained multimodal foundation models (MMFMs) already possess strong perception and reasoning capabilities across multiple modalities, but generally lack the multilingual coverage and specialized translation performance of dedicated translation LLMs. To build an effective multimodal translation system, we propose an end-to-end approach that fuses MMFMs with translation LLMs. We introduce a novel fusion strategy that connects hidden states from multiple layers of a pretrained MMFM to a translation LLM, enabling joint end-to-end training. The resulting model, OmniFusion, built on Omni 2.5-7B as the MMFM and SeedX PPO-7B as the translation LLM, can perform speech-to-text, speech-and-image-to-text, and text-and-image-to-text translation. Experiments demonstrate that OmniFusion effectively leverages both audio and visual inputs, achieves a 1-second latency reduction in SimulST compared to cascaded pipelines, and improves overall translation quality. Code is available at https://github.com/saikoneru/OmniFusion.
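To make the fusion idea concrete, below is a minimal PyTorch sketch of one plausible way to connect hidden states from multiple MMFM layers to a translation LLM. The tapped layer indices, per-layer linear projections, and softmax-weighted mixing are illustrative assumptions, not the paper's confirmed design; consult the linked repository for the actual OmniFusion implementation.

```python
import torch
import torch.nn as nn


class HiddenStateFusion(nn.Module):
    """Illustrative multi-layer hidden-state fusion (assumptions, not the
    paper's verified architecture): hidden states tapped from several MMFM
    layers are projected into the translation LLM's embedding space and
    combined with learned per-layer weights."""

    def __init__(self, mmfm_dim: int, llm_dim: int, fused_layers=(8, 16, 24)):
        super().__init__()
        self.fused_layers = fused_layers  # hypothetical MMFM layer indices
        # One projection per tapped MMFM layer, mapping into the LLM's space.
        self.projections = nn.ModuleList(
            [nn.Linear(mmfm_dim, llm_dim) for _ in fused_layers]
        )
        # Learned scalar mixing weights across the tapped layers.
        self.layer_weights = nn.Parameter(torch.zeros(len(fused_layers)))

    def forward(self, mmfm_hidden_states):
        # mmfm_hidden_states: sequence of (batch, seq, mmfm_dim) tensors,
        # one per MMFM layer (e.g., as returned with output_hidden_states=True).
        projected = torch.stack(
            [
                proj(mmfm_hidden_states[i])
                for proj, i in zip(self.projections, self.fused_layers)
            ],
            dim=0,
        )  # (num_tapped, batch, seq, llm_dim)
        weights = torch.softmax(self.layer_weights, dim=0)
        # Weighted sum over tapped layers yields one fused representation
        # that can be fed to the translation LLM as input embeddings.
        fused = torch.einsum("l,lbsd->bsd", weights, projected)
        return fused
```

Under this reading, the fused states stand in for (or are prepended to) the translation LLM's token embeddings, and both the projections and the mixing weights are trained jointly end to end, which is what lets the decoder condition directly on audio and visual evidence without an intermediate ASR transcript.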