MLLM-based Speech Recognition: When and How is Multimodality Beneficial?

πŸ“… 2025-07-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study investigates the mechanisms and conditions under which multimodal inputs enhance automatic speech recognition (ASR) robustness in noisy environments. We propose a multimodal large language model (MLLM) that integrates Transformer and Mamba architectures to jointly model audio, lip-motion video, and textual context. Through systematic ablation studies, we identify four critical factors governing ASR robustness: modality synchronization, visual representation quality, input ordering, and loss weighting. Results show that lip-motion cues yield substantial gains under high noise, and high-fidelity visual encodings consistently improve recognition accuracy across signal-to-noise ratios (SNRs). Notably, we quantitatively characterize the performance degradation boundary induced by modality asynchrony at varying SNRsβ€”the first such analysis in multimodal ASR. Our work establishes an interpretable, synergistic modeling paradigm and provides concrete, empirically grounded optimization guidelines for robust multimodal ASR systems.

πŸ“ Abstract
Recent advances in multi-modal large language models (MLLMs) have opened new possibilities for unified modeling of speech, text, images, and other modalities. Building on our prior work, this paper examines the conditions and model architectures under which multiple input modalities can improve automatic speech recognition (ASR) accuracy in noisy environments. Through experiments on synthetic and real-world data, we find that (1) harnessing more modalities usually improves ASR accuracy, as each modality provides complementary information, but the improvement depends on the amount of auditory noise. (2) Synchronized modalities (e.g., lip movements) are more useful at high noise levels whereas unsynchronized modalities (e.g., image context) are most helpful at moderate noise levels. (3) Higher-quality visual representations consistently improve ASR accuracy, highlighting the importance of developing more powerful visual encoders. (4) Mamba exhibits similar trends regarding the benefits of multimodality as do Transformers. (5) The input order of modalities as well as their weights in the loss function can significantly impact accuracy. These findings both offer practical insights and help to deepen our understanding of multi-modal speech recognition under challenging conditions.
Problem

Research questions and friction points this paper is trying to address.

When does multimodality improve speech recognition accuracy?
How do synchronized and unsynchronized modalities aid in noisy environments?
What factors affect the impact of visual representations on ASR?
Innovation

Methods, ideas, or system contributions that make the work stand out.

MLLMs unify speech, text, and image modalities
Synchronized modalities aid ASR in high noise
Better visual encoders boost ASR accuracy
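The paper reports that the weights assigned to each modality in the loss function can significantly affect accuracy. A minimal, hypothetical sketch of that idea (not the authors' actual code; the modality names and weight values are illustrative assumptions) is a weighted sum of per-modality losses:

```python
# Hypothetical sketch: weighting per-modality loss terms, one of the
# factors the paper identifies as influential for multimodal ASR.
def combined_loss(losses, weights):
    """losses, weights: dicts keyed by modality name -> float scalar."""
    return sum(weights[m] * losses[m] for m in losses)

# Illustrative values only: emphasize the audio loss, down-weight
# the auxiliary visual terms.
losses = {"audio": 2.0, "lip_video": 1.0, "image_context": 1.5}
weights = {"audio": 1.0, "lip_video": 0.3, "image_context": 0.1}
print(round(combined_loss(losses, weights), 2))  # 2.45
```

Tuning such weights (e.g., via validation-set sweeps at different SNRs) is one of the practical optimization knobs the paper's findings point to.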
Yiwen Guan
Worcester Polytechnic Institute, MA 01609, US
Viet Anh Trinh
Nvidia
Automatic Speech Recognition · Speech Enhancement · Machine Learning · Speech Processing
Vivek Voleti
Worcester Polytechnic Institute, MA 01609, US
Jacob Whitehill
Worcester Polytechnic Institute
Artificial Intelligence