🤖 AI Summary
This study investigates the mechanisms and conditions under which multimodal inputs enhance automatic speech recognition (ASR) robustness in noisy environments. We propose a multimodal large language model (MLLM) that integrates Transformer and Mamba architectures to jointly model audio, lip-motion video, and textual context. Through systematic ablation studies, we identify four critical factors governing ASR robustness: modality synchronization, visual representation quality, input ordering, and loss weighting. Results show that lip-motion cues yield substantial gains under high noise, and high-fidelity visual encodings consistently improve recognition accuracy across signal-to-noise ratios (SNRs). Notably, we quantitatively characterize the performance degradation boundary induced by modality asynchrony at varying SNRs, the first such analysis in multimodal ASR. Our work establishes an interpretable, synergistic modeling paradigm and provides concrete, empirically grounded optimization guidelines for robust multimodal ASR systems.
📄 Abstract
Recent advances in multimodal large language models (MLLMs) have opened new possibilities for unified modeling of speech, text, images, and other modalities. Building on our prior work, this paper examines the conditions and model architectures under which multiple input modalities can improve automatic speech recognition (ASR) accuracy in noisy environments. Through experiments on synthetic and real-world data, we find that (1) harnessing more modalities usually improves ASR accuracy, as each modality provides complementary information, but the size of the improvement depends on the amount of auditory noise; (2) synchronized modalities (e.g., lip movements) are more useful at high noise levels, whereas unsynchronized modalities (e.g., image context) are most helpful at moderate noise levels; (3) higher-quality visual representations consistently improve ASR accuracy, highlighting the importance of developing more powerful visual encoders; (4) Mamba exhibits trends similar to those of Transformers regarding the benefits of multimodality; and (5) the input order of modalities, as well as their weights in the loss function, can significantly impact accuracy. These findings offer practical insights and deepen our understanding of multimodal speech recognition under challenging conditions.
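The experiments vary auditory noise across SNR levels. As a minimal sketch (not the paper's actual pipeline), noisy test inputs at a target SNR can be synthesized by scaling a noise signal against clean speech; the `mix_at_snr` helper below is a hypothetical illustration assuming 1-D NumPy waveform arrays of equal length:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add `noise` to `speech`, scaled so the speech-to-noise power ratio is `snr_db`."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Target noise power for the requested SNR: P_n = P_s / 10^(SNR/10)
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return speech + scaled_noise

# Example: corrupt a clean waveform at 0 dB SNR (equal speech and noise power).
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(16000)
mixed = mix_at_snr(speech, noise, 0.0)
```

Sweeping `snr_db` from high (clean) to low (heavily corrupted) values reproduces the kind of noise-level axis along which the modality contributions above are compared.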