Visual-Aware Speech Recognition for Noisy Scenarios

📅 2025-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automatic speech recognition (ASR) and audio-visual speech recognition (AVSR) models often fail to leverage visual cues robustly under noisy conditions, especially when lip movements are occluded or unavailable. Method: This paper proposes a noise-aware, disentangled visual modeling framework grounded in generalized scene-level visual information. It is the first to exploit non-lip visual cues—such as background scene, illumination, and motion—to explicitly model noise sources without requiring frontal speaker visibility. We introduce an audio-visual collaborative multi-head attention bridging mechanism for end-to-end joint prediction of transcriptions and noise labels. Leveraging pre-trained speech and visual encoders, we employ a scalable audio-visual pairing pipeline to ensure strong visual-noise correlation. Results: On multi-noise benchmarks, our method reduces word error rate (WER) by 23.6% relative to audio-only ASR, demonstrating that scene-level visual priors play a critical role in speech enhancement and noise-robust recognition.
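The bridging mechanism described above can be sketched in a minimal form: audio-encoder frames attend over visual-encoder features via multi-head cross-attention, and the fused representation feeds two output heads, one for per-frame transcription logits and one for a clip-level noise label. This is an illustrative NumPy sketch, not the paper's implementation; all weight matrices, dimensions, and vocabulary/noise-class sizes are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_bridge(audio, visual, num_heads, rng):
    """Cross-attention bridge: audio frames query visual features.

    audio:  (T_a, d) audio-encoder outputs
    visual: (T_v, d) visual-encoder outputs
    Returns fused audio features of shape (T_a, d).
    """
    T_a, d = audio.shape
    assert d % num_heads == 0
    d_h = d // num_heads
    # Random projections stand in for learned weights (illustration only).
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    Q = (audio @ Wq).reshape(T_a, num_heads, d_h).transpose(1, 0, 2)
    K = (visual @ Wk).reshape(-1, num_heads, d_h).transpose(1, 0, 2)
    V = (visual @ Wv).reshape(-1, num_heads, d_h).transpose(1, 0, 2)
    # Scaled dot-product attention per head: (heads, T_a, T_v)
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_h), axis=-1)
    fused = (attn @ V).transpose(1, 0, 2).reshape(T_a, d)
    return audio + fused @ Wo  # residual connection back to the audio stream

rng = np.random.default_rng(0)
audio = rng.standard_normal((50, 64))   # 50 audio frames, feature dim 64
visual = rng.standard_normal((10, 64))  # 10 video frames, feature dim 64
fused = multi_head_bridge(audio, visual, num_heads=8, rng=rng)

# Joint prediction: two heads share the fused representation.
W_ctc = rng.standard_normal((64, 30))    # hypothetical 30-symbol vocabulary
W_noise = rng.standard_normal((64, 5))   # hypothetical 5 noise classes
char_logits = fused @ W_ctc                   # (50, 30) per-frame transcription logits
noise_logits = fused.mean(axis=0) @ W_noise   # (5,) clip-level noise-label logits
```

In a trained model the projections would be learned jointly, so that visual scene evidence (e.g. traffic in frame) sharpens both the transcription and the predicted noise label.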

📝 Abstract
Humans have the ability to utilize visual cues, such as lip movements and visual scenes, to enhance auditory perception, particularly in noisy environments. However, current Automatic Speech Recognition (ASR) and Audio-Visual Speech Recognition (AVSR) models often struggle in noisy scenarios. To address this, we propose a model that improves transcription by correlating noise sources with visual cues. Unlike works that rely on lip motion and require the speaker's visibility, we exploit broader visual information from the environment. This allows our model to naturally filter speech from noise and improve transcription, much like humans do in noisy scenarios. Our method re-purposes pretrained speech and visual encoders, linking them with multi-headed attention. This approach enables the transcription of speech and the prediction of noise labels in video inputs. We introduce a scalable pipeline to develop audio-visual datasets, where visual cues correlate to noise in the audio. We show significant improvements over existing audio-only models in noisy scenarios. Results also highlight that visual cues play a vital role in improved transcription accuracy.
Problem

Research questions and friction points this paper is trying to address.

Improving speech recognition in noisy environments using visual cues
Leveraging environmental visual information to filter noise from speech
Developing scalable audio-visual datasets for enhanced transcription accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linking speech and visual encoders with multi-headed attention
Exploiting broader visual information from the environment
Scalable pipeline for audio-visual dataset development
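The pairing pipeline in the list above can be sketched as follows: clean speech is mixed at a target SNR with a noise clip whose category matches the video's scene tag, so that the resulting training triples carry a strong visual-noise correlation. This is a minimal sketch under assumed data structures; the catalogs, tags, waveforms, and the `mix_at_snr`/`build_pairs` helpers are hypothetical, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical catalogs: each noise clip and video carries a scene/noise tag.
noise_clips = {
    "traffic": np.sin(np.linspace(0, 200, 16000)),              # stand-in waveform
    "rain":    np.random.default_rng(1).standard_normal(16000) * 0.1,
}
videos = [
    {"id": "v1", "scene_tag": "traffic"},
    {"id": "v2", "scene_tag": "rain"},
]

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db, then add."""
    n = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(n ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * n

def build_pairs(speech, videos, noise_clips, snr_db=5.0):
    """Yield (noisy_audio, video_id, noise_label) triples where the audible
    noise category matches the paired video's scene tag."""
    for v in videos:
        label = v["scene_tag"]
        yield mix_at_snr(speech, noise_clips[label], snr_db), v["id"], label

speech = np.random.default_rng(0).standard_normal(16000) * 0.05  # stand-in utterance
dataset = list(build_pairs(speech, videos, noise_clips))
```

Because the noise label is read off the video's scene tag, the pipeline scales with available video metadata rather than manual annotation, which is the property the Innovation list emphasizes.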