Visual-Informed Speech Enhancement Using Attention-Based Beamforming

📅 2026-03-05
🏛️ IEEE Transactions on Audio, Speech, and Language Processing
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work proposes a multimodal end-to-end neural beamforming framework that integrates visual and audio signals to address the limitations of conventional single-channel speech enhancement in challenging acoustic conditions such as low signal-to-noise ratios, high reverberation, overlapping speech, and non-stationary noise. Lip motion features extracted from a pretrained visual speech recognition model support voice activity detection and target speaker identification, while an attention mechanism enables robust enhancement of both static and moving speakers. Experiments show that the proposed approach outperforms several baseline methods across these scenarios, improving both enhancement quality and robustness.
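
The sketch below is a minimal illustration of the fusion idea, not the authors' VI-NBFNet code: lip-reading embeddings act as attention queries over multichannel audio features. All module names, dimensions, the use of PyTorch's nn.MultiheadAttention, and the assumption that the two streams are frame-aligned are ours.

import torch
import torch.nn as nn

class VisualAudioAttention(nn.Module):
    # Hypothetical module: fuses lip embeddings with audio features so a
    # downstream network can estimate target-speaker beamforming filters.
    def __init__(self, audio_dim=257, visual_dim=512, embed_dim=256, n_heads=4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, embed_dim)    # audio features per frame
        self.visual_proj = nn.Linear(visual_dim, embed_dim)  # lip embeddings per frame
        self.cross_attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

    def forward(self, audio_feats, visual_feats):
        # audio_feats:  (batch, frames, audio_dim), e.g. log-magnitude STFT bins
        # visual_feats: (batch, frames, visual_dim), assumed frame-aligned with audio
        q = self.visual_proj(visual_feats)      # queries from the visual stream
        kv = self.audio_proj(audio_feats)       # keys/values from the audio stream
        fused, _ = self.cross_attn(q, kv, kv)   # target-speaker-aware features
        return fused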

📝 Abstract
Recent studies have demonstrated that incorporating auxiliary information, such as speaker voiceprints or visual cues, can substantially improve Speech Enhancement (SE) performance. However, single-channel methods often yield suboptimal results in low signal-to-noise ratio (SNR) conditions, under high reverberation, or in complex scenarios involving dynamic speakers, overlapping speech, or non-stationary noise. To address these issues, we propose a novel Visual-Informed Neural Beamforming Network (VI-NBFNet), which integrates microphone array signal processing and deep neural networks (DNNs) using multimodal input features. The proposed network leverages a pretrained visual speech recognition model to extract lip movements as input features, which are used for voice activity detection (VAD) and target speaker identification. The system handles both static and moving speakers through a supervised end-to-end beamforming framework equipped with an attention mechanism. Experimental results demonstrate that the proposed audiovisual system achieves better SE performance and robustness than several baseline methods in both stationary and dynamic speaker scenarios.
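
As background for the beamforming step, the sketch below shows generic frequency-domain filter-and-sum beamforming. That VI-NBFNet predicts complex per-channel, per-frequency filter weights is our assumption for illustration; the paper's exact formulation may differ.

import torch

def filter_and_sum(stft_mix, weights):
    # stft_mix: (batch, channels, freq, frames) complex multichannel STFT
    # weights:  (batch, channels, freq, frames) complex filters, e.g. DNN outputs
    # Returns the enhanced single-channel STFT: y(f,t) = sum_c w_c(f,t)^* x_c(f,t)
    return torch.sum(weights.conj() * stft_mix, dim=1)

x = torch.randn(1, 6, 257, 100, dtype=torch.complex64)  # toy 6-mic mixture
w = torch.randn(1, 6, 257, 100, dtype=torch.complex64)  # stand-in for DNN-predicted filters
y = filter_and_sum(x, w)                                 # (1, 257, 100)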
Problem

Research questions and friction points this paper is trying to address.

Speech Enhancement
Low SNR
Dynamic Speakers
Overlapping Speech
Non-stationary Noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual-Informed Speech Enhancement
Neural Beamforming
Attention Mechanism
Multimodal Fusion
Lip Movement Features
Chihyun Liu
Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu, Taiwan
Jiaxuan Fan
Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
Mingtung Sun
Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
Michael Anthony
Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
Mingsian R. Bai
Department of Power Mechanical Engineering and Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
Yu Tsao
Research Fellow (Professor), Deputy Director, CITI, Academia Sinica
Assistive Oral Communication Technologies · Speech Enhancement · Voice Conversion · Speech Assessment