๐ค AI Summary
This work proposes a multimodal end-to-end neural beamforming framework that integrates visual and audio signals to address the limitations of conventional single-channel speech enhancement methods in challenging acoustic conditionsโsuch as low signal-to-noise ratios, high reverberation, overlapping speech, and non-stationary noise. For the first time, lip motion features extracted from a pretrained visual speech recognition model are leveraged to support voice activity detection and target speaker localization. An attention mechanism is employed to enable robust speech enhancement for both static and dynamic speakers. Experimental results demonstrate that the proposed approach significantly outperforms existing baselines across diverse complex scenarios, achieving substantial improvements in both enhancement quality and system robustness.
๐ Abstract
Recent studies have demonstrated that incorporating auxiliary information, such as speaker voiceprint or visual cues, can substantially improve Speech Enhancement (SE) performance. However, single-channel methods often yield suboptimal results in low signal-to-noise ratio (SNR) conditions, when there is high reverberation, or in complex scenarios involving dynamic speakers, overlapping speech, or non-stationary noise. To address these issues, we propose a novel Visual-Informed Neural Beamforming Network (VI-NBFNet), which integrates microphone array signal processing and deep neural networks (DNNs) using multimodal input features. The proposed network leverages a pretrained visual speech recognition model to extract lip movements as input features, which serve for voice activity detection (VAD) and target speaker identification. The system is intended to handle both static and moving speakers by introducing a supervised end-to-end beamforming framework equipped with an attention mechanism. The experimental results demonstrated that the proposed audiovisual system has achieved better SE performance and robustness for both stationary and dynamic speaker scenarios, compared to several baseline methods.