Real-Time System for Audio-Visual Target Speech Enhancement

📅 2025-09-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the severe degradation of target speech in single-channel audio by environmental noise and interfering speakers—and the limited robustness of conventional audio-only methods—this paper proposes the first interactive audio-visual speech enhancement system capable of real-time operation on commodity CPUs. The system jointly processes raw single-channel audio and lip-motion visual features extracted from a pre-trained audio-visual speech recognition model, employing an end-to-end architecture for speech separation and enhancement. It requires no GPU acceleration, supports synchronous real-time input from microphone and camera, and delivers enhanced speech directly to headphones with minimal latency. Experiments demonstrate consistent performance across diverse realistic noise conditions and multi-talker interference scenarios, exhibiting strong generalization and significantly outperforming audio-only baselines. This work bridges a critical gap by introducing a lightweight, interactive, and CPU-efficient audio-visual speech enhancement solution.

Technology Category

Application Category

📝 Abstract
We present a live demonstration for RAVEN, a real-time audio-visual speech enhancement system designed to run entirely on a CPU. In single-channel, audio-only settings, speech enhancement is traditionally approached as the task of extracting clean speech from environmental noise. More recent work has explored the use of visual cues, such as lip movements, to improve robustness, particularly in the presence of interfering speakers. However, to our knowledge, no prior work has demonstrated an interactive system for real-time audio-visual speech enhancement operating on CPU hardware. RAVEN fills this gap by using pretrained visual embeddings from an audio-visual speech recognition model to encode lip movement information. The system generalizes across environmental noise, interfering speakers, transient sounds, and even singing voices. In this demonstration, attendees will be able to experience live audio-visual target speech enhancement using a microphone and webcam setup, with clean speech playback through headphones.
Problem

Research questions and friction points this paper is trying to address.

Developing real-time speech enhancement using visual lip movement cues
Creating CPU-based system for audio-visual target speech separation
Generalizing enhancement across noise, speakers, and singing voices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-time audio-visual speech enhancement on CPU
Uses pretrained visual embeddings from recognition model
Generalizes across noise, speakers, and singing voices
🔎 Similar Papers
No similar papers found.
T
T. Aleksandra Ma
Bose Corporation, Framingham, USA
S
Sile Yin
Bose Corporation, Framingham, USA
Li-Chia Yang
Li-Chia Yang
Bose Corporation
Deep LearningMusic Information RetrievalSpeech Enhancement
S
Shuo Zhang
Bose Corporation, Framingham, USA