🤖 AI Summary
Existing audio-visual target speaker extraction (AV-TSE) models rely heavily on visual cues while neglecting linguistic priors such as syntax and semantics, which limits speech quality and intelligibility. To address this, we propose the first end-to-end AV-TSE framework that incorporates syntactic and semantic representations from pre-trained speech-language models (PSLMs) or pre-trained language models (PLMs) as lightweight, inference-free auxiliary supervision signals. The method introduces no additional computational overhead at inference time and remains robust under degraded visual conditions and in multilingual settings. Experiments show that the proposed linguistic constraints consistently improve speech quality (measured by SI-SNR↑) and intelligibility (measured by WER↓) across diverse benchmarks. This work establishes a generalizable, language-driven paradigm for AV-TSE, bridging multimodal speech processing and structured linguistic knowledge without compromising efficiency.
📝 Abstract
Audio-visual target speaker extraction (AV-TSE) models primarily rely on target visual cues to isolate the target speaker's voice from others. Humans, however, also leverage linguistic knowledge, such as syntax and semantics, to support speech perception. Inspired by this, we explore the potential of pre-trained speech-language models (PSLMs) and pre-trained language models (PLMs) as auxiliary knowledge sources for AV-TSE. Specifically, we propose incorporating linguistic constraints from PSLMs or PLMs into the AV-TSE model as additional supervision signals. Without introducing any extra computational cost during inference, the proposed approach consistently improves speech quality and intelligibility. Furthermore, we evaluate our method in multilingual settings and visual cue-impaired scenarios, showing robust performance gains.
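To make the training-time role of the linguistic constraint concrete, below is a minimal PyTorch-style sketch of one plausible instantiation: the extractor is trained with the usual SI-SNR loss plus an auxiliary feature-matching loss against a frozen PSLM, so the PSLM is never invoked at inference. The names (`av_tse_model`, `pslm`, `lam`) and the cosine-distance form of the constraint are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def si_snr_loss(est, ref, eps=1e-8):
    """Negative scale-invariant SNR between estimated and reference waveforms [B, T]."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    proj = (torch.sum(est * ref, dim=-1, keepdim=True)
            / (torch.sum(ref ** 2, dim=-1, keepdim=True) + eps)) * ref
    noise = est - proj
    si_snr = 10 * torch.log10(
        (torch.sum(proj ** 2, dim=-1) + eps) / (torch.sum(noise ** 2, dim=-1) + eps))
    return -si_snr.mean()

def linguistic_constraint_loss(pslm, est_wav, ref_wav):
    """Distance between frozen-PSLM features of extracted and clean target speech.

    `pslm` is assumed to be any frozen pre-trained speech-language model mapping a
    waveform batch to frame-level features; cosine distance is an illustrative choice.
    """
    with torch.no_grad():
        ref_feat = pslm(ref_wav)   # target linguistic representation (no gradients)
    est_feat = pslm(est_wav)       # PSLM weights are frozen; gradients reach the extractor only
    return (1 - F.cosine_similarity(est_feat, ref_feat, dim=-1)).mean()

def training_step(av_tse_model, pslm, mixture, visual_cues, clean_target, lam=0.1):
    """One training step: signal-level SI-SNR loss plus the auxiliary linguistic loss."""
    est = av_tse_model(mixture, visual_cues)   # extracted target speech
    return si_snr_loss(est, clean_target) + lam * linguistic_constraint_loss(pslm, est, clean_target)
```

Because the PSLM appears only inside the training loss, dropping it at inference leaves the extractor's forward pass, and hence its computational cost, unchanged.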