Incorporating Linguistic Constraints from External Knowledge Source for Audio-Visual Target Speech Extraction

📅 2025-06-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing audio-visual target speech extraction (AV-TSE) models rely heavily on visual cues while neglecting linguistic priors—such as syntax and semantics—thereby limiting speech quality and intelligibility. To address this, we propose the first end-to-end AV-TSE framework that incorporates syntactic and semantic representations from pretrained speech-language models (PSLMs) or language models (PLMs) as lightweight, inference-free auxiliary supervision signals. Our method introduces no additional computational overhead during inference and maintains robust performance under degraded visual conditions or in multilingual settings. Experiments demonstrate that the proposed linguistic constraints consistently improve speech quality (measured by SI-SNR↑) and intelligibility (measured by WER↓) across diverse benchmarks. This work establishes a generalizable, language-driven paradigm for AV-TSE, bridging the gap between multimodal speech processing and structured linguistic knowledge without compromising efficiency.

Technology Category

Application Category

📝 Abstract
Audio-visual target speaker extraction (AV-TSE) models primarily rely on target visual cues to isolate the target speaker's voice from others. We know that humans leverage linguistic knowledge, such as syntax and semantics, to support speech perception. Inspired by this, we explore the potential of pre-trained speech-language models (PSLMs) and pre-trained language models (PLMs) as auxiliary knowledge sources for AV-TSE. In this study, we propose incorporating the linguistic constraints from PSLMs or PLMs for the AV-TSE model as additional supervision signals. Without introducing any extra computational cost during inference, the proposed approach consistently improves speech quality and intelligibility. Furthermore, we evaluate our method in multi-language settings and visual cue-impaired scenarios and show robust performance gains.
Problem

Research questions and friction points this paper is trying to address.

Enhancing AV-TSE with linguistic constraints from pre-trained models
Improving speech quality and intelligibility without extra inference cost
Validating robustness in multi-language and visual-impaired scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes pre-trained speech-language models
Incorporates linguistic constraints as supervision
Enhances speech quality without extra cost
🔎 Similar Papers
No similar papers found.
Wenxuan Wu
Wenxuan Wu
Oregon State University; CASIA
computer visionPoint Clouds Processing
S
Shuai Wang
SRIBD, School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; School of Intelligence Science and Technology, Nanjing University, Suzhou, China
Xixin Wu
Xixin Wu
The Chinese University of Hong Kong
H
Helen Meng
Department of SEEM, The Chinese University of Hong Kong, Hong Kong SAR, China
Haizhou Li
Haizhou Li
The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China; NUS, Singapore
Automatic Speech RecognitionSpeaker RecognitionLanguage RecognitionVoice ConversionMachine Translation