Inter-Speaker Relative Cues for Text-Guided Target Speech Extraction

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses target speaker extraction from overlapping speech. It proposes a text-guided speech separation method built on relative speaker cues, departing from conventional paradigms that condition on fixed categorical attributes (e.g., gender or identity). Continuous and discrete relative cues, including age difference, pitch contrast, speaking order, spatial proximity, and language dissimilarity, are jointly modeled as composable relational features. The approach fine-tunes a WavLM Base+ encoder augmented with a CNN frontend, fuses the multi-dimensional cues, and is compared against a Conv1d-encoder baseline. Experiments show that fusing all cues yields the best performance; gender and speaking-order cues are the most robust under cross-lingual and highly reverberant conditions; and supplementary cues (pitch, loudness, spatial distance) markedly improve separation quality in complex scenarios. By removing rigid category constraints, the framework improves generalization and makes training-data construction substantially easier.

📝 Abstract
We propose a novel approach that utilizes inter-speaker relative cues for distinguishing target speakers and extracting their voices from mixtures. Continuous cues (e.g., temporal order, age, pitch level) are grouped by relative differences, while discrete cues (e.g., language, gender, emotion) retain their categories. Relative cues offer greater flexibility than fixed speech-attribute classification, making it much easier to expand text-guided target speech extraction datasets. Our experiments show that combining all relative cues yields better performance than random subsets, with gender and temporal order being the most robust across languages and reverberant conditions. Additional cues, such as pitch level, loudness, distance, speaking duration, language, and pitch range, also bring notable benefits in complex scenarios. Fine-tuning pre-trained WavLM Base+ and CNN encoders improves overall performance over the baseline of using only a Conv1d encoder.
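The cue-grouping idea above can be illustrated with a minimal sketch: continuous attributes of the two speakers are compared and discretized into relative categories, while differing discrete attributes keep their labels, and the selected cues compose into a text query. The attribute names, thresholds, and prompt wording here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of relative-cue construction; attribute names and
# phrasing are assumptions for illustration, not the paper's dataset format.

def relative_cues(target: dict, interferer: dict) -> list[str]:
    """Compare two speakers and emit composable relative-cue phrases."""
    cues = []
    # Continuous cues are grouped by relative difference, not absolute value.
    if target["pitch_hz"] > interferer["pitch_hz"]:
        cues.append("the speaker with the higher pitch")
    else:
        cues.append("the speaker with the lower pitch")
    if target["start_s"] < interferer["start_s"]:
        cues.append("the speaker who starts talking first")
    else:
        cues.append("the speaker who starts talking later")
    # Discrete cues retain their categories when the speakers differ.
    if target["gender"] != interferer["gender"]:
        cues.append(f"the {target['gender']} speaker")
    if target["language"] != interferer["language"]:
        cues.append(f"the speaker talking in {target['language']}")
    return cues

def build_prompt(cues: list[str]) -> str:
    """Fuse any subset of cues into a single text query."""
    return "Extract " + ", ".join(cues) + "."

target = {"pitch_hz": 210.0, "start_s": 0.4, "gender": "female", "language": "English"}
interferer = {"pitch_hz": 120.0, "start_s": 0.0, "gender": "male", "language": "French"}
print(build_prompt(relative_cues(target, interferer)))
```

Because cues are relational rather than tied to a fixed attribute taxonomy, any pair of speakers with a measurable difference yields a valid training example, which is what makes dataset expansion easier.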
Problem

Research questions and friction points this paper is trying to address.

Distinguishing target speakers using inter-speaker relative cues
Extracting voices from mixtures with flexible text-guided cues
Improving performance via combined cues and fine-tuned encoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes inter-speaker relative cues
Groups continuous and discrete cues
Fine-tunes pre-trained WavLM encoders
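The cue-fusion contribution above can be sketched as follows: each cue dimension is embedded (here as a toy one-hot vector) and the embeddings are concatenated into one fixed-length conditioning vector, so any subset of cues can drive the extractor. The cue vocabulary and encoding are assumptions for illustration, not the paper's actual architecture.

```python
# Illustrative cue-fusion sketch; the cue vocabulary and one-hot encoding
# are assumptions, not the paper's design.

CUE_VOCAB = {
    "gender": ["male", "female"],
    "order": ["first", "later"],
    "pitch": ["higher", "lower"],
}

def one_hot(value: str, choices: list[str]) -> list[float]:
    """Encode a category as a one-hot vector (all zeros if absent)."""
    return [1.0 if c == value else 0.0 for c in choices]

def fuse_cues(selected: dict[str, str]) -> list[float]:
    """Concatenate per-cue embeddings; unselected cues become all-zero
    slots, so every subset of cues maps to the same vector length."""
    vec: list[float] = []
    for name, choices in CUE_VOCAB.items():
        vec.extend(one_hot(selected.get(name, ""), choices))
    return vec

print(fuse_cues({"gender": "female", "order": "first"}))
```

Keeping the conditioning vector a fixed length regardless of which cues are supplied is one simple way to let the model train on arbitrary cue combinations, matching the finding that full cue fusion outperforms random subsets.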