Contextual Speech Extraction: Leveraging Textual History as an Implicit Cue for Target Speech Extraction

📅 2025-03-11
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper introduces Contextual Speech Extraction (CSE), a novel paradigm for target speech extraction that relies solely on conversational text history to implicitly identify and isolate the target speaker's speech stream, without requiring pre-enrolled voice samples, facial video, spatial cues, or other conventional supervision. Methodologically, it is the first to employ pure textual context as the sole conditioning cue, proposing an end-to-end Transformer-based waveform separation model. The architecture integrates cross-modal attention and multi-task joint training, enabling flexible inference conditioned on textual context alone, on an enrollment utterance alone, or on both combined. Experiments demonstrate that as few as two turns of contextual text achieve over 90% accuracy in identifying the target speech stream. The approach is validated across three benchmark datasets, confirming its effectiveness in mobile voice-messaging scenarios. Code and illustrative samples are publicly released.
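The paper's layer-level details are not given here, but the cross-modal attention the summary mentions can be sketched generically: mixture-frame features attend over embeddings of the dialogue history, and the resulting per-frame context vectors condition the separator on the target speaker. The shapes, dimensions, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(mix_frames, text_ctx):
    """Scaled dot-product attention from mixture frames (queries) to
    text-context embeddings (keys/values). Returns a per-frame context
    summary and the attention weights. Shapes (assumed): mix_frames
    is (T, d), text_ctx is (N, d)."""
    d = mix_frames.shape[-1]
    scores = mix_frames @ text_ctx.T / np.sqrt(d)   # (T, N)
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ text_ctx, weights              # (T, d), (T, N)

# Toy usage: 4 mixture frames attending over 3 context-token embeddings.
rng = np.random.default_rng(0)
mix = rng.standard_normal((4, 8))
ctx = rng.standard_normal((3, 8))
out, w = cross_modal_attention(mix, ctx)
```

In a full separator, `out` would typically be fused with the frame features (e.g. by concatenation or addition) before the mask- or waveform-prediction layers; that fusion step is omitted here.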

📝 Abstract
In this paper, we investigate a novel approach for Target Speech Extraction (TSE), which relies solely on textual context to extract the target speech. We refer to this task as Contextual Speech Extraction (CSE). Unlike traditional TSE methods that rely on pre-recorded enrollment utterances, video of the target speaker's face, spatial information, or other explicit cues to identify the target stream, our proposed method requires only a few turns of previous dialogue (or monologue) history. This approach is naturally feasible in mobile messaging environments where voice recordings are typically preceded by textual dialogue that can be leveraged implicitly. We present three CSE models and analyze their performances on three datasets. Through our experiments, we demonstrate that even when the model relies purely on dialogue history, it can achieve over 90% accuracy in identifying the correct target stream with only two previous dialogue turns. Furthermore, we show that by leveraging both textual context and enrollment utterances as cues during training, we further enhance our model's flexibility and effectiveness, allowing us to use either cue during inference, or combine both for improved performance. Samples and code are available at https://miraodasilva.github.io/cse-project-page.
Problem

Research questions and friction points this paper is trying to address.

Extract target speech using only textual context.
Improve speech extraction accuracy with dialogue history.
Combine textual and enrollment cues for better performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses textual dialogue history for speech extraction.
Achieves high accuracy without explicit cues.
Combines textual context and enrollment cues for flexibility.