Discriminative-Generative Target Speaker Extraction with Decoder-Only Language Models

📅 2026-01-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a common trade-off in target speaker extraction: aggressive suppression of interfering speakers often degrades the perceptual quality and naturalness of the extracted speech. To overcome this limitation, the authors propose a hybrid discriminative-generative framework in which a discriminative front-end robustly extracts the target speaker's speech, while a decoder-only language model, applied to this task for the first time, reconstructs high-fidelity speech in a neural audio codec space, either auto-regressively or non-auto-regressively. Through a two-stage architecture and a collaborative training strategy, the method balances speech quality, intelligibility, and speaker consistency beyond conventional SI-SDR optimization, improving both the naturalness of the generated speech and the overall robustness of the system.

📝 Abstract
Target speaker extraction (TSE) aims to recover the speech signal of a desired speaker from a mixed audio recording, given a short enrollment utterance. Most existing TSE approaches are based on discriminative modeling paradigms. Although effective at suppressing interfering speakers, these methods often struggle to produce speech with high perceptual quality and naturalness. To address this limitation, we first propose LauraTSE, a generative TSE model built upon an auto-regressive decoder-only language model. However, purely generative approaches may suffer from hallucinations, content drift, and limited controllability, which may undermine their reliability in complex acoustic scenarios. To overcome these challenges, we further introduce a discriminative-generative TSE framework. In this framework, a discriminative front-end is employed to robustly extract the target speaker's speech, yielding stable and controllable intermediate representations. A generative back-end then operates in the neural audio codec representation space to reconstruct fine-grained speech details and enhance perceptual quality. This two-stage design effectively combines the robustness and controllability of discriminative models with the superior naturalness and quality enhancement capabilities of generative models. Moreover, we systematically investigate collaborative training strategies for the proposed framework, including freezing or fine-tuning the front-end, incorporating an auxiliary SI-SDR loss, and exploring both auto-regressive and non-auto-regressive inference mechanisms. Experimental results demonstrate that the proposed framework achieves a more favorable trade-off among speech quality, intelligibility, and speaker consistency.
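The abstract mentions an auxiliary SI-SDR loss used when training the discriminative front-end jointly with the generative back-end. The paper does not give its formulation, but SI-SDR (scale-invariant signal-to-distortion ratio) is a standard objective in speech separation; a minimal numpy sketch of the usual definition (the function name and `eps` parameter are illustrative, not from the paper):

```python
import numpy as np

def si_sdr(estimate, reference, eps=1e-8):
    """Scale-invariant SDR in dB between an estimated and a reference signal.

    Both signals are zero-meaned, the estimate is projected onto the
    reference to obtain the scaled target, and the ratio of target energy
    to residual (distortion) energy is reported in decibels.
    """
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Optimal scaling factor projecting the estimate onto the reference.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10((np.dot(target, target) + eps) /
                           (np.dot(noise, noise) + eps))
```

Because of the projection step, rescaling the estimate leaves the score unchanged, which is why SI-SDR rewards content fidelity rather than loudness matching; as a training loss it is typically negated and maximized.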
Problem

Research questions and friction points this paper is trying to address.

Target Speaker Extraction
Speech Quality
Generative Models
Discriminative Models
Perceptual Naturalness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Target Speaker Extraction
Discriminative-Generative Framework
Decoder-Only Language Model
Neural Audio Codec
Collaborative Training
Bang Zeng
Wuhan University | Duke Kunshan University
Target Speaker Extraction · Personal Voice Activity Detection
Beilong Tang
North Carolina State University
Statistical Machine Learning · Target Speaker Extraction · Speech Separation
Wang Xiang
School of Computer Science, Wuhan University, Wuhan 430072, China, and also with Suzhou Municipal Key Laboratory of Multimodal Intelligent Systems, Digital Innovation Research Center, Duke Kunshan University, Kunshan 215316, China
Ming Li
Professor, Duke Kunshan University
Speech Processing · Audio Processing · Affective Computing · Behavior Signal Processing