🤖 AI Summary
Target speaker confusion—i.e., erroneous extraction of non-target speaker speech—remains a core challenge in end-to-end speaker extraction (E2E-SE). To address this, we propose a lightweight time-domain augmentation strategy that generates diverse pseudo-speakers via time-domain resampling and amplitude rescaling, preserving speaker identity while enhancing embedding generalizability and discriminability. This is the first work to introduce identity-aware time-domain augmentation into the E2E-SE framework, deliberately increasing discrimination difficulty without compromising speech fidelity, thereby constructing a more robust speaker embedding space. Jointly optimized with metric learning, our method achieves SI-SNRi improvements of 1.2–2.1 dB over strong baselines on WSJ0-2Mix and LibriMix, demonstrating significant mitigation of target speaker confusion.
📝 Abstract
Target confusion, defined as occasional switching to non-target speakers, poses a key challenge for end-to-end speaker extraction (E2E-SE) systems. We argue that this problem is largely caused by the lack of generalizability and discrimination of the speaker embeddings, and introduce a simple yet effective speaker augmentation strategy to tackle the problem. Specifically, we propose a time-domain resampling and rescaling pipeline that alters speaker traits while preserving other speech properties. This generates a variety of pseudo-speakers to help establish a generalizable speaker embedding space, while the speaker-trait-specific augmentation creates hard samples that force the model to focus on genuine speaker characteristics. Experiments on WSJ0-2Mix and LibriMix show that our method mitigates the target confusion and improves extraction performance. Moreover, it can be combined with metric learning, another effective approach to address target confusion, leading to further gains.