Interpolating Speaker Identities in Embedding Space for Data Expansion

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the cost of acquiring real-speaker data and the privacy sensitivity of speaker recordings in speaker verification, this paper proposes INSIDE: a zero-shot data augmentation method that synthesizes novel, semantically continuous speaker identities via spherical linear interpolation (slerp) within a pretrained speaker embedding space, then drives a text-to-speech (TTS) system to generate the corresponding speech waveforms. Crucially, INSIDE requires no additional real utterances or speaker annotations, which preserves privacy and makes the method scalable, and it remains compatible with other data augmentation techniques. Experiments demonstrate that INSIDE yields 3.06–5.24% relative reductions in equal error rate (EER) on speaker verification benchmarks and a 13.44% relative accuracy improvement on cross-task gender classification, validating its effectiveness, generalizability, and robustness across downstream tasks.

📝 Abstract
The success of deep learning-based speaker verification systems is largely attributed to access to large-scale and diverse speaker identity data. However, collecting data from more identities is expensive, challenging, and often limited by privacy concerns. To address this limitation, we propose INSIDE (Interpolating Speaker Identities in Embedding Space), a novel data expansion method that synthesizes new speaker identities by interpolating between existing speaker embeddings. Specifically, we select pairs of nearby speaker embeddings from a pretrained speaker embedding space and compute intermediate embeddings using spherical linear interpolation. These interpolated embeddings are then fed to a text-to-speech system to generate corresponding speech waveforms. The resulting data is combined with the original dataset to train downstream models. Experiments show that models trained with INSIDE-expanded data outperform those trained only on real data, achieving 3.06% to 5.24% relative improvements. While INSIDE is primarily designed for speaker verification, we also validate its effectiveness on gender classification, where it yields a 13.44% relative improvement. Moreover, INSIDE is compatible with other augmentation techniques and can serve as a flexible, scalable addition to existing training pipelines.
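The core operation the abstract describes is spherical linear interpolation between pairs of nearby speaker embeddings. A minimal sketch of slerp in numpy is below, assuming unit-norm speaker embeddings as input; the paper's actual embedding extractor, pair-selection criterion, and TTS interface are not specified here, so this illustrates only the interpolation step itself:

```python
import numpy as np

def slerp(e1: np.ndarray, e2: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two speaker embeddings.

    Returns a point at fraction t along the great-circle arc from e1 to e2,
    so the result stays on the unit hypersphere where embeddings live.
    """
    # Normalize defensively in case inputs are not exactly unit-norm.
    e1 = e1 / np.linalg.norm(e1)
    e2 = e2 / np.linalg.norm(e2)
    # Angle between the two embeddings (clip guards against rounding error).
    dot = np.clip(np.dot(e1, e2), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < 1e-6:
        # Nearly identical embeddings: fall back to linear interpolation.
        return (1 - t) * e1 + t * e2
    return (np.sin((1 - t) * omega) * e1 + np.sin(t * omega) * e2) / np.sin(omega)
```

Unlike plain linear interpolation, slerp keeps the interpolated embedding at unit norm, so intermediate identities remain on the same hypersphere as real speaker embeddings.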
Problem

Research questions and friction points this paper is trying to address.

How to synthesize new speaker identities without collecting real utterances
Data scarcity and privacy constraints in speaker verification systems
Whether artificial training data can enhance downstream model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interpolating speaker embeddings for data synthesis
Using spherical linear interpolation for identity creation
Generating speech via text-to-speech from embeddings
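The abstract states that INSIDE interpolates between pairs of *nearby* embeddings. One plausible way to pick such pairs is by cosine similarity; the sketch below is a hypothetical selection step (the paper's actual neighbor criterion and threshold are not given here):

```python
import numpy as np

def nearest_pairs(embeddings: np.ndarray, top_k: int = 1) -> list[tuple[int, int]]:
    """Return index pairs of mutually nearby embeddings by cosine similarity.

    embeddings: array of shape (n_speakers, dim).
    top_k: how many nearest neighbors to pair each speaker with.
    """
    # Normalize rows so the dot product equals cosine similarity.
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = E @ E.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-pairs
    pairs = set()
    for i in range(len(E)):
        # Take the top_k most similar other speakers for speaker i.
        for j in np.argsort(sim[i])[-top_k:]:
            pairs.add(tuple(sorted((i, int(j)))))
    return sorted(pairs)
```

Each returned pair could then be fed to a slerp step to produce intermediate embeddings, which a TTS system would render as waveforms for training-set expansion.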