Koopman Regularized Deep Speech Disentanglement for Speaker Verification

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the strong coupling between speaker characteristics and linguistic content in speech by proposing DKSD-AE, a structured autoencoder that enables effective speaker verification without textual supervision. The method introduces, for the first time, multi-step Koopman operator learning into speaker representation modelling, combined with instance normalization to explicitly disentangle speaker- and content-related dynamics. Experimental results show that the model matches or surpasses state-of-the-art performance across multiple benchmark datasets while significantly reducing parameter count. Furthermore, its high content confusion, measured as an elevated equal error rate (EER) under content-mismatched conditions, confirms robust disentanglement and strong cross-scale generalization.

📝 Abstract
Human speech contains both linguistic content and speaker-dependent characteristics, making speaker verification a key technology in identity-critical applications. Modern deep learning speaker verification systems aim to learn speaker representations that are invariant to semantic content and to nuisance factors such as ambient noise. However, many existing approaches depend on labelled data, textual supervision, or large pretrained models as feature extractors, limiting scalability and practical deployment and raising sustainability concerns. We propose the Deep Koopman Speech Disentanglement Autoencoder (DKSD-AE), a structured autoencoder that combines a novel multi-step Koopman operator learning module with instance normalization to disentangle speaker and content dynamics. Quantitative experiments across multiple datasets demonstrate that DKSD-AE achieves improved or competitive speaker verification performance compared to state-of-the-art baselines while maintaining a high content EER, confirming effective disentanglement. These results are obtained with substantially fewer parameters and without textual supervision. Moreover, performance remains stable as the evaluation scale increases, highlighting the robustness and generalization of the learned representations. Our findings suggest that Koopman-based temporal modelling, combined with instance normalization, provides an efficient and principled solution for speaker-focused representation learning.
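The two ingredients named in the abstract, multi-step Koopman operator learning and instance normalization, can be illustrated in isolation. The sketch below is not the paper's architecture (the encoder, decoder, and training objective are omitted, and the latent trajectory is a synthetic linear system chosen so a Koopman operator exists exactly): instance normalization removes per-utterance channel statistics, and a linear operator K fitted by least squares is scored with a multi-step prediction loss.

```python
import numpy as np

def instance_norm(z, eps=1e-5):
    # Normalize each latent channel over the time axis of one utterance,
    # removing per-instance statistics (often speaker-correlated).
    mu = z.mean(axis=0, keepdims=True)
    sigma = z.std(axis=0, keepdims=True)
    return (z - mu) / (sigma + eps)

def fit_koopman(z):
    # z: (T, d) latent trajectory. Least-squares fit of a linear operator K
    # such that z[t+1] ≈ z[t] @ K (one-step Koopman approximation).
    X, Y = z[:-1], z[1:]
    K, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return K

def multistep_loss(z, K, steps=3):
    # Average squared error of k-step rollouts z[t] @ K^k against z[t+k],
    # the multi-step consistency idea behind multi-step operator learning.
    loss, Kp = 0.0, np.eye(K.shape[0])
    for k in range(1, steps + 1):
        Kp = Kp @ K                     # Kp = K^k
        loss += np.mean((z[:-k] @ Kp - z[k:]) ** 2)
    return loss / steps

# Toy latent trajectory driven by a known linear map A, so K is recoverable.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4) + 0.05 * rng.standard_normal((4, 4))
z = np.empty((50, 4))
z[0] = rng.standard_normal(4)
for t in range(49):
    z[t + 1] = z[t] @ A

K = fit_koopman(z)
print("3-step loss:", multistep_loss(z, K, steps=3))
print("post-norm mean:", instance_norm(z).mean())
```

On this exactly linear toy trajectory the fitted K recovers A and the multi-step loss is near machine precision; in the actual model the operator acts on learned latents, where the multi-step term serves as a regularizer rather than an exact fit.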
Problem

Research questions and friction points this paper is trying to address.

speaker verification
speech disentanglement
content invariance
unsupervised representation learning
Koopman operator
Innovation

Methods, ideas, or system contributions that make the work stand out.

Koopman operator
speech disentanglement
speaker verification
instance normalization
self-supervised learning
Nikos Chazaridis
School of Electronics and Computer Science, University of Southampton, SO17 1BJ Southampton, U.K.
Mohammad Belal
School of Electronics and Computer Science, University of Southampton, SO17 1BJ Southampton, U.K.
Rafael Mestre
Lecturer (Assistant Professor), University of Southampton
Emerging technologies, Computational Social Science, Multimodal machine learning
Timothy J. Norman
School of Electronics and Computer Science, University of Southampton, SO17 1BJ Southampton, U.K.
Christine Evers
University of Southampton
Machine Listening, Robot Audition, Bayesian Inference, Statistical Machine Learning