🤖 AI Summary
This work addresses the challenge of strong coupling between speaker characteristics and linguistic content in speech by proposing DKSD-AE, a structured autoencoder that enables effective speaker verification without textual supervision. The method introduces, for the first time, multi-step Koopman operator learning into speaker representation modelling, combined with instance normalization to explicitly disentangle speaker- and content-related dynamics. Experimental results demonstrate that the model matches or surpasses state-of-the-art performance across multiple benchmark datasets while significantly reducing parameter count. Furthermore, its high content confusion, measured as an elevated equal error rate (EER) under content-mismatched conditions, confirms robust disentanglement and strong cross-scale generalization.
📝 Abstract
Human speech carries both linguistic content and speaker-dependent characteristics, making speaker verification a key technology in identity-critical applications. Modern deep learning speaker verification systems aim to learn speaker representations that are invariant to semantic content and nuisance factors such as ambient noise. However, many existing approaches depend on labelled data, textual supervision, or large pretrained models as feature extractors, which limits scalability and practical deployment and raises sustainability concerns. We propose the Deep Koopman Speech Disentanglement Autoencoder (DKSD-AE), a structured autoencoder that combines a novel multi-step Koopman operator learning module with instance normalization to disentangle speaker and content dynamics. Quantitative experiments across multiple datasets demonstrate that DKSD-AE achieves improved or competitive speaker verification performance compared to state-of-the-art baselines while maintaining high content EER, confirming effective disentanglement. These results are obtained with substantially fewer parameters and without textual supervision. Moreover, performance remains stable as the evaluation scale increases, highlighting representation robustness and generalization. Our findings suggest that Koopman-based temporal modelling, when combined with instance normalization, provides an efficient and principled solution for speaker-focused representation learning.
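To make the architectural idea concrete, below is a minimal PyTorch-style sketch of an autoencoder that couples a multi-step linear Koopman predictor with instance normalization. All module names, dimensions, losses, and the simple utterance-mean speaker code are illustrative assumptions for intuition only, not the authors' DKSD-AE implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KoopmanDisentangleAE(nn.Module):
    """Illustrative sketch: Koopman-style autoencoder with instance norm.

    Hypothetical layer choices and sizes; not the published DKSD-AE model.
    """

    def __init__(self, feat_dim=80, latent_dim=128):
        super().__init__()
        # Frame-wise encoder from acoustic features to a latent sequence.
        self.encoder = nn.GRU(feat_dim, latent_dim, batch_first=True)
        # Instance normalization over time strips per-utterance statistics
        # (a proxy for speaker traits) from the content branch.
        self.inorm = nn.InstanceNorm1d(latent_dim, affine=False)
        # Linear Koopman operator that advances the content latent in time.
        self.koopman = nn.Linear(latent_dim, latent_dim, bias=False)
        # Decoder maps the recombined latent back to acoustic features.
        self.decoder = nn.GRU(latent_dim, feat_dim, batch_first=True)

    def forward(self, x, k_steps=3):
        z, _ = self.encoder(x)                                # (B, T, D)
        # Content latent: normalized over the time axis per channel.
        zc = self.inorm(z.transpose(1, 2)).transpose(1, 2)
        # Speaker code: utterance-level statistics removed by the norm.
        spk = z.mean(dim=1)                                   # (B, D)

        # Multi-step Koopman loss: applying the operator k times should
        # match the content latent k frames ahead.
        koop_loss = 0.0
        pred = zc
        for k in range(1, k_steps + 1):
            pred = self.koopman(pred)
            koop_loss = koop_loss + F.mse_loss(pred[:, :-k, :], zc[:, k:, :])

        # Reconstruction from content latent plus broadcast speaker code.
        x_hat, _ = self.decoder(zc + spk.unsqueeze(1))
        recon_loss = F.mse_loss(x_hat, x)
        return spk, recon_loss + koop_loss
```

In this sketch the utterance-level statistics act as the speaker embedding used for verification, while the instance-normalized latent, constrained by the multi-step Koopman prediction loss, is pushed to carry the content dynamics.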