🤖 AI Summary
Existing self-supervised speech representations (e.g., WavLM) struggle to fully disentangle speaker identity from linguistic content, which degrades downstream task performance. To address this, we propose a lightweight, interpretable linear decomposition framework: using learnable projections with orthogonality constraints, WavLM features are explicitly decomposed into speaker-dependent and speaker-independent components, which are jointly optimized with a speaker discrimination loss and a content reconstruction objective. Crucially, our method adds no architectural complexity, requires no auxiliary annotations, and, uniquely, achieves *exact*, *lossless*, and *purely linear* speaker disentanglement. Evaluated on voice conversion, it substantially surpasses state-of-the-art methods: speaker similarity drops by 62%, while speech quality (MOS) and content accuracy (WER) both improve significantly. Inference overhead is negligible.
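The summary describes the decomposition only at a high level, so the following is a minimal PyTorch sketch of what such a constrained linear split could look like. Everything here is an illustrative assumption rather than the authors' implementation: the class name `LinearDisentangler`, the dimensions (`feat_dim=1024`, `spk_dim=256`), and the squared cross-Gram form of the orthogonality penalty.

```python
# Hypothetical sketch of the linear decomposition described above.
# Names, dimensions, and the penalty form are assumptions, not the
# authors' released code.
import torch
import torch.nn as nn


class LinearDisentangler(nn.Module):
    """Splits an SSL feature vector into speaker and content parts
    via two learnable linear projections."""

    def __init__(self, feat_dim: int = 1024, spk_dim: int = 256):
        super().__init__()
        # Projection onto the (assumed) speaker subspace.
        self.spk_proj = nn.Linear(feat_dim, spk_dim, bias=False)
        # Projection onto the complementary content subspace.
        self.content_proj = nn.Linear(feat_dim, feat_dim - spk_dim, bias=False)

    def forward(self, feats: torch.Tensor):
        # feats: (batch, time, feat_dim) WavLM features.
        spk = self.spk_proj(feats)          # speaker-dependent component
        content = self.content_proj(feats)  # speaker-independent component
        return spk, content

    def orthogonality_penalty(self) -> torch.Tensor:
        # Encourage the two projections to span orthogonal subspaces:
        # the cross-Gram matrix W_s @ W_c^T should vanish.
        cross = self.spk_proj.weight @ self.content_proj.weight.T
        return (cross ** 2).sum()
```

Because both branches are plain matrix multiplications, the decomposition stays linear and interpretable, which is consistent with the summary's claim of negligible inference overhead.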
📝 Abstract
Self-supervised learning (SSL) has reduced the reliance on expensive labeling in speech technologies by learning meaningful representations from unannotated data. Since most SSL-based downstream tasks prioritize the content information in speech, ideal representations should disentangle content from unwanted variations such as speaker characteristics. However, removing speaker information often degrades other speech components, and existing methods either fail to fully disentangle speaker identity or require resource-intensive models. In this paper, we propose a novel disentanglement method that linearly decomposes SSL representations into speaker-specific and speaker-independent components, effectively producing speaker-disentangled representations. Comprehensive experiments show that our approach achieves speaker independence; consequently, when applied to content-driven tasks such as voice conversion, our representations yield significant improvements over state-of-the-art methods.
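To make the joint optimization concrete, here is a hedged training-step sketch that reuses the `LinearDisentangler` module from the sketch above. It combines the two objectives named in the abstract, a speaker-discrimination loss on the speaker branch and a reconstruction loss that forces the two parts to jointly preserve the input; the classifier head, reconstruction head, speaker count, learning rate, and loss weight are all placeholder assumptions.

```python
# Illustrative joint training step; heads, hyperparameters, and the 0.1
# orthogonality weight are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = LinearDisentangler(feat_dim=1024, spk_dim=256)
spk_classifier = nn.Linear(256, 100)    # assumed: 100 training speakers
decoder = nn.Linear(256 + 768, 1024)    # assumed: linear reconstruction head
params = (list(model.parameters()) + list(spk_classifier.parameters())
          + list(decoder.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)


def train_step(feats: torch.Tensor, speaker_ids: torch.Tensor) -> float:
    spk, content = model(feats)  # (B, T, 256) and (B, T, 768)
    # Speaker discrimination: the pooled speaker part must identify the speaker.
    logits = spk_classifier(spk.mean(dim=1))
    loss_spk = F.cross_entropy(logits, speaker_ids)
    # Content reconstruction: the two parts together must recover the input,
    # so the split is lossless.
    recon = decoder(torch.cat([spk, content], dim=-1))
    loss_recon = F.mse_loss(recon, feats)
    # Orthogonality keeps the speaker and content subspaces disjoint.
    loss = loss_spk + loss_recon + 0.1 * model.orthogonality_penalty()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this setup, the speaker-independent `content` branch is what a downstream task such as voice conversion would consume, since the speaker loss pushes identity information into the other branch while reconstruction prevents content from being discarded.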