Towards the Next Frontier in Speech Representation Learning Using Disentanglement

📅 2024-07-02
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing self-supervised speech representation learning focuses primarily on frame-level masked prediction and neglects coarse-grained factors that remain invariant across an utterance, such as speaker identity and channel characteristics, leaving fine-grained semantic features entangled with coarse-grained non-semantic ones. To address this, the authors propose Learn2Diss, presented as the first self-supervised framework that explicitly disentangles frame-level pseudo-phoneme representations from utterance-level pseudo-speaker representations. It employs a dual-encoder co-training mechanism that combines masked speech reconstruction, contrastive learning of utterance embeddings, and mutual information minimization, with the mutual information estimated via a Jensen–Shannon divergence bound or MINE; the mutual information constraint pushes the two feature spaces toward independence. Experiments demonstrate state-of-the-art performance: a 2.1% relative WER reduction on ASR and speech understanding (semantic tasks), and an 18.7% EER reduction on speaker verification and channel identification (non-semantic tasks).
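The mutual information minimization described above can be illustrated with the Jensen–Shannon-style lower bound used in estimators of this family (the Deep InfoMax / f-GAN form). The sketch below is not the paper's implementation: the bilinear critic `T(x, y) = x @ W @ y`, the untrained weight matrix `W`, and all shapes are illustrative assumptions. Minimizing this estimate with respect to the encoders (with the critic trained to maximize it) would push the frame-level and utterance-level features toward independence.

```python
import numpy as np

def softplus(z):
    # Numerically stable log(1 + exp(z)).
    return np.logaddexp(0.0, z)

def jsd_mi_lower_bound(frame_emb, utt_emb, W, rng):
    """Jensen-Shannon-style MI lower bound between paired embeddings.

    frame_emb: (N, d1) pooled frame-level features, one row per utterance
    utt_emb:   (N, d2) utterance-level features
    W:         (d1, d2) parameters of an assumed bilinear critic T(x, y) = x @ W @ y
    """
    # Critic scores on true pairs (samples from the joint distribution) ...
    joint = np.einsum("nd,de,ne->n", frame_emb, W, utt_emb)
    # ... and on shuffled pairs (samples from the product of marginals).
    marginal = np.einsum("nd,de,ne->n", frame_emb, W,
                         utt_emb[rng.permutation(len(utt_emb))])
    # Bound: E_p[-softplus(-T)] - E_q[softplus(T)]
    return (-softplus(-joint)).mean() - softplus(marginal).mean()

rng = np.random.default_rng(0)
frames = rng.standard_normal((256, 8))                   # pooled frame-level features
speakers = frames @ (0.5 * rng.standard_normal((8, 4)))  # deliberately dependent on frames
W = 0.1 * rng.standard_normal((8, 4))                    # untrained critic parameters
mi_hat = jsd_mi_lower_bound(frames, speakers, W, rng)
```

With an untrained critic the bound is loose (it can even be negative); in practice the critic is optimized jointly, and the encoders receive the negated gradient.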

📝 Abstract
The popular frameworks for self-supervised learning of speech representations have largely focused on frame-level masked prediction of speech regions. While this has shown promising downstream task performance for speech recognition and related tasks, it has largely ignored factors of speech that are encoded at a coarser level, like characteristics of the speaker or channel that remain consistent throughout a speech utterance. In this work, we propose a framework for Learning Disentangled Self-Supervised representations of speech (termed Learn2Diss), which consists of a frame-level and an utterance-level encoder module. The two encoders are initially learned independently: the frame-level model is largely inspired by existing self-supervision techniques, thereby learning pseudo-phonemic representations, while the utterance-level encoder is inspired by contrastive learning of pooled embeddings, thereby learning pseudo-speaker representations. The joint learning of these two modules consists of disentangling the two encoders using a mutual information based criterion. With several downstream evaluation experiments, we show that the proposed Learn2Diss achieves state-of-the-art results on a variety of tasks, with the frame-level encoder representations improving semantic tasks, while the utterance-level representations improve non-semantic tasks.
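The abstract's "contrastive learning of pooled embeddings" is not spelled out here; one plausible sketch is an InfoNCE-style loss over two augmented views of each pooled utterance embedding, where views of the same utterance are positives and all other utterances in the batch are negatives. The view construction, temperature, and batch size below are assumptions, not the paper's recipe.

```python
import numpy as np

def info_nce(view_a, view_b, temperature=0.1):
    """InfoNCE loss over two views of pooled utterance embeddings.
    Row i of view_a and row i of view_b embed the same utterance."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature               # (N, N) scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # positives lie on the diagonal

rng = np.random.default_rng(0)
base = rng.standard_normal((32, 16))               # one pooled embedding per utterance
view_a = base + 0.01 * rng.standard_normal(base.shape)  # lightly perturbed "views"
view_b = base + 0.01 * rng.standard_normal(base.shape)
loss_aligned = info_nce(view_a, view_b)
loss_shuffled = info_nce(view_a, view_b[rng.permutation(32)])
```

Matched views yield a much lower loss than mismatched ones, which is exactly the signal that drives the pooled embeddings toward speaker-discriminative structure.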
Problem

Research questions and friction points this paper is trying to address.

Improving speech representation learning by disentangling frame-level and utterance-level factors
Addressing limitations of current self-supervised frameworks ignoring speaker/channel characteristics
Enhancing both semantic and non-semantic tasks through disentangled encoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines frame-level and utterance-level encoders
Uses mutual information for disentanglement
Improves both semantic and non-semantic tasks
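As a simple diagnostic in the spirit of the bullets above (my own illustrative check, not an experiment from the paper), disentanglement can be probed by measuring the empirical mutual information between discretized frame-level pseudo-phoneme clusters and speaker labels: a near-zero value indicates the two factors have been separated.

```python
import numpy as np

def discrete_mutual_information(labels_a, labels_b):
    """Empirical mutual information (in nats) between two discrete label arrays,
    e.g. pseudo-phoneme cluster ids vs. speaker ids."""
    a_vals, a_idx = np.unique(labels_a, return_inverse=True)
    b_vals, b_idx = np.unique(labels_b, return_inverse=True)
    joint = np.zeros((len(a_vals), len(b_vals)))
    np.add.at(joint, (a_idx, b_idx), 1.0)          # contingency counts
    joint /= joint.sum()                           # joint distribution p(a, b)
    pa = joint.sum(axis=1, keepdims=True)          # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)          # marginal p(b)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

# Perfectly entangled labels share log(2) nats; independent labels share 0.
entangled = discrete_mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
independent = discrete_mutual_information([0, 0, 1, 1], [0, 1, 0, 1])
```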
Varun Krishna
Indian Institute of Science, Bengaluru
Sriram Ganapathy
Indian Institute of Science, Bengaluru