🤖 AI Summary
This work addresses the degradation of semantic information in self-supervised speech representations under noisy conditions, where existing adaptation modules often preserve acoustic details at the expense of linguistic content during joint training. To mitigate this issue, the authors propose a decoupled semantic aggregation strategy grounded in phoneme mutual information. Specifically, a pre-trained and frozen linguistic aggregation layer is employed to explicitly maximize the mutual information between learned representations and phoneme labels, thereby preserving linguistic content during speech enhancement. By combining information-theoretic analysis, an optional dynamic aggregation mechanism, and a decoupled training framework, the proposed method significantly reduces word error rate (WER) and outperforms end-to-end jointly optimized baselines.
📝 Abstract
Recent speech enhancement (SE) models increasingly leverage self-supervised learning (SSL) representations for their rich semantic information. Typically, intermediate features are aggregated into a single representation via a lightweight adaptation module. However, most SSL models are not trained for noise robustness, which can lead to corrupted semantic representations. Moreover, the adaptation module is trained jointly with the SE model, potentially prioritizing acoustic details over semantic information and thus contradicting the original purpose of exploiting SSL features. To address this issue, we first analyze the behavior of SSL models on noisy speech from an information-theoretic perspective. Specifically, we measure the mutual information (MI) between the corrupted SSL representations and the corresponding phoneme labels, focusing on the preservation of linguistic content. Building upon this analysis, we introduce the linguistic aggregation layer, which is pre-trained to maximize MI with phoneme labels (with optional dynamic aggregation) and then frozen during SE training. Experiments show that this decoupled approach improves Word Error Rate (WER) over jointly optimized baselines, demonstrating the benefit of explicitly aligning the adaptation module with linguistic content.
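The core idea, learning softmax weights that aggregate SSL intermediate layers so as to maximize a variational MI lower bound with phoneme labels, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the shapes, the linear-classifier proxy, and the bound I(Z;Y) ≥ H(Y) − CE (cross-entropy of a phoneme classifier on the aggregated features) are assumptions chosen for clarity.

```python
import numpy as np

def aggregate_layers(features, logits_w):
    """Weighted aggregation of SSL intermediate layers (hypothetical shapes).

    features: (L, T, D) array of L layer outputs, T frames, D dims
    logits_w: (L,) learnable aggregation logits; softmax gives layer weights
    returns:  (T, D) aggregated representation
    """
    w = np.exp(logits_w - logits_w.max())
    w /= w.sum()
    # Weighted sum over the layer axis.
    return np.tensordot(w, features, axes=(0, 0))

def mi_lower_bound(probs, labels):
    """Variational lower bound: I(Z; Y) >= H(Y) - CE(q(y|z), y).

    probs:  (T, K) phoneme-classifier posteriors over K classes
    labels: (T,)   integer phoneme labels
    """
    k = probs.shape[1]
    prior = np.bincount(labels, minlength=k) / len(labels)
    h_y = -np.sum(prior[prior > 0] * np.log(prior[prior > 0]))
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    return h_y - ce
```

In this sketch, pre-training would adjust `logits_w` (and the classifier) to maximize `mi_lower_bound`; the resulting weights are then frozen while the SE model trains on the aggregated features. A near-perfect classifier pushes the bound toward H(Y), while a chance-level one drives it to zero or below.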