🤖 AI Summary
This work addresses the vulnerability of self-supervised speech representations to positional-embedding interference during fine-tuning for speech enhancement, where models can minimise the training objective by exploiting positional cues rather than actual speech content. To mitigate this, the authors investigate two position-invariant fine-tuning strategies: zero-padding the inputs, and speed perturbation combined with a soft-DTW alignment loss that decouples content from positional information. The soft-DTW-based approach accelerates convergence and improves speech enhancement performance under noisy conditions, yielding superior results on downstream tasks and demonstrating the practicality of position-invariant fine-tuning when leveraging self-supervised speech representations.
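To make the speed-perturbation idea concrete, here is a minimal sketch of one plausible form of the augmentation: resample a waveform by a speed factor, then zero-pad (or trim) back to the original length so that content no longer lines up with fixed frame positions. The function name, linear-interpolation resampler, and padding scheme are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def speed_perturb_pad(wav, factor):
    """Resample a 1-D waveform by `factor` (e.g. 0.9 or 1.1) via linear
    interpolation, then zero-pad or trim back to the original length.
    Illustrative sketch only; a real system would use a proper resampler
    (e.g. polyphase filtering) rather than np.interp."""
    n = len(wav)
    new_n = int(round(n / factor))  # factor > 1 speeds up (shorter signal)
    src = np.linspace(0, n - 1, new_n)
    out = np.interp(src, np.arange(n), wav)
    if new_n < n:
        # Sped up: pad zeros at the end so positions shift relative to content
        out = np.pad(out, (0, n - new_n))
    else:
        # Slowed down: trim back to the original length
        out = out[:n]
    return out
```

Because the output length is fixed, batches keep a uniform shape while the mapping between absolute position and speech content varies from example to example.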
📝 Abstract
Integrating front-end speech enhancement (SE) models with self-supervised learning (SSL)-based speech models is effective for downstream tasks in noisy conditions. SE models are commonly fine-tuned using SSL representations with mean squared error (MSE) loss between enhanced and clean speech. However, MSE is prone to exploiting positional embeddings in SSL models, allowing the objective to be minimised through positional correlations instead of content-related information. This work frames the problem as a general limitation of self-supervised representation fine-tuning and investigates it through representation-guided SE. Two strategies are considered: (1) zero-padding, previously explored in SSL pre-training but here examined in the fine-tuning setting, and (2) speed perturbations with a soft-DTW loss. Experiments show that the soft-DTW-based approach achieves faster convergence and improved downstream performance, underscoring the importance of position-invariant fine-tuning in SSL-based speech modelling.
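The soft-DTW loss mentioned above replaces MSE's frame-by-frame comparison with an alignment-based discrepancy, which is what allows enhanced and clean representations of different lengths (e.g. after speed perturbation) to be compared by content rather than by position. A minimal NumPy sketch of the standard soft-DTW recurrence follows; the squared-Euclidean frame cost and the smoothing parameter `gamma` are conventional choices, not details taken from the paper, and a training implementation would use a differentiable autodiff version rather than this loop.

```python
import numpy as np

def soft_dtw(X, Y, gamma=1.0):
    """Soft-DTW discrepancy between feature sequences X (n, d) and Y (m, d).
    Dynamic program over a pairwise cost matrix, with the hard min replaced
    by a smooth soft-min (negative log-sum-exp scaled by gamma)."""
    n, m = len(X), len(Y)
    # Pairwise squared-Euclidean cost between frames
    D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            r = np.array([R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]])
            # Numerically stable soft-min: rmin - gamma * log(sum exp(-(r-rmin)/gamma))
            rmin = r.min()
            softmin = rmin - gamma * np.log(np.exp(-(r - rmin) / gamma).sum())
            R[i, j] = D[i - 1, j - 1] + softmin
    return R[n, m]
```

As `gamma` approaches zero the soft-min approaches a hard min and the value recovers classic DTW; larger `gamma` gives a smoother, easier-to-optimise loss surface.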