🤖 AI Summary
To address the performance limitations of foundation models on low-resource speech tasks, such as child automatic speech recognition (child ASR), this paper proposes a knowledge transfer framework based on model merging. The core innovation is Selective Attention (SA) Merge, a fine-grained fusion method that selectively integrates task-relevant attention vectors from a high-resource pretrained model (e.g., Whisper) and a low-resource fine-tuned model at the level of the attention mechanism. Combined with SpecAugment and fine-tuning of Whisper-small, the approach achieves a relative word error rate (WER) reduction of up to 14% on the MyST child speech corpus, yielding an absolute WER of 8.69, a new state of the art for this model. Notably, this work is the first to incorporate interpretability analysis of attention matrices into model merging, thereby enhancing generalization and robustness in low-resource settings.
📝 Abstract
While Speech Foundation Models (SFMs) excel at various speech tasks, their performance on low-resource tasks such as child Automatic Speech Recognition (ASR) is hampered by limited pretraining data. To address this, we explore different model merging techniques to leverage knowledge from models trained on larger, more diverse speech corpora. This paper also introduces Selective Attention (SA) Merge, a novel method that selectively merges task vectors from attention matrices to enhance SFM performance on low-resource tasks. Experiments on the MyST database show significant relative word error rate reductions of up to 14%, outperforming existing model merging and data augmentation techniques. By combining data augmentation techniques with SA Merge, we achieve a new state-of-the-art WER of 8.69 on the MyST database for the Whisper-small model, highlighting the potential of SA Merge for improving low-resource ASR.
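To make the task-vector idea behind SA Merge concrete, here is a minimal, hypothetical sketch: it forms a task vector (fine-tuned minus pretrained weights) for a single attention weight matrix and adds back only its largest-magnitude entries. The function name `sa_merge`, the top-k magnitude selection criterion, and the `keep_ratio`/`alpha` parameters are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: the paper's actual selection rule for
# "task-relevant" attention entries is not specified here.
import numpy as np

def sa_merge(pretrained: np.ndarray, finetuned: np.ndarray,
             keep_ratio: float = 0.1, alpha: float = 1.0) -> np.ndarray:
    """Merge one attention weight matrix by keeping only the top-k
    largest-magnitude entries of the task vector (fine-tuned - pretrained)."""
    task_vector = finetuned - pretrained
    k = max(1, int(keep_ratio * task_vector.size))
    # magnitude of the k-th largest |entry|, used as the selection threshold
    threshold = np.partition(np.abs(task_vector).ravel(), -k)[-k]
    mask = np.abs(task_vector) >= threshold
    # pretrained weights plus the selected, scaled task-vector entries
    return pretrained + alpha * task_vector * mask

# Toy example on a 4x4 "attention" weight matrix
rng = np.random.default_rng(0)
W_pre = rng.standard_normal((4, 4))
W_ft = W_pre + 0.1 * rng.standard_normal((4, 4))
W_merged = sa_merge(W_pre, W_ft, keep_ratio=0.25)  # updates 4 of 16 entries
```

In a full model, a rule like this would be applied per attention matrix (query, key, value, output projections) across layers, leaving non-selected weights at their pretrained values.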