🤖 AI Summary
Children’s speech exhibits high acoustic and linguistic variability, causing significant performance degradation in existing automatic speech recognition (ASR) systems—particularly self-supervised learning (SSL)-based models—under zero-shot conditions. This work systematically analyzes the hierarchical representations of SSL models (Wav2Vec2, HuBERT, Data2Vec, and WavLM) and finds that Wav2Vec2’s layer-22 features yield the best zero-shot adaptation to children’s speech without fine-tuning. These features are fed into a lightweight DNN-based ASR system built within the Kaldi framework. On the PFSTAR corpus, the approach achieves a word error rate (WER) of 5.15%, a 51.64% relative improvement over direct zero-shot Wav2Vec2 decoding (10.65% WER). Consistent improvements across age groups and strong generalization are further validated on the CMU Kids dataset. To our knowledge, this is the first study to empirically demonstrate the zero-shot transfer effectiveness of deep SSL features for children’s speech recognition, establishing a reusable, fine-tuning-free feature extraction paradigm for low-resource child ASR.
📝 Abstract
Automatic Speech Recognition (ASR) systems often struggle to accurately process children's speech due to its distinct and highly variable acoustic and linguistic characteristics. While recent advancements in self-supervised learning (SSL) models have greatly enhanced the transcription of adult speech, accurately transcribing children's speech remains a significant challenge. This study investigates the effectiveness of layer-wise features extracted from state-of-the-art pre-trained SSL models (Wav2Vec2, HuBERT, Data2Vec, and WavLM) in improving ASR performance on children's speech in zero-shot scenarios. Features extracted from each layer of these models were analyzed in detail and integrated into a simplified DNN-based ASR system built with the Kaldi toolkit. The analysis identified the layers most effective for children's speech in a zero-shot setting, where the WSJCAM0 adult speech corpus was used for training and the PFSTAR children's speech corpus for testing. Experimental results indicated that Layer 22 of the Wav2Vec2 model achieved the lowest Word Error Rate (WER) of 5.15%, a 51.64% relative improvement over direct zero-shot decoding with Wav2Vec2 (10.65% WER). An age group-wise analysis further showed performance improving consistently with age, with significant gains from the SSL features even in the youngest groups. Additional experiments on the CMU Kids dataset confirmed the same trends, highlighting the generalizability of the proposed approach.
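The pipeline the abstract describes — freezing a pre-trained SSL encoder, taking the hidden states of one intermediate layer (e.g., Wav2Vec2 Layer 22), and passing them to a downstream DNN acoustic model — can be sketched as follows. This is an illustrative sketch, not the authors' code: with Hugging Face `transformers`, calling a `Wav2Vec2Model` with `output_hidden_states=True` returns one hidden-state tensor per layer, and the helper below simply selects one layer's per-frame feature matrix. A mock encoder output stands in for real model weights so the example runs self-contained, and the layer-indexing convention (index 0 = CNN front-end output, 1..N = transformer layers) is an assumption.

```python
# Illustrative sketch: selecting one intermediate layer's features from a
# frozen SSL encoder for a downstream DNN-ASR acoustic model.
# In practice (assumption, not the paper's exact code) one would do e.g.:
#   from transformers import Wav2Vec2Model
#   model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60")
#   out = model(waveform, output_hidden_states=True)
#   feats = out.hidden_states[22]   # layer-22 features, shape (B, T, D)
from typing import List, Sequence


def extract_layer_features(hidden_states: Sequence[List[List[float]]],
                           layer: int) -> List[List[float]]:
    """Return the (frames x dims) feature matrix of one encoder layer.

    hidden_states holds one entry per layer output (assumed convention:
    index 0 = CNN front-end, 1..N = transformer layers); each entry is a
    list of per-frame feature vectors.
    """
    if not 0 <= layer < len(hidden_states):
        raise ValueError(
            f"layer {layer} out of range (encoder has {len(hidden_states)} outputs)")
    return hidden_states[layer]


# Mock "encoder" output: 25 layer outputs (CNN + 24 transformer layers),
# 3 frames each, 4 feature dimensions; each frame is filled with its layer index.
mock_states = [[[float(l)] * 4 for _ in range(3)] for l in range(25)]
layer22 = extract_layer_features(mock_states, 22)
print(len(layer22), len(layer22[0]), layer22[0][0])  # → 3 4 22.0
```

The selected features would then replace conventional MFCC/filterbank inputs to the Kaldi DNN acoustic model; no SSL fine-tuning is involved, which is what makes the setup zero-shot.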