🤖 AI Summary
Children’s automatic speech recognition (ASR) faces dual challenges: high acoustic variability and scarce labeled data. This work systematically investigates the impact of training paradigms, data composition, and model scale. We identify an inherent adult-speech bias in mainstream self-supervised learning (SSL) representations (e.g., WavLM, XEUS), and demonstrate that flat-start training significantly mitigates this bias. Empirical analysis shows performance saturation beyond ~1B parameters, indicating diminishing returns from scaling. Crucially, incorporating open children’s speech data substantially improves generalization. Within the ESPnet framework, we comparatively evaluate fine-tuning versus flat-start (from-scratch) training across multiple datasets, finding the latter consistently more robust. Our key contributions are: (1) uncovering the mechanism underlying adult bias in SSL representations; (2) establishing effective training strategies for children’s ASR; (3) quantifying the marginal utility of model scaling; and (4) releasing a reproducible, open-source benchmark.
📝 Abstract
Despite advancements in ASR, child speech recognition remains challenging due to acoustic variability and limited annotated data. While fine-tuning adult ASR models on child speech is common, comparisons with flat-start training remain underexplored. We compare flat-start training and fine-tuning across multiple datasets, SSL representations (WavLM, XEUS), and decoder architectures. Our results show that SSL representations are biased toward adult speech, and that flat-start training on child speech mitigates this bias. We also analyze model scaling, finding consistent improvements up to 1B parameters, beyond which performance plateaus. Additionally, age-related ASR and speaker verification analyses highlight the limitations of models trained on proprietary data, such as Whisper, emphasizing the need for open-data models in reliable child speech research. All investigations are conducted using ESPnet, and our publicly available benchmark provides insights into training strategies for robust child speech processing.