Benchmarking Training Paradigms, Dataset Composition, and Model Scaling for Child ASR in ESPnet

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Children’s automatic speech recognition (ASR) faces dual challenges: high acoustic variability and scarce labeled data. This work systematically investigates the impact of training paradigms, data composition, and model scale. We identify an inherent adult speech bias in mainstream self-supervised learning (SSL) representations (e.g., WavLM, XEUS), and demonstrate that flat-start training significantly mitigates this bias. Empirical analysis shows performance saturation beyond ~1B parameters, indicating diminishing returns from scaling. Crucially, incorporating open children’s speech data substantially improves generalization. Within the ESPnet framework, we comparatively evaluate fine-tuning versus from-scratch training across multiple datasets, finding the latter consistently more robust. Our key contributions are: (1) uncovering the mechanism underlying adult bias in SSL representations; (2) establishing optimal training strategies for children’s ASR; (3) quantifying the marginal utility of model scaling; and (4) releasing a reproducible, open-source benchmark.
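The contrast between fine-tuning and flat-start training can be illustrated with a deliberately tiny sketch (not the paper's actual setup): "fine-tuning" initializes from weights already fit to a source (adult) domain, while "flat-start" training begins from scratch on the target (child) domain. The domains, data, and model below are hypothetical toy stand-ins.

```python
def train(w_init, data, lr=0.1, steps=200):
    """Minimise mean squared error of y ≈ w * x with plain gradient descent."""
    w = w_init
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Toy source ("adult") domain: y = 1.0 * x; target ("child") domain: y = 2.5 * x.
adult = [(x, 1.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
child = [(x, 2.5 * x) for x in (0.5, 1.0, 1.5, 2.0)]

w_adult = train(0.0, adult)          # "pretraining" on the adult domain
w_finetune = train(w_adult, child)   # fine-tune: start from the adult solution
w_flatstart = train(0.0, child)      # flat-start: train on child data from scratch
```

In this linear toy both strategies reach the target solution; the paper's point is that with high-capacity SSL representations and limited child data, the adult-domain starting point leaves a measurable bias that flat-start training avoids.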

📝 Abstract
Despite advancements in ASR, child speech recognition remains challenging due to acoustic variability and limited annotated data. While fine-tuning adult ASR models on child speech is common, comparisons with flat-start training remain underexplored. We compare flat-start training across multiple datasets, SSL representations (WavLM, XEUS), and decoder architectures. Our results show that SSL representations are biased toward adult speech, with flat-start training on child speech mitigating these biases. We also analyze model scaling, finding consistent improvements up to 1B parameters, beyond which performance plateaus. Additionally, age-related ASR and speaker verification analysis highlights the limitations of proprietary models like Whisper, emphasizing the need for open-data models for reliable child speech research. All investigations are conducted using ESPnet, and our publicly available benchmark provides insights into training strategies for robust child speech processing.
Problem

Research questions and friction points this paper is trying to address.

Comparing flat-start training versus fine-tuning for child ASR
Analyzing adult bias in SSL representations and its mitigation with child data
Investigating the limits of model scaling and of proprietary models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flat-start training on child speech mitigates adult bias in SSL representations
Characterizing model-scaling returns, which saturate near 1B parameters
Using the ESPnet framework for an open-data child speech benchmark
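Benchmarks like this one are scored with word error rate (WER): the word-level Levenshtein distance between reference and hypothesis, normalised by the number of reference words. A minimal self-contained sketch of that metric (the ESPnet toolkit provides its own scoring; this is only an illustration):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between the first i-1 reference
    # words and the first j hypothesis words.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution or match
        prev = curr
    return prev[-1] / len(ref)
```

For example, `wer("the cat sat", "the bat sat")` is 1/3: one substitution against three reference words.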