AI Summary
To address the high annotation cost and low data utilization in automatic speech recognition (ASR), this paper proposes a two-stage active learning framework. In the first stage, diverse initial samples are selected via unsupervised clustering of x-vectors. In the second stage, a batch-wise sampling strategy is designed by jointly leveraging x-vector representations and Bayesian uncertainty estimated via Monte Carlo Dropout, enabling efficient selection of the most informative unlabeled utterances. This work is the first to synergistically integrate unsupervised speech representations with ASR-specific Bayesian active learning, significantly improving annotation efficiency and model generalization. Experiments demonstrate that the method surpasses existing state-of-the-art approaches on homogeneous, heterogeneous, and out-of-distribution test sets using only 30%–50% of the annotated data, confirming its strong robustness and practical utility.
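The first stage, diverse seed selection by clustering x-vectors, can be illustrated with a small NumPy sketch. Everything here is an assumption for illustration, not the paper's exact recipe: `select_diverse_samples`, the farthest-first initialization, and the "closest to centroid" representative rule are all hypothetical choices standing in for a generic k-means pass over pre-extracted x-vectors.

```python
import numpy as np

def select_diverse_samples(xvectors, k, n_iter=20, seed=0):
    """Illustrative sketch (not the paper's exact method): cluster x-vectors
    with a small k-means and return one utterance index per cluster --
    the utterance closest to its centroid."""
    X = np.asarray(xvectors, dtype=float)
    rng = np.random.default_rng(seed)
    # Farthest-first initialization spreads the starting centroids apart.
    idx = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        d = np.linalg.norm(X[:, None] - X[idx][None], axis=-1).min(axis=1)
        idx.append(int(d.argmax()))
    centroids = X[idx].copy()
    # Standard Lloyd iterations: assign, then recompute centroids.
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):                # keep old centroid if cluster empties
                centroids[j] = members.mean(axis=0)
    # One representative per cluster: the member nearest its centroid.
    dists = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
    labels = dists.argmin(axis=1)
    reps = []
    for j in range(k):
        member_d = np.where(labels == j, dists[:, j], np.inf)
        if np.isfinite(member_d).any():
            reps.append(int(member_d.argmin()))
    return sorted(set(reps))
```

On two well-separated groups of embeddings, this returns one index from each group, which is the point of the diversity stage: the initial labeling budget is spread across distinct speaker/acoustic regions rather than spent on near-duplicates.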
Abstract
This paper introduces a novel two-stage active learning (AL) pipeline for automatic speech recognition (ASR) that combines unsupervised and supervised AL methods. The first stage applies unsupervised AL, using x-vector clustering to select diverse samples from unlabeled speech data and thus establish a robust initial dataset for the subsequent supervised AL. The second stage incorporates a supervised AL strategy with a batch AL method developed specifically for ASR, aimed at selecting diverse and informative batches of samples. Here, sample diversity is again achieved through x-vector clustering, while the most informative samples are identified by a Bayesian AL method tailored for ASR that adapts Monte Carlo dropout to approximate Bayesian inference. This approach enables precise uncertainty estimation, thereby enhancing ASR model training with significantly reduced data requirements. Our method outperforms competing methods on homogeneous, heterogeneous, and out-of-distribution (OOD) test sets, demonstrating that strategic sample selection and innovative Bayesian modeling can substantially optimize both labeling effort and data utilization in deep learning-based ASR applications.
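The Monte Carlo dropout idea in the second stage can be sketched in a few lines: keep dropout active at inference, run several stochastic forward passes, average the resulting softmax distributions, and score each input by the predictive entropy of that average. The sketch below is a toy, assuming a generic classifier rather than the paper's ASR model: `mc_dropout_entropy`, `toy_forward`, and the weight matrix `W` are all hypothetical names introduced here for illustration.

```python
import numpy as np

def mc_dropout_entropy(forward_fn, x, n_passes=30, seed=0):
    """Monte Carlo dropout uncertainty (illustrative sketch): average softmax
    outputs over stochastic forward passes and return predictive entropy."""
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_passes):
        z = forward_fn(x, rng)                        # stochastic logits
        e = np.exp(z - z.max(axis=-1, keepdims=True)) # stable softmax
        probs.append(e / e.sum(axis=-1, keepdims=True))
    mean_p = np.stack(probs).mean(axis=0)             # MC-averaged posterior
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)

# Hypothetical stand-in model: a linear layer with inverted dropout
# applied to the input features (dropout stays on at "inference").
W = np.array([[2.0, -2.0], [-2.0, 2.0], [0.5, 0.5]])

def toy_forward(x, rng, p_drop=0.3):
    mask = (rng.random(x.shape) > p_drop) / (1.0 - p_drop)
    return (x * mask) @ W
```

An input the toy model classifies confidently yields lower predictive entropy than one whose features give it no preference between classes; in the AL loop, the highest-entropy unlabeled utterances are the ones sent for annotation. The actual method adapts this idea to sequence-level ASR outputs, which the toy necessarily glosses over.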