AI Summary
To address the low efficiency and insufficient accuracy of word error rate (WER) estimation for automatic speech recognition (ASR) systems in label-free evaluation scenarios, this paper proposes an efficient unsupervised WER estimation algorithm. The method fuses self-supervised speech representations (wav2vec 2.0) and text representations (BERT) via mean pooling, then employs a lightweight regression model to predict WER directly, bypassing computationally expensive alignment procedures. Evaluated on the Ted-Lium3 benchmark, the approach achieves a 14.10% relative reduction in root mean square error (RMSE) and a 1.22% relative improvement in Pearson correlation coefficient over a baseline, while running approximately 3.4 times faster in real-time factor. This demonstrates a favorable trade-off between accuracy and inference speed. The proposed framework provides a scalable, fully unsupervised solution for large-scale ASR evaluation without ground-truth transcriptions.
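The fusion step described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's implementation): it assumes pre-extracted wav2vec 2.0 frame features and BERT token embeddings are available as arrays, mean-pools each over its time/token axis, concatenates the two pooled vectors, and passes the result through a tiny MLP regressor with randomly initialized weights standing in for trained parameters.

```python
import numpy as np

def mean_pool(feats):
    """Average pooling over the time/token axis: (T, D) -> (D,)."""
    return feats.mean(axis=0)

def fe_wer_estimate(speech_feats, text_feats, W1, b1, W2, b2):
    """Sketch of the fused estimator: pooled speech and text vectors are
    concatenated and fed to a small MLP; a sigmoid squashes the output
    into [0, 1] so it can be read as a WER estimate."""
    fused = np.concatenate([mean_pool(speech_feats), mean_pool(text_feats)])
    h = np.maximum(0.0, W1 @ fused + b1)           # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))    # sigmoid -> [0, 1]

rng = np.random.default_rng(0)
speech = rng.standard_normal((300, 1024))  # stand-in for wav2vec 2.0 frames
text = rng.standard_normal((40, 768))      # stand-in for BERT token embeddings

# Untrained toy weights; dimensions (1024 speech + 768 text -> 256 -> 1).
W1 = rng.standard_normal((256, 1024 + 768)) * 0.01
b1 = np.zeros(256)
W2 = rng.standard_normal(256) * 0.01

print(fe_wer_estimate(speech, text, W1, b1, W2, 0.0))
```

Because both modalities are reduced to fixed-size vectors before regression, no frame-to-token alignment is needed, which is where the efficiency gain comes from.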
Abstract
Word error rate (WER) estimation aims to evaluate the quality of an automatic speech recognition (ASR) system's output without requiring ground-truth labels. This task has gained increasing attention as advanced ASR systems are trained on large amounts of data. In this context, the computational efficiency of a WER estimator becomes essential in practice. However, previous works have not prioritized this aspect. In this paper, a Fast estimator for WER (Fe-WER) is introduced, utilizing average pooling over self-supervised learning representations for speech and text. Our results demonstrate that Fe-WER outperformed a baseline relatively by 14.10% in root mean square error and 1.22% in Pearson correlation coefficient on Ted-Lium3. Moreover, a comparative analysis of the distributions of target WER and WER estimates was conducted, including an examination of the average values per speaker. Lastly, inference was approximately 3.4 times faster in terms of real-time factor.
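The two evaluation metrics reported above are standard; for concreteness, the snippet below shows how RMSE and the Pearson correlation coefficient are computed between target WERs and WER estimates. The numeric values are illustrative only and are not taken from the paper.

```python
import numpy as np

def rmse(targets, estimates):
    """Root mean square error between target WERs and WER estimates."""
    t, e = np.asarray(targets, dtype=float), np.asarray(estimates, dtype=float)
    return float(np.sqrt(np.mean((t - e) ** 2)))

def pearson(targets, estimates):
    """Pearson correlation coefficient between targets and estimates."""
    t, e = np.asarray(targets, dtype=float), np.asarray(estimates, dtype=float)
    return float(np.corrcoef(t, e)[0, 1])

# Illustrative per-utterance WERs (fractions), not results from the paper.
target_wer = [0.10, 0.25, 0.05, 0.40]
wer_estimates = [0.12, 0.22, 0.07, 0.35]

print(rmse(target_wer, wer_estimates))
print(pearson(target_wer, wer_estimates))
```

A lower RMSE means estimates are closer to the true WERs in absolute terms, while a higher Pearson coefficient means the estimator ranks utterances by error rate more faithfully; the paper improves on both simultaneously.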