🤖 AI Summary
In CTC-based pronunciation error detection, the goodness of pronunciation (GOP) metric typically depends on forced alignment, which is sensitive to acoustic variability and prone to segmentation errors. Alignment-free alternatives avoid this dependency but scale poorly with phoneme sequence length and inventory size. This work proposes a substitution-aware, alignment-free GOP that restricts candidate phoneme substitutions using phoneme clusters and common second-language learner errors, sharply reducing the search space. On two L2 English benchmarks, My Pronunciation Coach (MPC, child speech) and SpeechOcean762 (child and adult speech), both the restricted (RPS) and unrestricted (UPS) phoneme-substitution setups outperform the forced-alignment baseline, with the restricted setup offering markedly lower computational cost. This makes the approach well suited to real-time computer-assisted pronunciation training systems.
📝 Abstract
Computer-Assisted Pronunciation Training (CAPT) systems employ automatic measures of pronunciation quality, such as the goodness of pronunciation (GOP) metric. GOP relies on forced alignments, which are prone to labeling and segmentation errors due to acoustic variability. While alignment-free methods address these challenges, they are computationally expensive and scale poorly with phoneme sequence length and inventory size. To enhance efficiency, we introduce a substitution-aware alignment-free GOP that restricts phoneme substitutions based on phoneme clusters and common learner errors. We evaluated our GOP on two L2 English speech datasets: My Pronunciation Coach (MPC), which contains child speech, and SpeechOcean762, which includes both child and adult speech. Within the alignment-free framework, we compared RPS (restricted phoneme substitutions) and UPS (unrestricted phoneme substitutions) setups, both of which outperformed the baseline. We discuss our results and outline avenues for future research.