🤖 AI Summary
This study addresses a key limitation of existing automated systems for sentence-level second-language (L2) English pronunciation assessment: they often fail to capture fluency, prosody, and completeness simultaneously while providing accurate, fair, and personalized feedback. It presents the first systematic exploration of the speech-capable large language model Qwen2-Audio-7B-Instruct for zero-shot multidimensional pronunciation scoring (accuracy, fluency, prosody, and completeness) on 5,000 utterances from the Speechocean762 dataset. Evaluated end-to-end without fine-tuning, the model aligns closely with human raters on high-scoring utterances (within a ±2 tolerance), but it overestimates low-scoring utterances and detects errors unreliably. These findings establish a new evaluation paradigm and suggest concrete directions for improving computer-assisted pronunciation training systems.
📝 Abstract
Accurate assessment of L2 English pronunciation is crucial for language learning: it provides personalized feedback and ensures fair evaluation of individual progress. Automated scoring, however, remains challenging due to the complexity of sentence-level fluency, prosody, and completeness. This paper evaluates the zero-shot performance of Qwen2-Audio-7B-Instruct, an instruction-tuned speech LLM, on 5,000 Speechocean762 utterances. The model generates rubric-aligned scores for accuracy, fluency, prosody, and completeness, showing strong agreement with human ratings within a ±2 tolerance, especially for high-quality speech. However, it tends to overpredict scores for low-quality speech and lacks precision in error detection. These findings demonstrate the strong potential of speech LLMs for scalable pronunciation assessment and suggest future improvements through enhanced prompting, calibration, and phonetic integration to advance Computer-Assisted Pronunciation Training.
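The ±2-tolerance agreement reported above can be made concrete with a short sketch. This is an illustrative reconstruction of the metric, not code from the paper; the function name and the toy scores are assumptions, and Speechocean762 sentence-level scores are taken to lie on a 0–10 scale.

```python
# Hedged sketch: fraction of utterances whose model-predicted rubric
# score falls within +/- tol of the human rating (0-10 scale assumed,
# as in Speechocean762 sentence-level annotations).
def within_tolerance_agreement(predicted, human, tol=2):
    """Return the proportion of |predicted - human| <= tol pairs."""
    assert len(predicted) == len(human) and len(human) > 0
    hits = sum(1 for p, h in zip(predicted, human) if abs(p - h) <= tol)
    return hits / len(predicted)

# Toy example on one dimension (e.g. fluency); values are illustrative.
model_scores = [9, 8, 7, 6]
human_scores = [8, 9, 4, 6]
print(within_tolerance_agreement(model_scores, human_scores))  # 0.75
```

One such score would be computed per dimension (accuracy, fluency, prosody, completeness); the paper's observation that agreement is higher for high-scoring speech corresponds to stratifying this metric by the human rating.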