Scaling and Prompting for Improved End-to-End Spoken Grammatical Error Correction

📅 2025-05-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
End-to-end Spoken Grammatical Error Correction and Feedback generation (SGECF) is bottlenecked by the severe scarcity of annotated data. Method: a joint framework combining pseudo-labelling and prompting: (i) Whisper performs end-to-end speech-to-text mapping; (ii) pseudo-labelling expands the training set from 77 hours to roughly 2,500 hours; (iii) fluent transcriptions serve as contextual prompts to guide feedback generation and fine-tuning. Contributions/Results: This is the first work to combine pseudo-labelling and fluent-transcription prompting for end-to-end SGECF. It uncovers non-monotonic interactions among model scale, data expansion, and prompting: prompting remains beneficial for the larger model, whereas gains from pseudo-labelled data diminish as model size grows. Experiments show substantial improvements in correction accuracy and feedback quality, offering a practical recipe for low-resource spoken language understanding.
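The pseudo-labelling step in (ii) can be sketched as a confidence-filtered loop over unlabelled audio. The `teacher_transcribe` callable and the threshold value below are illustrative assumptions, not details from the paper:

```python
def pseudo_label(unlabelled_clips, teacher_transcribe, threshold=0.9):
    """Expand a training set with teacher-generated pseudo-labels.

    Keeps only hypotheses whose confidence clears `threshold`,
    a common guard against reinforcing teacher errors.
    """
    pseudo_pairs = []
    for clip in unlabelled_clips:
        hypothesis, confidence = teacher_transcribe(clip)
        if confidence >= threshold:
            pseudo_pairs.append((clip, hypothesis))
    return pseudo_pairs

# Toy teacher: maps clip ids to (GEC-style transcription, confidence).
toy_outputs = {
    "clip1": ("I went to school yesterday.", 0.97),
    "clip2": ("he go shop", 0.41),  # low confidence: discarded
    "clip3": ("She has finished her homework.", 0.93),
}
pairs = pseudo_label(toy_outputs, lambda c: toy_outputs[c])
```

The retained `(audio, pseudo-label)` pairs are then merged with the human-labelled data for fine-tuning.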

πŸ“ Abstract
Spoken Grammatical Error Correction (SGEC) and Feedback (SGECF) are crucial for second language learners, teachers and test takers. Traditional SGEC systems rely on a cascaded pipeline consisting of an ASR, a module for disfluency detection (DD) and removal and one for GEC. With the rise of end-to-end (E2E) speech foundation models, we investigate their effectiveness in SGEC and feedback generation. This work introduces a pseudo-labelling process to address the challenge of limited labelled data, expanding the training data size from 77 hours to approximately 2500 hours, leading to improved performance. Additionally, we prompt an E2E Whisper-based SGEC model with fluent transcriptions, showing a slight improvement in SGEC performance, with more significant gains in feedback generation. Finally, we assess the impact of increasing model size, revealing that while pseudo-labelled data does not yield performance gain for a larger Whisper model, training with prompts proves beneficial.
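Whisper natively supports conditioning the decoder on previous text via the `<|startofprev|>` token, which is one way a fluent transcription could be supplied as a prompt. The sketch below uses the special-token ids from the multilingual Whisper vocabulary and mirrors Whisper's left-truncation rule for long prompts; whether the paper uses exactly this mechanism is an assumption:

```python
# Whisper special-token ids (multilingual vocabulary).
SOT_PREV = 50361  # <|startofprev|>: introduces prompt/context tokens
SOT = 50258       # <|startoftranscript|>

def build_prompted_input(prompt_ids, n_text_ctx=448):
    """Prepend a fluent-transcription prompt to the decoder input.

    Whisper reserves at most half of the text context (minus one slot)
    for the prompt, so long prompts are truncated from the left.
    """
    max_prompt = n_text_ctx // 2 - 1
    return [SOT_PREV] + prompt_ids[-max_prompt:] + [SOT]

# Example: a hypothetical tokenised fluent transcription of 10 ids.
fluent_ids = list(range(100, 110))
decoder_input = build_prompted_input(fluent_ids)
```

During fine-tuning, the model would then learn to emit the grammatically corrected transcription after `<|startoftranscript|>`, with the fluent prompt acting as disfluency-free context.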
Problem

Research questions and friction points this paper is trying to address.

Improving spoken grammatical error correction for language learners
Addressing limited labeled data via pseudo-labeling expansion
Evaluating prompting and scaling effects on model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pseudo-labelling expands training data from 77 to ~2500 hours
Prompting the Whisper model with fluent transcriptions improves feedback generation
Larger model benefits from prompts, not pseudo-labels