Prompting open-source and commercial language models for grammatical error correction of English learner text

📅 2024-01-15
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 11
Influential: 0
🤖 AI Summary
This study systematically evaluates the zero-shot and few-shot performance of seven open-source and three commercial large language models (LLMs) on grammatical error correction (GEC) across four major benchmarks, including BEA-2019 and CoNLL-2014, with results broken down by fine-grained error type. Methodologically, it offers the first comparative analysis of multi-LLM behavior under two distinct correction paradigms: minimal-edit (preserving original wording) and fluency-oriented (prioritizing naturalness). Results show that open-source LLMs (e.g., Llama-3, Mistral) significantly outperform commercial counterparts (e.g., GPT-4) on minimal-edit benchmarks, with zero-shot performance rivaling few-shot settings on several of them. Commercial LLMs, in contrast, surpass supervised GEC models only on fluency-oriented evaluation, and only marginally. These findings provide empirical evidence and practical guidance for selecting LLMs in GEC applications, highlighting paradigm-specific strengths and the viability of open-source models for precision-preserving correction.

📝 Abstract
Thanks to recent advances in generative AI, we are able to prompt large language models (LLMs) to produce texts which are fluent and grammatical. In addition, it has been shown that we can elicit attempts at grammatical error correction (GEC) from LLMs when prompted with ungrammatical input sentences. We evaluate how well LLMs can perform at GEC by measuring their performance on established benchmark datasets. We go beyond previous studies, which only examined GPT* models on a selection of English GEC datasets, by evaluating seven open-source and three commercial LLMs on four established GEC benchmarks. We investigate model performance and report results against individual error types. Our results indicate that LLMs do not always outperform supervised English GEC models except in specific contexts -- namely commercial LLMs on benchmarks annotated with fluency corrections as opposed to minimal edits. We find that several open-source models outperform commercial ones on minimal edit benchmarks, and that in some settings zero-shot prompting is just as competitive as few-shot prompting.
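The established benchmarks mentioned in the abstract are conventionally scored with F0.5 over extracted edits (e.g. via the ERRANT or M2 scorers), which weights precision twice as heavily as recall on the view that spurious corrections are worse than missed ones. A minimal sketch of the metric from raw edit counts (the example counts are illustrative, not results from the paper):

```python
def f_beta(tp: int, fp: int, fn: int, beta: float = 0.5) -> float:
    """F_beta from true-positive, false-positive, and false-negative edit counts.

    beta=0.5 is the standard GEC setting: precision counts double.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative example: 50 correct edits, 10 spurious, 40 missed.
score = f_beta(50, 10, 40)
```

Because beta < 1, a system that over-corrects (high fp) is penalised more than one that under-corrects, which is one reason minimal-edit behaviour matters on these benchmarks.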
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' performance on grammatical error correction benchmarks
Comparing open-source and commercial LLMs in error correction tasks
Assessing zero-shot vs few-shot prompting for grammatical accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompting LLMs for grammatical error correction
Evaluating seven open-source and three commercial LLMs
Comparing zero-shot and few-shot prompting performance
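The zero-shot vs few-shot comparison above can be sketched as prompt construction: zero-shot sends the instruction alone, while few-shot prepends worked (source, correction) pairs. The instruction wording and example pairs below are illustrative assumptions, not the authors' actual prompts.

```python
# Hypothetical minimal-edit GEC instruction; the paper's real prompts may differ.
INSTRUCTION = (
    "Correct the grammatical errors in the following sentence. "
    "Make only the minimal edits needed; do not rephrase."
)

# Illustrative (source, correction) pairs used as few-shot demonstrations.
FEW_SHOT_EXAMPLES = [
    ("She go to school every day.", "She goes to school every day."),
    ("I have visited Paris last year.", "I visited Paris last year."),
]

def build_prompt(sentence: str, num_shots: int = 0) -> str:
    """Build a zero-shot (num_shots=0) or few-shot GEC prompt string."""
    parts = [INSTRUCTION, ""]
    for src, tgt in FEW_SHOT_EXAMPLES[:num_shots]:
        parts.append(f"Input: {src}")
        parts.append(f"Output: {tgt}")
        parts.append("")
    parts.append(f"Input: {sentence}")
    parts.append("Output:")
    return "\n".join(parts)

zero_shot = build_prompt("He don't like apples.")
few_shot = build_prompt("He don't like apples.", num_shots=2)
```

The same template is sent to each model under comparison, so any performance gap reflects the model rather than the prompt format; the paper's finding that zero-shot can rival few-shot means the demonstrations are sometimes unnecessary.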