🤖 AI Summary
Existing Chinese Grammatical Error Correction (CGEC) research lacks a continual learning benchmark tailored to multidisciplinary academic writing, leaving models unable to adapt robustly to domain-specific linguistic variation and prone to catastrophic forgetting.
Method: We introduce CL²GEC, the first continual learning benchmark for CGEC in Chinese academic writing, comprising 10,000 manually annotated sentences across 10 disciplines and simulating sequential, discipline-by-discipline learning. The accompanying evaluation framework, the first for multidisciplinary continual CGEC, features discipline-ordered task sequences and protocols for assessing forgetting. We evaluate large language models under sequential fine-tuning, parameter-efficient adaptation, and continual learning strategies (e.g., regularization, rehearsal), using dual metrics that capture grammatical correction accuracy and retention of performance on previously learned disciplines.
Results: Regularization-based methods significantly outperform rehearsal-based and naive sequential fine-tuning in mitigating forgetting and enabling cross-disciplinary generalization, establishing both a rigorous benchmark and an effective technical pathway for academic writing assistance systems.
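The summary names regularization and rehearsal only as strategy families, not specific algorithms. As one concrete instance of the regularization family, here is a minimal PyTorch sketch of an Elastic Weight Consolidation (EWC)-style penalty; the class name, the default λ, and the diagonal Fisher estimation loop are illustrative choices of ours, not the paper's implementation.

```python
# Hypothetical EWC-style regularizer sketch (not the paper's exact method).
# After finishing a discipline, we snapshot the weights and estimate a
# diagonal Fisher; later disciplines are trained with a quadratic penalty
# that discourages drifting from those snapshots.
import torch


class EWCRegularizer:
    def __init__(self, lam=100.0):
        self.lam = lam      # regularization strength (assumed value)
        self.anchors = []   # (weight snapshot, fisher) per finished discipline

    def consolidate(self, model, data_loader, loss_fn):
        """Snapshot weights and estimate a diagonal Fisher after a task."""
        fisher = {n: torch.zeros_like(p)
                  for n, p in model.named_parameters() if p.requires_grad}
        model.eval()
        for inputs, targets in data_loader:
            model.zero_grad()
            loss_fn(model(inputs), targets).backward()
            for n, p in model.named_parameters():
                if n in fisher and p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
        n_batches = max(len(data_loader), 1)
        fisher = {n: f / n_batches for n, f in fisher.items()}
        snapshot = {n: p.detach().clone() for n, p in model.named_parameters()}
        self.anchors.append((snapshot, fisher))

    def penalty(self, model):
        """Quadratic penalty added to the loss while training later tasks."""
        total = 0.0
        for snapshot, fisher in self.anchors:
            for n, p in model.named_parameters():
                if n in fisher:
                    total = total + (fisher[n] * (p - snapshot[n]) ** 2).sum()
        return 0.5 * self.lam * total
```

In use, the loss while fine-tuning on discipline t+1 would become `task_loss + reg.penalty(model)`, with `consolidate` called once at the end of each discipline.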
📝 Abstract
The growing demand for automated writing assistance in diverse academic domains highlights the need for robust Chinese Grammatical Error Correction (CGEC) systems that can adapt across disciplines. However, existing CGEC research largely lacks dedicated benchmarks for multidisciplinary academic writing, overlooking continual learning (CL) as a promising solution to handle domain-specific linguistic variation and prevent catastrophic forgetting. To fill this crucial gap, we introduce CL²GEC, the first Continual Learning benchmark for Chinese Literature Grammatical Error Correction, designed to evaluate adaptive CGEC across multiple academic fields. Our benchmark includes 10,000 human-annotated sentences spanning 10 disciplines, each exhibiting distinct linguistic styles and error patterns. CL²GEC focuses on evaluating grammatical error correction in a continual learning setting, simulating sequential exposure to diverse academic disciplines to reflect real-world editorial dynamics. We evaluate large language models under sequential tuning, parameter-efficient adaptation, and four representative CL algorithms, using both standard GEC metrics and continual learning metrics adapted to task-level variation. Experimental results reveal that regularization-based methods mitigate forgetting more effectively than replay-based or naive sequential approaches. Our benchmark provides a rigorous foundation for future research in adaptive grammatical error correction across diverse academic domains.
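The abstract refers to "continual learning metrics adapted to task-level variation" without defining them here. The sketch below shows the two standard quantities such metrics are typically built from: average final performance and forgetting. The score matrix `R` (row i holds the GEC score, e.g. F0.5, on each discipline after training through discipline i) is hypothetical notation of ours, not necessarily the paper's.

```python
# Minimal sketch of standard task-level continual learning metrics,
# assuming a score matrix R where R[i][j] is the score on discipline j
# after sequentially training through discipline i.

def average_score(R):
    """Mean score over all T disciplines after training on the last one."""
    T = len(R)
    return sum(R[T - 1][j] for j in range(T)) / T

def forgetting(R):
    """Average drop from each discipline's best score to its final score
    (non-negative; 0 means no forgetting)."""
    T = len(R)
    drops = [max(R[i][j] for i in range(j, T)) - R[T - 1][j]
             for j in range(T - 1)]
    return sum(drops) / len(drops) if drops else 0.0

# Toy example with 3 disciplines (rows: after task i, cols: score on task j).
R = [[50.0, 0.0, 0.0],
     [45.0, 52.0, 0.0],
     [42.0, 48.0, 55.0]]
print(average_score(R))  # (42 + 48 + 55) / 3 ≈ 48.33
print(forgetting(R))     # ((50 - 42) + (52 - 48)) / 2 = 6.0
```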