🤖 AI Summary
Arabic automatic essay scoring (AES) is hindered by the scarcity of high-quality annotated data. To address this, we propose the first framework integrating large language model (LLM)-driven synthetic data generation with fine-grained, controllable error injection. Conditioning generation on Common European Framework of Reference (CEFR) proficiency levels, we produce 3,040 Arabic essays annotated with multiple, linguistically grounded error types, constituting the first large-scale, fine-grained error-annotated Arabic AES dataset. We then fine-tune Standard Arabic BERT to build a BERT-based regression scoring model. Experiments demonstrate significant performance gains over baselines across multiple metrics, enabling real-time, interpretable, and scalable scoring and feedback. Our core contributions are: (1) LLM-powered construction of high-fidelity synthetic training data, and (2) pedagogically informed, controllable modeling of linguistic errors aligned with second-language acquisition principles.
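To make the generation-with-error-injection step concrete, here is a minimal sketch of CEFR-conditioned prompt construction with a per-level error budget. The error taxonomy, the error counts per level, and the `call_llm` hook are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of CEFR-conditioned essay generation with controlled error injection.
# ERROR_TYPES, ERRORS_PER_LEVEL, and call_llm are illustrative placeholders.
import random

CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
ERROR_TYPES = ["spelling", "agreement", "case_ending", "word_order"]  # assumed taxonomy

# Assumed error budget: lower proficiency levels receive more injected errors.
ERRORS_PER_LEVEL = {"A1": 8, "A2": 6, "B1": 4, "B2": 3, "C1": 2, "C2": 1}

def build_prompt(topic: str, level: str, n_errors: int) -> str:
    # Sample which error types to inject, then ask for the essay plus annotations.
    errors = random.choices(ERROR_TYPES, k=n_errors)
    return (
        f"Write a Modern Standard Arabic essay on '{topic}' at CEFR level {level}. "
        f"Deliberately include exactly {n_errors} errors of these types: {errors}. "
        "Return the essay followed by a JSON list annotating each injected error."
    )

def generate_essay(call_llm, topic: str, level: str) -> str:
    # call_llm: any chat-style LLM completion function (prompt str -> response str).
    return call_llm(build_prompt(topic, level, ERRORS_PER_LEVEL[level]))
```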
📝 Abstract
Automated Essay Scoring (AES) plays a crucial role in assessing language learners' writing quality, reducing grading workload, and providing real-time feedback. Arabic AES systems are particularly hampered by the lack of annotated essay datasets. This paper presents a novel framework that leverages Large Language Models (LLMs) and Transformers to generate synthetic Arabic essay datasets for AES. We prompt an LLM to generate essays across CEFR proficiency levels and introduce controlled error injection guided by a fine-tuned Standard Arabic BERT model for error-type prediction. Our approach produces realistic, human-like essays, contributing a dataset of 3,040 annotated essays. Additionally, we develop a BERT-based auto-marking system for accurate and scalable Arabic essay evaluation. Experimental results demonstrate the effectiveness of our framework in improving Arabic AES performance.
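For the scoring side, the sketch below fine-tunes an Arabic BERT encoder as a single-output regressor, the standard way to cast AES as regression with Hugging Face Transformers. The checkpoint name (AraBERT v2 as a stand-in for the paper's Standard Arabic BERT), the column names, and the hyperparameters are assumptions for illustration only.

```python
# Sketch: fine-tuning an Arabic BERT checkpoint as an essay-score regressor.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import Dataset

CHECKPOINT = "aubmindlab/bert-base-arabertv2"  # placeholder MSA BERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
# num_labels=1 with problem_type="regression" trains with an MSE loss on one score.
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=1, problem_type="regression")

def tokenize(batch):
    enc = tokenizer(batch["essay"], truncation=True,
                    padding="max_length", max_length=512)
    enc["labels"] = [float(s) for s in batch["score"]]
    return enc

# Toy stand-in for the 3,040-essay synthetic corpus described above.
train = Dataset.from_dict({
    "essay": ["...Arabic essay text..."],  # replace with the annotated essays
    "score": [3.5],                        # numeric score aligned with CEFR level
}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="aes-regressor", num_train_epochs=3,
                           per_device_train_batch_size=16, learning_rate=2e-5),
    train_dataset=train,
)
trainer.train()
```

At inference time, the model's single logit serves directly as the predicted essay score, which is what makes this setup cheap enough for real-time feedback.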