🤖 AI Summary
This study addresses the challenges of automatically scoring constructed-response items in PISA assessments, where human scoring is susceptible to linguistic variation and rater bias, while existing automated approaches are hindered by the scarcity of domain-specific training data. To overcome this limitation, the authors propose a method that uses only a small amount of confidential reference data, combining rule-based text transformations with prompt engineering to generate contextually relevant synthetic training data. This approach improves data utility while preserving the confidentiality of the reference answers. Three synthetic datasets were constructed, each exhibiting surface-level characteristics closely aligned with the original data. Preliminary experiments suggest that one of the derived formats also improves the training of automated scoring models.
📝 Abstract
Every three years, the OECD administers the PISA test to assess the knowledge of teenage students worldwide and to enable comparisons between educational systems. Grading student answers is challenging, however, because scores must remain robust to language differences and annotator bias. This motivates a systematic comparison of methods for automatic student-answer grading. Such methods need a large amount of domain-specific data, whether for training (in the case of machine-learning approaches) or for computing parameters and selecting hyperparameters (for approaches that do not learn). In this work, we explore several methods for creating a large-scale training dataset using only a relatively small confidential dataset as a reference, relying on a set of very simple derived text formats to preserve confidentiality. With these methods, we created three surrogate datasets that are, at least superficially, more similar to the reference dataset than data produced by prompt-based generation alone. Early experiments suggest that one of these approaches may also improve model training.
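The abstract does not specify what the "very simple derived text formats" look like. As a purely hypothetical illustration of the general idea (not the authors' actual method), one confidentiality-preserving derivation could keep only the surface shape of an answer, replacing letters and digits with placeholder characters while retaining length, casing, punctuation, and spacing:

```python
import re

def to_surface_shape(text: str) -> str:
    """Hypothetical 'derived text format': hide the content of an
    answer while keeping its surface-level characteristics
    (length, capitalization pattern, digits vs. letters, punctuation).

    This is an illustrative sketch only; the paper's real derived
    formats are not described in this excerpt.
    """
    text = re.sub(r"\d", "9", text)    # digits -> '9'
    text = re.sub(r"[A-Z]", "X", text) # uppercase letters -> 'X'
    text = re.sub(r"[a-z]", "x", text) # lowercase letters -> 'x'
    return text

# Example: the derived string could be shared with a generator model
# (or used to filter its outputs) without revealing the original answer.
print(to_surface_shape("The cat weighs 12 kg."))  # Xxx xxx xxxxxx 99 xx.
```

A format like this is "surface-level similar" by construction, which matches the kind of similarity claim made in the abstract, but any such design choice here is an assumption.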