🤖 AI Summary
This study addresses the challenge of constructing high-quality training data for text classification under cost-performance trade-offs. We propose a human-AI collaborative framework that integrates synthetically generated data from GPT-4o with a limited set of human-annotated examples. Generation quality is controlled via temperature-scaled sampling, and the mixed dataset is then distilled into a lightweight BERT model for open-ended response assessment. Our key contributions include: (1) the first systematic validation that an 80% synthetic–20% human data split achieves optimal performance; (2) empirical discovery of a nonlinear relationship between LLM generation temperature and model generalization: excessively low temperatures induce overfitting, while excessively high ones introduce noise; and (3) a 7.2% absolute accuracy gain over the fully human-annotated baseline, while maintaining high content validity. The approach establishes a reproducible, low-cost, and trustworthy paradigm for text classification data curation.
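The 80% synthetic / 20% human split can be sketched in plain Python. This is an illustrative sketch only: the `mix_datasets` helper, its sizing math, and the toy data are assumptions for demonstration, not the authors' implementation.

```python
import random

def mix_datasets(human, synthetic, synthetic_ratio=0.8, seed=0):
    """Combine human-coded and synthetic samples at a target ratio.

    With synthetic_ratio=0.8 the result is 80% synthetic / 20% human,
    sized by the amount of human data (the scarcer, costlier resource).
    """
    n_human = len(human)
    # Solve n_syn / (n_syn + n_human) = synthetic_ratio for n_syn.
    n_syn = int(round(n_human * synthetic_ratio / (1 - synthetic_ratio)))
    rng = random.Random(seed)
    pool = list(human) + rng.sample(list(synthetic), min(n_syn, len(synthetic)))
    rng.shuffle(pool)
    return pool

# Toy data standing in for annotated (text, label) pairs.
human = [("human text %d" % i, "label") for i in range(20)]
synthetic = [("synthetic text %d" % i, "label") for i in range(200)]
mixed = mix_datasets(human, synthetic)
print(len(mixed))  # 100 samples: 20 human + 80 synthetic
```

Sizing by the human set keeps the expensive annotations fully used while the synthetic pool is subsampled to hit the target ratio.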
📝 Abstract
Large Language Models (LLMs) like GPT-4o can help automate text classification tasks at low cost and scale. However, there are major concerns about the validity and reliability of LLM outputs. By contrast, human coding is generally more reliable but expensive to procure at scale. In this study, we propose a hybrid solution that leverages the strengths of both: we combine human-coded data and synthetic LLM-produced data, distilling both into a smaller fine-tuned BERT classifier. We evaluate our method on a human-coded test set as a validity measure for LLM output quality. In three experiments, we systematically vary the size, variety, and consistency of the LLM-generated samples, informed by best practices in LLM tuning. Our findings indicate that augmenting datasets with synthetic samples improves classifier performance, with optimal results achieved at an 80% synthetic to 20% human-coded data ratio. A lower temperature setting of 0.3, corresponding to less variability in LLM generations, produced more stable improvements but also limited how much the model learned from the augmented samples. In contrast, higher temperature settings (0.7 and above) introduced greater variability in performance estimates and, at times, lower performance. In other words, LLMs may produce output so uniform that classifiers overfit to it early, or output so diverse that information irrelevant to the prediction task degrades model performance. Filtering out inconsistent synthetic samples did not enhance performance. We conclude that integrating human and LLM-generated data to improve text classification models in assessment offers a scalable solution that leverages both the accuracy of human coding and the variety of LLM outputs.
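One common way to operationalize the consistency filtering mentioned above is to label each synthetic sample several times and keep only samples whose labels agree. The sketch below is a hypothetical illustration of that idea, not the paper's procedure: `filter_inconsistent`, the unanimity criterion, and the deterministic stub labeler are all assumptions introduced here.

```python
from collections import Counter

def filter_inconsistent(samples, label_fn, n_votes=3):
    """Keep only synthetic samples whose repeated labelings agree.

    label_fn(text) stands in for a (hypothetical) repeated LLM labeling
    call; samples without unanimous votes matching their assigned label
    are discarded.
    """
    kept = []
    for text, label in samples:
        votes = Counter(label_fn(text) for _ in range(n_votes))
        top_label, top_count = votes.most_common(1)[0]
        if top_count == n_votes and top_label == label:
            kept.append((text, label))
    return kept

# Deterministic stub labeler for illustration: flags texts containing "good".
stub = lambda text: "positive" if "good" in text else "negative"
samples = [("a good day", "positive"), ("a bad day", "positive")]
print(filter_inconsistent(samples, stub))  # keeps only the first sample
```

With a stochastic labeler (e.g. an LLM sampled at nonzero temperature), the unanimity threshold could be relaxed to a majority vote; the abstract's finding is that such filtering, however implemented, did not improve downstream performance.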