🤖 AI Summary
This work addresses the challenge of dynamically selecting the most informative training samples for large language models in data-scarce and costly regimes. Inspired by Vygotsky’s educational theory of the Zone of Proximal Development (ZPD), we introduce ZPD into large model training and propose a dynamic ability–difficulty matching mechanism grounded in Item Response Theory (IRT). Our approach estimates the model’s current proficiency via IRT, calibrates sample difficulty, and computes adaptive matching scores to identify high-information samples tailored to the model’s evolving capabilities. This method overcomes the limitations of static data selection strategies, substantially improving data efficiency and enabling more effective model training under constrained data budgets.
📝 Abstract
As the cost of training large language models continues to increase and high-quality training data becomes increasingly scarce, selecting high-value samples or synthesizing effective training data under limited data budgets has emerged as a critical research problem. Most existing data selection methods rely on static criteria, such as difficulty, uncertainty, or heuristics, and fail to model the evolving relationship between the model and the data. Inspired by the educational theory of the Zone of Proximal Development (ZPD), we propose ZPD Detector, a data selection framework that adopts a bidirectional perspective between models and data by explicitly modeling the alignment between sample difficulty and the model's current capability. ZPD Detector integrates difficulty calibration, model capability estimation based on Item Response Theory (IRT), and a capability-difficulty matching score to dynamically identify the most informative samples at each learning stage, improving data utilization efficiency. Moreover, this dynamic matching strategy provides new insights into training strategy design. All code and data will be released after our work is accepted, to support reproducible research.
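To make the capability-difficulty matching idea concrete, here is a minimal sketch in standard IRT terms. It assumes a two-parameter logistic (2PL) item model and uses Fisher item information as an illustrative matching score, since information peaks exactly when an item's difficulty is near the model's estimated ability; the paper's actual calibration procedure and scoring function are not specified here, and the names `p_correct`, `fisher_information`, and `select_zpd_samples` are hypothetical.

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL IRT: probability that a model with ability theta answers an
    item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta: float, a: float, b: float) -> float:
    """Fisher item information a^2 * p * (1 - p); maximal when the
    item's difficulty b is closest to the current ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_zpd_samples(theta: float, items: list[dict], k: int) -> list[dict]:
    """Rank candidate items by information at the current ability
    estimate and keep the top-k (the ZPD-like band around theta)."""
    ranked = sorted(
        items,
        key=lambda it: fisher_information(theta, it["a"], it["b"]),
        reverse=True,
    )
    return ranked[:k]

# Illustrative pool: five items of increasing calibrated difficulty.
items = [{"id": i, "a": 1.0, "b": b}
         for i, b in enumerate([-2.0, -0.5, 0.1, 1.5, 3.0])]
best = select_zpd_samples(theta=0.0, items=items, k=2)
# items whose difficulty is nearest theta=0.0 rank highest
```

In an actual training loop, theta would be re-estimated from the model's recent responses (e.g., by maximum likelihood over the 2PL model) after each stage, so the selected band shifts upward as the model improves.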