🤖 AI Summary
This study addresses the low efficiency and high cost of manually aligning educational assessment item banks with curriculum standards. We propose an LLM-driven hybrid alignment framework that combines large language models (e.g., GPT-4o-mini) with a candidate-skill pre-screening strategy, structured into three sequential tasks: misalignment detection, standard-skill matching, and semantic ranking-based filtering, while retaining human review for ambiguous cases. Evaluated on over 12,000 item–skill pairs across K–5 mathematics and reading, GPT-4o-mini achieves 83%–94% accuracy in identifying alignment status; pre-screening places the correct skill within the top five recommendations for more than 95% of items, with particularly strong performance in mathematics. The framework substantially reduces the human review burden while preserving scalability, interpretability, and pedagogical reliability, offering a robust, extensible technical pathway for continuous validation of instructional alignment.
📝 Abstract
As educational systems evolve, ensuring that assessment items remain aligned with content standards is essential for maintaining fairness and instructional relevance. Traditional human alignment reviews are accurate but slow and labor-intensive, especially across large item banks. This study examines whether Large Language Models (LLMs) can accelerate this process without sacrificing accuracy. Using over 12,000 item–skill pairs spanning grades K–5, we tested three LLMs (GPT-3.5 Turbo, GPT-4o-mini, and GPT-4o) on three tasks that mirror real-world challenges: identifying misaligned items, selecting the correct skill from the full set of standards, and narrowing candidate lists prior to classification. In Study 1, GPT-4o-mini correctly identified alignment status in approximately 83–94% of cases, including subtle misalignments. In Study 2, performance remained strong in mathematics but was lower for reading, where standards overlap more semantically. Study 3 demonstrated that pre-filtering candidate skills substantially improved results, with the correct skill appearing among the top five suggestions more than 95% of the time. These findings suggest that LLMs, particularly when paired with candidate-filtering strategies, can significantly reduce the manual burden of item review while preserving alignment accuracy. We recommend developing hybrid pipelines that combine LLM-based screening with human review of ambiguous cases, offering a scalable solution for ongoing item validation and instructional alignment.
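The candidate-filtering step described above can be sketched in a few lines: rank all standard skills by similarity to the item text, keep only the top-k as a shortlist, and hand that shortlist to the LLM classifier. The sketch below uses simple token-overlap (Jaccard) similarity as a stand-in for the semantic ranking used in the study, and the skill descriptions and item are hypothetical examples, not the study's data.

```python
# Minimal sketch of candidate-skill pre-screening before LLM classification.
# Token-overlap (Jaccard) similarity stands in for the semantic ranking
# described in the study; skills and the item are hypothetical examples.

def tokenize(text):
    """Lowercase word tokens with trailing punctuation stripped."""
    return {t.strip(".,?!").lower() for t in text.split()}

def rank_skills(item_text, skills, k=5):
    """Return the k skills most similar to the item text."""
    item_tokens = tokenize(item_text)

    def score(skill):
        skill_tokens = tokenize(skill)
        overlap = item_tokens & skill_tokens
        union = item_tokens | skill_tokens
        return len(overlap) / len(union)

    return sorted(skills, key=score, reverse=True)[:k]

# Hypothetical K-5 math/reading skill descriptions.
skills = [
    "Add and subtract within 20",
    "Multiply one-digit whole numbers",
    "Identify the main idea of a text",
    "Compare fractions with like denominators",
    "Measure lengths using rulers",
    "Tell and write time to the nearest minute",
]

item = "Maria has 8 apples and buys 7 more. How many apples does she have?"

# The shortlist is what an LLM classifier would choose from, replacing
# the full standards set and shrinking the decision space.
shortlist = rank_skills(item, skills, k=3)
print(shortlist)
```

In the full pipeline, an embedding model would replace the token-overlap scorer, and the LLM would pick one skill from the shortlist (or flag the item for human review when no candidate fits).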