🤖 AI Summary
This study investigates whether large language models (LLMs) can reliably estimate the difficulty of educational test items—a critical concern for high-stakes assessment safety. Using 1,031 real items from Brazil's ENEM exam, we present the first systematic evaluation of ten prominent LLMs across three dimensions: absolute calibration, ranking fidelity, and sensitivity to learner background, benchmarking against item response theory (IRT) parameters as the gold standard. Through prompt engineering and multidimensional metrics, we compare open- and closed-source models. Results reveal that even the best-performing model achieves only moderate ranking correlation, consistently underestimates item difficulty, exhibits markedly degraded performance on multimodal items, and responds inconsistently to demographic prompts. These findings highlight fundamental limitations of LLMs in context-aware and personalized assessment, leading us to propose an "assess-before-generate" paradigm for responsible test design.
📝 Abstract
As Large Language Models (LLMs) are increasingly deployed to generate educational content, a critical safety question arises: can these models reliably estimate the difficulty of the questions they produce? Using Brazil's high-stakes ENEM exam as a testbed, we benchmark ten proprietary and open-weight LLMs against official Item Response Theory (IRT) parameters for 1,031 questions. We evaluate performance along three axes: absolute calibration, rank fidelity, and context sensitivity across learner backgrounds. Our results reveal a significant trade-off: while the best models achieve moderate rank correlation, they systematically underestimate difficulty and degrade markedly on multimodal items. Crucially, we find that models exhibit limited and inconsistent plasticity when prompted with student demographic cues, suggesting they are not yet ready for context-adaptive personalization. We conclude that LLMs function best as calibrated screeners rather than authoritative oracles, supporting an "evaluation-before-generation" pipeline for responsible assessment design.
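The two item-level axes named above, absolute calibration and rank fidelity, can be made concrete with a small sketch. The snippet below is illustrative only and is not the paper's code: the `score_difficulty_estimates` helper and the toy values are hypothetical, and it assumes the LLM's difficulty estimates have already been mapped onto the same scale as the IRT b-parameters.

```python
# Illustrative sketch (not the paper's implementation): scoring an LLM's
# difficulty estimates against gold-standard IRT parameters on two axes,
# absolute calibration and rank fidelity. All data here is fabricated.
import numpy as np
from scipy.stats import spearmanr

def score_difficulty_estimates(irt_b: np.ndarray, llm_est: np.ndarray) -> dict:
    """Compare LLM difficulty estimates to official IRT b-parameters.

    irt_b   : gold-standard item difficulty (IRT b-parameter), one per item
    llm_est : model-predicted difficulty on the same scale, one per item
    """
    bias = float(np.mean(llm_est - irt_b))         # < 0 means systematic underestimation
    mae = float(np.mean(np.abs(llm_est - irt_b)))  # absolute calibration error
    rho, p = spearmanr(llm_est, irt_b)             # rank fidelity (item ordering)
    return {"mean_bias": bias, "mae": mae, "spearman_rho": float(rho), "p_value": float(p)}

if __name__ == "__main__":
    # Toy usage with five fabricated items, ordered easy -> hard.
    irt_b = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
    llm_est = np.array([-1.5, -0.9, -0.3, 0.2, 0.9])  # plausible underestimates
    print(score_difficulty_estimates(irt_b, llm_est))
```

In a setup like this, a strongly positive Spearman correlation with a negative mean bias would mirror the abstract's finding: the model orders items reasonably well while systematically underestimating how hard they are.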