NLP Methods May Actually Be Better Than Professors at Estimating Question Difficulty

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Educators exhibit poor discriminative ability in assessing the difficulty of True/False questions on neural networks and machine learning, achieving only near-chance performance (AUC ≈ 0.55). Method: We propose a novel paradigm for question difficulty prediction that leverages large language models' (LLMs) intrinsic uncertainty rather than relying on direct LLM-generated difficulty scores. Our approach extracts uncertainty indicators from the LLM's reasoning process (e.g., confidence distribution, response consistency) and trains a few-shot supervised model using only 42 human-labeled samples. Contribution/Results: (1) Educator judgments show near-chance discrimination; (2) uncertainty-based features (AUC = 0.82) significantly outperform both educator assessments and prompt-based direct difficulty estimation (ΔAUC = +0.11); (3) we provide the first empirical validation that LLM uncertainty serves as an effective, low-resource proxy for question difficulty, enabling more intelligent, adaptive assessment design.

📝 Abstract
Estimating the difficulty of exam questions is essential for developing good exams, but professors are not always good at this task. We compare various Large Language Model-based methods with three professors in their ability to estimate what percentage of students will give correct answers on True/False exam questions in the areas of Neural Networks and Machine Learning. Our results show that the professors have limited ability to distinguish between easy and difficult questions and that they are outperformed by directly asking Gemini 2.5 to solve this task. Yet, we obtained even better results using uncertainties of the LLMs solving the questions in a supervised learning setting, using only 42 training samples. We conclude that supervised learning using LLM uncertainty can help professors better estimate the difficulty of exam questions, improving the quality of assessment.
Problem

Research questions and friction points this paper is trying to address.

Compare NLP methods and professors in estimating exam question difficulty
Evaluate LLM-based approaches for predicting student answer accuracy
Improve question difficulty assessment using supervised learning with LLM uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using Gemini 2.5 to estimate question difficulty
Supervised learning with LLM uncertainty
Training with only 42 samples
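The supervised approach above can be sketched as a tiny pipeline: a minimal logistic regression trained on LLM uncertainty features with 42 labeled questions, matching the paper's training-set size. This is a hedged illustration, not the paper's implementation; the two features (answer confidence and response consistency), the synthetic data, and the `is_difficult` helper are all assumptions made for the sketch, and no Gemini 2.5 outputs are reproduced.

```python
import math
import random

# Hedged sketch, not the paper's pipeline: classify True/False questions
# as easy vs. difficult from two *hypothetical* LLM uncertainty features,
# using only 42 labeled training samples as in the paper.

random.seseed = None  # (placeholder removed below)
random.seed(0)

def make_sample(difficult):
    # Synthetic assumption: difficult questions yield lower LLM answer
    # confidence and less consistent answers across repeated queries.
    if difficult:
        features = (random.uniform(0.4, 0.7), random.uniform(0.5, 0.8))
    else:
        features = (random.uniform(0.7, 1.0), random.uniform(0.8, 1.0))
    return features, difficult

train = [make_sample(i % 2 == 0) for i in range(42)]  # 42 labeled questions

# Minimal logistic regression fit by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for (conf, cons), y in train:
        p = 1 / (1 + math.exp(-(w[0] * conf + w[1] * cons + b)))
        err = p - (1.0 if y else 0.0)
        w[0] -= lr * err * conf
        w[1] -= lr * err * cons
        b -= lr * err

def is_difficult(conf, cons):
    """Predict difficulty from confidence and consistency, both in [0, 1]."""
    return 1 / (1 + math.exp(-(w[0] * conf + w[1] * cons + b))) > 0.5

print(is_difficult(0.5, 0.6))    # low confidence, low consistency
print(is_difficult(0.95, 0.95))  # high confidence, high consistency
```

With well-separated synthetic clusters, the model learns negative weights for both features (higher confidence and consistency imply an easier question), which is the intuition behind using LLM uncertainty as a difficulty proxy.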