🤖 AI Summary
This study addresses the subjectivity challenge in assessing open-ended responses within serious games. We systematically evaluate five lightweight, locally run large language models (LLMs) for their reliability in adjudicating player decision responses in the energy-community simulation game *En-join*. We propose a meta-evaluation framework tailored to small, localized models, quantifying accuracy, sensitivity (true positive rate), and specificity (true negative rate). Experiments are conducted across multiple scenarios using authentic in-game interaction data. Results reveal pronounced trade-offs among small models of differing architectures, particularly between identifying correct responses and controlling false positives, and underscore the importance of context-aware evaluation in making assessment more robust. The work offers practical model-selection guidance and contributes to the trustworthiness of AI-driven educational assessment tools.
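To make the adjudication setup concrete, the sketch below shows how a locally served small LLM might be prompted to return a binary verdict on a player response. The endpoint, model name, and prompt wording are hypothetical placeholders (an Ollama-style local API is assumed here), not the paper's actual implementation.

```python
import json
import urllib.request

# Hypothetical local serving endpoint (Ollama-style); the study's actual
# runtime and model choices may differ.
ENDPOINT = "http://localhost:11434/api/generate"
MODEL = "llama3.2:3b"  # placeholder for a small, locally run model

def adjudicate(scenario: str, player_response: str) -> bool:
    """Ask the local LLM for a binary VALID/INVALID verdict."""
    prompt = (
        "You are evaluating a player's answer in an energy-community "
        f"simulation game.\nScenario: {scenario}\n"
        f"Player response: {player_response}\n"
        "Reply with exactly one word: VALID or INVALID."
    )
    payload = json.dumps(
        {"model": MODEL, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        verdict = json.loads(resp.read())["response"].strip().upper()
    # "INVALID" does not start with "VALID", so this maps cleanly to a bool.
    return verdict.startswith("VALID")
```

Repeating such calls over the same inputs is one simple way to probe the consistency dimension the study examines.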
📝 Abstract
The evaluation of open-ended responses in serious games presents a unique challenge, as correctness is often subjective. Large Language Models (LLMs) are increasingly being explored as evaluators in such contexts, yet their accuracy and consistency remain uncertain, particularly for smaller models intended for local execution. This study investigates the reliability of five small-scale LLMs when assessing player responses in *En-join*, a game that simulates decision-making within energy communities. By leveraging traditional binary classification metrics (accuracy, true positive rate, and true negative rate), we systematically compare these models across different evaluation scenarios. Our results highlight the strengths and limitations of each model, revealing trade-offs between sensitivity, specificity, and overall performance. We demonstrate that while some models excel at identifying correct responses, others struggle with false positives or inconsistent evaluations. The findings underscore the need for context-aware evaluation frameworks and careful model selection when deploying LLMs as evaluators. This work contributes to the broader discourse on the trustworthiness of AI-driven assessment tools, offering insights into how different LLM architectures handle subjective evaluation tasks.
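For reference, the metrics named above reduce to standard confusion-matrix arithmetic over paired ground-truth and model judgments. A minimal sketch follows; the toy labels are illustrative only, not the study's data.

```python
def binary_metrics(gold: list[bool], predicted: list[bool]) -> dict[str, float]:
    """Accuracy, sensitivity (TPR), and specificity (TNR) from paired
    ground-truth labels and model verdicts."""
    tp = sum(g and p for g, p in zip(gold, predicted))          # true positives
    tn = sum(not g and not p for g, p in zip(gold, predicted))  # true negatives
    fp = sum(not g and p for g, p in zip(gold, predicted))      # false positives
    fn = sum(g and not p for g, p in zip(gold, predicted))      # false negatives
    return {
        "accuracy": (tp + tn) / len(gold),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # true positive rate
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # true negative rate
    }

# Toy example: gold labels vs. one model's verdicts (hypothetical values)
gold = [True, True, False, False, True]
pred = [True, False, False, True, True]
print(binary_metrics(gold, pred))
# -> accuracy 0.6, sensitivity ~0.667, specificity 0.5
```

A model that flags everything as correct scores perfect sensitivity but zero specificity, which is exactly the false-positive failure mode the study contrasts across architectures.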