Evaluating Language Models' Evaluations of Games

📅 2025-10-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates how well language and reasoning models assess the fairness and funness of board games, benchmarking their judgments against those of people and symbolic computational agents. It introduces a formalism for evaluating such evaluations along two dimensions: how complex a query is to compute and how difficult it is to quantify. The benchmark comprises over 100 newly designed board games and over 450 human judgments. Results show that reasoning models align more closely with human assessments than non-reasoning language models, yet are markedly less consistent when judging funness than fairness. Counterintuitively, the relationship is non-monotonic: as models approach game-theoretic optimality, their fit to human judgments weakens, and their computational resource usage becomes highly variable and unpredictable. The core contribution is the first benchmark explicitly targeting AI systems' evaluations of games, revealing a trade-off between efficiency and consistency that stems from limited meta-reasoning capacity in current models. This work provides both a theoretical framework and an empirical foundation for developing trustworthy AI evaluation systems.
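To make the benchmarking idea concrete, the sketch below shows one hypothetical way to measure alignment between a model's game evaluations and human judgments using Spearman rank correlation. The data, rating scale, and names (`human_ratings`, `model_ratings`, `alignment`) are illustrative assumptions, not the paper's actual benchmark format or pipeline.

```python
# Hypothetical illustration: comparing model and human game evaluations.
# Ratings, scale, and structure are assumptions made for this sketch.
from scipy.stats import spearmanr

# Per-game ratings on a shared scale (e.g., 1-7), keyed by game id.
human_ratings = {
    "game_01": {"fairness": 6.2, "funness": 4.8},
    "game_02": {"fairness": 3.1, "funness": 5.5},
    "game_03": {"fairness": 5.0, "funness": 2.9},
}
model_ratings = {
    "game_01": {"fairness": 5.9, "funness": 3.2},
    "game_02": {"fairness": 2.8, "funness": 5.1},
    "game_03": {"fairness": 5.4, "funness": 4.4},
}

def alignment(query: str) -> float:
    """Spearman rank correlation between model and human ratings for one query."""
    games = sorted(human_ratings)
    human = [human_ratings[g][query] for g in games]
    model = [model_ratings[g][query] for g in games]
    rho, _ = spearmanr(human, model)
    return rho

for query in ("fairness", "funness"):
    print(f"{query}: rho = {alignment(query):.2f}")
```

A rank correlation is only one possible alignment measure; the point of the sketch is that fairness and funness can be scored separately per query, which is how differences in consistency across the two queries would surface.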

📝 Abstract
Reasoning is not just about solving problems -- it is also about evaluating which problems are worth solving at all. Evaluations of artificial intelligence (AI) systems have primarily focused on problem solving, historically by studying how models play games such as chess and Go. In this paper, we advocate for a new paradigm that assesses AI systems' evaluation of games. First, we introduce a formalism for evaluating such evaluations. We then leverage a large-scale dataset of over 100 novel board games and over 450 human judgments to compare evaluations produced by modern language and reasoning models against those of people and symbolic computational agents. We consider two kinds of evaluative queries: assessing the payoff (or fairness) and the funness of games. These queries span two dimensions relevant to the design of evaluations of AI evaluations: how complex a query is to compute and how difficult a query is to quantify. Our results show that reasoning models are generally more aligned with people in their evaluations of games than non-reasoning language models. However, we observe a non-monotonic relationship: as models get closer to game-theoretic optimality, their fit to human data weakens. We also observe more "jaggedness" across models for assessing funness, in line with the greater difficulty of quantifying this query. Across queries and games, reasoning models show highly variable and unpredictable resource usage when assessing queries, pointing to the importance of imbuing more resource-rational meta-reasoning in language and reasoning models.
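For intuition about the payoff (fairness) query, a symbolic computational agent could estimate a game's first-player advantage by uniform-random self-play rollouts. The sketch below is a minimal, hypothetical illustration: the toy `NimGame` class and the `rollout_payoff` helper are assumptions made here, not the paper's games or code.

```python
# Minimal sketch of a symbolic baseline for the payoff/fairness query:
# estimate the first-player advantage via uniform-random self-play rollouts.
# The game interface below is assumed for illustration, not taken from the paper.
import random
from dataclasses import dataclass

@dataclass
class NimGame:
    """Toy two-player game (single-pile Nim) standing in for a benchmark game."""
    stones: int = 7
    player: int = 0  # player 0 moves first

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def play(self, n):
        self.stones -= n
        self.player = 1 - self.player

    def winner(self):
        # The player who takes the last stone wins; None while the game is ongoing.
        return None if self.stones > 0 else 1 - self.player

def rollout_payoff(make_game, n_rollouts=10_000, seed=0):
    """Expected payoff to the first player (+1 win / -1 loss) under random play."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_rollouts):
        game = make_game()
        while game.winner() is None:
            game.play(rng.choice(game.legal_moves()))
        total += 1 if game.winner() == 0 else -1
    return total / n_rollouts

# A value near 0 suggests a fair game; values far from 0 indicate a first- or
# second-player advantage.
print(f"estimated first-player payoff under random play: {rollout_payoff(NimGame):+.3f}")
```

An estimate like this makes the "complexity to compute" dimension tangible: the payoff query can be approximated mechanically by simulation, whereas funness has no comparably direct symbolic estimator, which is consistent with it being harder to quantify.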
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI systems' ability to assess game worthiness and fairness
Comparing language models' game evaluations against human judgment benchmarks
Analyzing reasoning models' alignment with human perceptions of funness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalizing evaluation of AI game assessments
Comparing model evaluations with human judgments
Analyzing resource usage in reasoning models