🤖 AI Summary
This study addresses the limitations of existing AI reasoning benchmarks in evaluating how closely large language models (LLMs) align with human-like reasoning in naturalistic settings. To bridge this gap, the authors introduce a novel benchmark grounded in the narrative structure of the *Watson & Holmes* detective board game, which presents evidence incrementally, poses open-ended questions, and accepts free-form textual responses. The framework operationalizes naturalistic reasoning as a quantifiable evaluation paradigm, supported by an automated scoring system, validated against human assessors, that enables scalable assessment. Experiments over a nine-month period in 2025 show that LLM performance rose from roughly the 25th to the 95th percentile of the human comparison group, with incremental model releases and the shift to reasoning-oriented architectures each contributing approximately half of the gains. Reasoning models exhibit an inductive advantage early in a case, when evidence is still scant, but model performance markedly declines on longer cases (1,900–4,000 words).
📝 Abstract
Existing benchmarks for AI reasoning provide limited insight into how closely these capabilities resemble human reasoning in naturalistic contexts. We present an adaptation of the *Watson & Holmes* detective tabletop game as a new benchmark designed to evaluate reasoning performance using incrementally presented narrative evidence, open-ended questions, and unconstrained language responses. An automated grading system was developed and validated against human assessors to enable scalable and replicable performance evaluation. Results show a clear improvement in AI performance over time: across nine months of 2025, model performance rose from the lower quartile of the human comparison group to approximately the top 5%. Around half of this improvement reflects steady advancement across successive model releases, while the remainder corresponds to a marked step change associated with reasoning-oriented model architectures. Systematic differences between AI and human performance as a function of case features were largely absent, with two exceptions: model performance fell on longer cases (case lengths of 1,900–4,000 words), and reasoning models showed an advantage at inductive reasoning in the early stages of case solving, when evidence was scant.
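To make the evaluation paradigm concrete, here is a minimal sketch of an incremental-evidence scoring loop of the kind the abstract describes. Every name in it (`Case`, `evaluate_case`, the token-overlap grader) is an illustrative stand-in, not the authors' harness: the paper's automated grader is validated against human assessors, whereas the placeholder below uses simple word overlap.

```python
# Hypothetical sketch of an incremental-evidence evaluation loop.
# Names and the grading heuristic are illustrative assumptions,
# not the paper's actual pipeline.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Case:
    clues: list[str]       # narrative evidence, revealed one piece at a time
    questions: list[str]   # open-ended questions about the case
    references: list[str]  # gold answers consumed by the automated grader

def grade_free_text(answer: str, reference: str) -> float:
    """Toy stand-in for the automated grader (the paper validates its
    grader against human assessors); here, plain token overlap in [0, 1]."""
    a, r = set(answer.lower().split()), set(reference.lower().split())
    return len(a & r) / len(r) if r else 0.0

def evaluate_case(case: Case, model: Callable[[str, str], str]) -> list[float]:
    """Score `model` on one case, re-posing each question as evidence accrues,
    so early-stage (scant-evidence) and late-stage performance can be compared."""
    scores: list[float] = []
    revealed: list[str] = []
    for clue in case.clues:
        revealed.append(clue)                 # evidence is presented incrementally
        context = "\n".join(revealed)
        for question, reference in zip(case.questions, case.references):
            answer = model(context, question)  # free-form textual response
            scores.append(grade_free_text(answer, reference))
    return scores
```

Plugging an actual LLM call in for `model` and swapping the overlap heuristic for a validated automated judge would reproduce the shape, though not the substance, of the evaluation described above.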