The NazoNazo Benchmark: A Cost-Effective and Extensible Test of Insight-Based Reasoning in LLMs

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM evaluation suffers from benchmark saturation and data contamination, undermining validity and reliability. Method: We introduce Nazonazo, a low-cost, scalable benchmark for insight reasoning built from Japanese children's riddles and designed to assess creative, domain-knowledge-free reasoning. Its blind-test-set generation paradigm enables dynamic updates and contamination-resistant evaluation. We further employ human comparative experiments, extension-item testing, and chain-of-thought candidate tracing to uncover prevalent metacognitive deficits in LLMs, specifically systematic verification failures. Results: Of 38 state-of-the-art models evaluated, only GPT-5 approaches human performance (52.9% mean accuracy). Reasoning models significantly outperform non-reasoning ones, while parameter count shows no significant correlation with accuracy. This work establishes a novel paradigm and empirical foundation for evaluating and calibrating LLM insight.

📝 Abstract
Benchmark saturation and contamination undermine confidence in LLM evaluation. We present Nazonazo, a cost-effective and extensible benchmark built from Japanese children's riddles to test insight-based reasoning. Items are short (mostly one sentence), require no specialized domain knowledge, and can be generated at scale, enabling rapid refresh of blind sets when leakage is suspected. We evaluate 38 frontier models and 126 adults on 120 riddles. No model except GPT-5 is comparable to human performance (52.9% mean accuracy). Model comparison on an extended set of 201 items shows that reasoning models significantly outperform non-reasoning peers, while model size shows no reliable association with accuracy. Beyond aggregate accuracy, an informal candidate-tracking analysis of thought logs reveals many cases of verification failure: models often produce the correct solution among intermediate candidates yet fail to select it as the final answer, which we illustrate with representative examples observed in multiple models. Nazonazo thus offers a cost-effective, scalable, and easily renewable benchmark format that addresses the current evaluation crisis while also suggesting a recurrent meta-cognitive weakness, providing clear targets for future control and calibration methods.
Problem

Research questions and friction points this paper is trying to address.

Evaluating insight-based reasoning in large language models
Addressing benchmark saturation and contamination issues
Testing models' meta-cognitive verification capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Japanese children's riddles benchmark
Scalable insight-based reasoning evaluation
Candidate-tracking analysis reveals verification failures
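The candidate-tracking idea above can be sketched concretely: scan a model's chain-of-thought log for intermediate answer candidates, then flag a verification failure when the gold answer was produced as a candidate but not chosen as the final answer. This is a minimal illustrative sketch, not the paper's actual pipeline; the function names and the lenient string-matching rule are assumptions.

```python
def normalize(s: str) -> str:
    """Case-fold and strip whitespace for lenient answer matching (illustrative rule)."""
    return s.strip().lower()

def detect_verification_failure(candidates: list[str],
                                final_answer: str,
                                gold_answer: str) -> bool:
    """Return True if the gold answer appeared among the intermediate
    candidates but was not selected as the final answer."""
    gold = normalize(gold_answer)
    produced = any(normalize(c) == gold for c in candidates)
    selected = normalize(final_answer) == gold
    return produced and not selected

# Example: the model considers the correct answer, then discards it.
log_candidates = ["a ladder", "a mirror", "a shadow"]
print(detect_verification_failure(log_candidates, "a shadow", "a mirror"))  # True
```

In practice the hard part is extracting the candidate list from free-form reasoning traces, which the paper describes as an informal, example-driven analysis rather than an automated metric.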
Masaharu Mizumoto
School of Knowledge Science, Japan Advanced Institute of Science and Technology
Dat Nguyen
Postdoc - Harvard, Basis Institute
Graph Neural Networks · Program Analysis · Software Engineering · Program Synthesis · Computer Vision
Zhiheng Han
School of Knowledge Science, Japan Advanced Institute of Science and Technology
Jiyuan Fang
School of Knowledge Science, Japan Advanced Institute of Science and Technology
Heyuan Guan
School of Knowledge Science, Japan Advanced Institute of Science and Technology
Xingfu Li
School of Knowledge Science, Japan Advanced Institute of Science and Technology
Naoya Shiraishi
School of Knowledge Science, Japan Advanced Institute of Science and Technology
Xuyang Tian
School of Knowledge Science, Japan Advanced Institute of Science and Technology
Yo Nakawake
School of Knowledge Science, Japan Advanced Institute of Science and Technology
Le Minh Nguyen
School of Information Science, Japan Advanced Institute of Science and Technology