🤖 AI Summary
Current LLM evaluation suffers from benchmark saturation and data contamination, undermining validity and reliability. Method: We introduce Nazonazo, a low-cost, scalable benchmark for insight reasoning grounded in Japanese children's riddles, designed to assess creative, domain-knowledge-free reasoning. It features a blind-test-set generation paradigm that enables dynamic updates and contamination-resistant evaluation. We further employ human comparative experiments, extended-item testing, and chain-of-thought candidate tracing to uncover prevalent metacognitive deficits in LLMs, specifically systematic verification failures. Results: Of 38 state-of-the-art models evaluated, none except GPT-5 reaches human-level performance (52.9% mean accuracy). Reasoning models significantly outperform non-reasoning ones, yet parameter count shows no significant correlation with accuracy. This work establishes a novel paradigm and empirical foundation for evaluating and calibrating LLM insight.
📝 Abstract
Benchmark saturation and contamination undermine confidence in LLM evaluation. We present Nazonazo, a cost-effective and extensible benchmark built from Japanese children's riddles to test insight-based reasoning. Items are short (mostly one sentence), require no specialized domain knowledge, and can be generated at scale, enabling rapid refresh of blind sets when leakage is suspected. We evaluate 38 frontier models and 126 adults on 120 riddles. No model except GPT-5 is comparable to human performance (52.9% mean accuracy). Model comparison on an extended set of 201 items shows that reasoning models significantly outperform non-reasoning peers, while model size shows no reliable association with accuracy. Beyond aggregate accuracy, an informal candidate-tracking analysis of thought logs reveals many cases of verification failure: models often produce the correct solution among intermediate candidates yet fail to select it as the final answer, which we illustrate with representative examples observed in multiple models. Nazonazo thus offers a cost-effective, scalable, and easily renewable benchmark format that addresses the current evaluation crisis while also suggesting a recurrent metacognitive weakness, providing clear targets for future control and calibration methods.
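To make the candidate-tracking idea concrete, here is a minimal sketch of how such a verification-failure check could be implemented. This is an illustrative assumption, not the paper's actual analysis code: the function name, the record fields, and the simple substring-matching heuristic are all hypothetical.

```python
# Hypothetical sketch of candidate tracking over chain-of-thought logs:
# flag items where the gold answer appears among intermediate candidates
# in the model's thoughts but is not chosen as the final answer.
import unicodedata


def normalize(text: str) -> str:
    """NFKC-normalize and lower-case so surface variants still match."""
    return unicodedata.normalize("NFKC", text).lower().strip()


def is_verification_failure(gold: str, thought_log: str, final_answer: str) -> bool:
    """Return True if the gold answer was generated in the thought log
    yet does not appear in the final answer (a 'verification failure')."""
    gold_n = normalize(gold)
    mentioned_in_thoughts = gold_n in normalize(thought_log)
    chosen_as_final = gold_n in normalize(final_answer)
    return mentioned_in_thoughts and not chosen_as_final


# Toy example record; real runs would iterate over riddle items and
# per-model logs (field names here are assumptions).
record = {
    "gold": "かがみ",                          # "mirror"
    "thoughts": "候補: とけい, かがみ, まど ...",  # candidates listed in the CoT
    "final": "とけい",                          # model's final answer
}

if is_verification_failure(record["gold"], record["thoughts"], record["final"]):
    print("verification failure: correct candidate generated but not selected")
```

A substring heuristic like this only catches exact (normalized) matches; a fuller analysis would need fuzzy matching or human inspection, as the informal tracing described in the abstract suggests.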