🤖 AI Summary
Large reasoning models (LRMs) lack standardized benchmarks for evaluating System 1–style intuitive reasoning—fast, automatic, and non-sequential cognition. Method: We introduce S1-Bench, the first lightweight, multilingual, interdisciplinary benchmark for intuitive reasoning, grounded in dual-process cognitive theory. It employs simple question-answering tasks requiring no chain-of-thought reasoning and proposes novel metrics: first-response accuracy, response length ratio, and error accumulation rate. Contribution/Results: S1-Bench enables the first systematic quantification of intuitive reasoning in LRMs, revealing pervasive over-reasoning, excessively verbose outputs, and the “early-correct–late-wrong” phenomenon. Empirical evaluation across 22 state-of-the-art LRMs shows average response lengths 15.5× those of small models; over 60% of models produce correct initial answers yet persist in futile inference; and error rates rise significantly with generation step count—highlighting a critical bottleneck in System 1–System 2 synergy.
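The summary names three metrics but not their formulas. The sketch below gives one plausible reading, assuming each model output can be reduced to the ordered list of answers it states plus a token count; all names and exact definitions here are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Response:
    """One LRM output for a benchmark question (hypothetical structure)."""
    answers: list[str]   # answers stated, in order, within the reasoning trace
    gold: str            # reference answer
    num_tokens: int      # length of the full output

def first_response_accuracy(responses: list[Response]) -> float:
    """Fraction of responses whose *first* stated answer is already correct."""
    hits = sum(1 for r in responses if r.answers and r.answers[0] == r.gold)
    return hits / len(responses)

def response_length_ratio(lrm: list[Response], baseline: list[Response]) -> float:
    """Mean LRM output length divided by mean baseline (small LLM) length."""
    mean = lambda rs: sum(r.num_tokens for r in rs) / len(rs)
    return mean(lrm) / mean(baseline)

def error_accumulation_rate(responses: list[Response]) -> float:
    """Fraction of responses that state the correct answer at some point
    but end on a wrong one ('early-correct-late-wrong')."""
    flips = sum(
        1 for r in responses
        if r.answers and r.gold in r.answers and r.answers[-1] != r.gold
    )
    return flips / len(responses)
```

Under this reading, first_response_accuracy captures whether the model's initial answer is already correct, while error_accumulation_rate counts responses that reach the gold answer and then drift away from it during continued deliberation.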
📝 Abstract
We introduce S1-Bench, a novel benchmark designed to evaluate the performance of Large Reasoning Models (LRMs) on simple tasks that favor intuitive System 1 thinking over deliberative System 2 reasoning. While LRMs have achieved significant breakthroughs in complex reasoning tasks through explicit chains of thought, their reliance on deep analytical thinking may limit their System 1 capabilities, and no benchmark currently exists to evaluate LRMs on tasks that demand such capabilities. To fill this gap, S1-Bench presents a set of simple, diverse, and naturally clear questions across multiple domains and languages, specifically designed to assess LRMs' performance on such tasks. Our comprehensive evaluation of 22 LRMs reveals a marked tendency toward inefficiency, with outputs averaging 15.5 times longer than those of traditional small LLMs. Additionally, LRMs often identify the correct answer early but continue deliberating unnecessarily, with some models even producing numerous errors along the way. These findings highlight the rigid reasoning patterns of current LRMs and underscore the substantial development needed before they achieve balanced dual-system thinking that adapts appropriately to task complexity.