🤖 AI Summary
Large language models (LLMs) exhibit a dual deficiency: "overthinking" (redundant computation on simple tasks) and "underthinking" (insufficient reasoning on complex tasks), with no unified framework to adaptively balance efficiency and reasoning performance. Method: We introduce OptimalThinkingBench, the first dual-mode benchmark, pairing OverthinkingBench (simple queries spanning 72 domains) with UnderthinkingBench (11 challenging reasoning tasks), evaluated using novel thinking-adjusted accuracy metrics. Contribution/Results: Evaluating 33 models reveals pervasive overthinking or underthinking behaviors; surprisingly, large non-thinking models underperform much smaller thinking models on complex tasks; and existing optimization techniques fail to improve both capabilities jointly. This work establishes the first principled benchmark, metrics, and empirical foundation for developing optimally-thinking models that combine high efficiency with robust reasoning.
📝 Abstract
Thinking LLMs solve complex tasks at the expense of increased compute and overthinking on simpler problems, while non-thinking LLMs are faster and cheaper but underthink on harder reasoning problems. This has led to the development of separate thinking and non-thinking LLM variants, leaving the onus of selecting the optimal model for each query on the end user. In this work, we introduce OptimalThinkingBench, a unified benchmark that jointly evaluates overthinking and underthinking in LLMs and also encourages the development of optimally-thinking models that balance performance and efficiency. Our benchmark comprises two sub-benchmarks: OverthinkingBench, featuring simple queries in 72 domains, and UnderthinkingBench, containing 11 challenging reasoning tasks. Using novel thinking-adjusted accuracy metrics, we perform extensive evaluation of 33 different thinking and non-thinking models and show that no model is able to optimally think on our benchmark. Thinking models often overthink for hundreds of tokens on the simplest user queries without improving performance. In contrast, large non-thinking models underthink, often falling short of much smaller thinking models. We further explore several methods to encourage optimal thinking, but find that these approaches often improve on one sub-benchmark at the expense of the other, highlighting the need for better unified and optimal models in the future.
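The abstract mentions "thinking-adjusted accuracy metrics" without defining them here. As a purely hypothetical illustration of what such a metric could look like (the function name, token budget, and linear-decay rule below are all assumptions, not the paper's actual formulation), one might discount correct answers that spend excessive thinking tokens on simple queries:

```python
# Hypothetical sketch: a "thinking-adjusted" accuracy that rewards correct
# answers but penalizes excess thinking tokens on simple queries.
# NOT the paper's metric; budget and decay rule are illustrative assumptions.

def thinking_adjusted_accuracy(results, token_budget=100):
    """results: list of (correct: bool, thinking_tokens: int) pairs.
    A correct answer earns full credit within `token_budget` thinking
    tokens; credit decays linearly to zero as usage exceeds the budget."""
    scores = []
    for correct, tokens in results:
        if not correct:
            scores.append(0.0)
            continue
        # Full credit within budget; linear decay beyond it.
        overshoot = max(0.0, (tokens - token_budget) / token_budget)
        scores.append(max(0.0, 1.0 - overshoot))
    return sum(scores) / len(scores)

# Example: a concise correct answer, a verbose correct answer, one error.
print(thinking_adjusted_accuracy([(True, 50), (True, 150), (False, 20)]))
# → 0.5
```

Under a scheme like this, a model that overthinks (many tokens on easy queries) scores below its raw accuracy, while an efficient model keeps full credit, capturing the efficiency/performance trade-off the benchmark targets.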