🤖 AI Summary
Large language models (LLMs) such as the o1 series exhibit "overthinking": they expend excessive computational effort on simple tasks, wasting inference resources.
Method: This paper proposes an evaluation and optimization framework that jointly prioritizes output accuracy and inference efficiency. It systematically characterizes the overthinking phenomenon for the first time and introduces a dual-perspective efficiency metric integrating output quality and computational cost. The framework employs a self-training paradigm requiring no human annotations, combining dynamic chain-of-thought (CoT) pruning with resource-aware inference scheduling.
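The "dual-perspective" metric above scores a response by both its correctness and how much of the generated reasoning was actually needed. As a minimal sketch, one plausible outcome-side form is the fraction of generated tokens required to reach the first correct answer; the function name, signature, and weighting here are illustrative assumptions, not the paper's exact definition.

```python
def outcome_efficiency(is_correct: bool,
                       tokens_to_first_correct: int,
                       total_tokens: int) -> float:
    """Score in [0, 1]: the fraction of generated tokens that were
    needed to reach the first correct answer; 0 if the final answer
    is wrong. (Hypothetical formulation for illustration.)"""
    if not is_correct or total_tokens == 0:
        return 0.0
    return tokens_to_first_correct / total_tokens

# Example: a correct answer reached after 120 of 600 generated tokens
score = outcome_efficiency(True, 120, 600)  # 0.2
```

Under such a metric, a model that answers "2+3=5" correctly in a few tokens scores near 1.0, while one that appends many redundant verification rounds after the same correct answer scores much lower, which is exactly the behavior the pruning strategy targets.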
Contribution/Results: Evaluated on benchmarks of varying difficulty (GSM8K, MATH500, GPQA, and AIME), the method maintains original accuracy while significantly reducing inference FLOPs and latency. It generalizes well across tasks, offering a practical pathway toward efficient, scalable LLM inference.
📝 Abstract
The remarkable performance of models like OpenAI o1 can be attributed to their ability to emulate human-like long-form thinking during inference. These models employ extended chain-of-thought (CoT) processes, exploring multiple strategies to enhance problem-solving capabilities. However, a critical question remains: how can computational resources be scaled intelligently and efficiently at test time? This paper presents the first comprehensive study of the prevalent issue of overthinking in these models, where excessive computational resources are allocated to simple problems with minimal benefit. We introduce novel efficiency metrics from both outcome and process perspectives to evaluate the rational use of computational resources by o1-like models. Using a self-training paradigm, we propose strategies to mitigate overthinking, streamlining reasoning processes without compromising accuracy. Experimental results show that our approach successfully reduces computational overhead while preserving model performance across test sets of varying difficulty, such as GSM8K, MATH500, GPQA, and AIME.