🤖 AI Summary
Large language models (LLMs) suffer from inefficient inference—fixed computational budgets poorly align with varying task complexity, causing over-computation on simple tasks and under-computation on complex ones.
Method: This paper presents a systematic survey of adaptive and controllable test-time computation (TTC) strategies, organized in a two-level taxonomy: L1 (controlled inference under a fixed budget) and L2 (dynamic resource allocation). The surveyed techniques span dynamic scaling, confidence-guided early exiting, and hybrid inference modes, which together aim to jointly optimize token efficiency and task performance.
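As an illustration of the confidence-guided early exiting idea mentioned above, here is a minimal, self-contained sketch. The function names (`confidence_guided_generate`, `stub_model`) and the exponential confidence curve are hypothetical placeholders for illustration, not APIs or methods from the paper: the point is simply that generation halts once a confidence signal crosses a threshold, so easy inputs consume fewer reasoning steps than the maximum budget.

```python
import math

def confidence_guided_generate(score_step, max_steps=8, threshold=0.9):
    """Toy sketch of confidence-guided early exiting (L2-adaptiveness).

    score_step: callable mapping a step index to (token, confidence in [0, 1]).
    Generation stops as soon as confidence reaches the threshold, saving
    the remaining compute budget on inputs the model is already sure about.
    """
    steps = []
    for i in range(max_steps):
        token, conf = score_step(i)
        steps.append(token)
        if conf >= threshold:
            break  # early exit: confident enough to stop spending compute
    return steps

# Stub "model": confidence grows with each additional reasoning step.
def stub_model(i):
    return f"step{i}", 1 - math.exp(-(i + 1) / 2)

out = confidence_guided_generate(stub_model, max_steps=8, threshold=0.9)
# Exits after 5 of the 8 budgeted steps once confidence first exceeds 0.9.
```

In a real system the confidence signal might come from token log-probabilities or a learned verifier; the threshold then becomes the knob trading accuracy against token cost.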
Contribution/Results: Empirical evaluation of mainstream closed-source LLMs across multiple benchmarks quantifies the trade-off between reasoning performance and computational cost. The work delivers both a conceptual framework and an empirical benchmark for efficient, user-constrained, and resource-adaptive LLM inference, emphasizing practical control, scalability, and responsiveness to user-specified constraints.
📝 Abstract
Large language models (LLMs) have rapidly progressed into general-purpose agents capable of solving a broad spectrum of tasks. However, current models remain inefficient at reasoning: they apply fixed inference-time compute regardless of task complexity, often overthinking simple problems while underthinking hard ones. This survey presents a comprehensive review of efficient test-time compute (TTC) strategies, which aim to improve the computational efficiency of LLM reasoning. We introduce a two-tiered taxonomy that distinguishes between L1-controllability (methods that operate under fixed compute budgets) and L2-adaptiveness (methods that dynamically scale inference based on input difficulty or model confidence). We benchmark leading proprietary LLMs across diverse datasets, highlighting critical trade-offs between reasoning performance and token usage. Compared to prior surveys on efficient reasoning, our review emphasizes the practical control, adaptability, and scalability of TTC methods. Finally, we discuss emerging trends such as hybrid thinking models and identify key challenges for future work toward making LLMs more computationally efficient, robust, and responsive to user constraints.