🤖 AI Summary
Test-time scaling (TTS) with a single temperature constrains the reasoning potential of large language models (LLMs): a uniform temperature fails to activate complementary reasoning capabilities across the difficulty spectrum, so conventional single-temperature, multi-sample strategies explore only a local region of the solution space. Method: We propose "Temperature-Dimensional Scaling" (TDS), a TTS paradigm that samples in parallel across multiple temperatures and integrates the outputs via temperature-weighted voting to systematically unlock latent reasoning capacity. Contribution/Results: Extensive experiments on the Qwen3 model family across five major reasoning benchmarks show that TDS achieves an average accuracy improvement of 7.3 percentage points. It significantly extends the inherent reasoning boundary of base LLMs without reinforcement-learning fine-tuning, effectively approaching their upper-bound performance.
📝 Abstract
Large language models (LLMs) can improve reasoning at inference time through test-time scaling (TTS), where multiple reasoning traces are generated and the best one is selected. Prior work shows that increasing the number of samples K steadily improves accuracy. In this paper, we demonstrate that this trend does not hold indefinitely: at large K, further scaling yields no gains, and certain hard questions remain unsolved regardless of the number of traces. Interestingly, we find that different sampling temperatures solve different subsets of problems, implying that single-temperature scaling explores only part of a model's potential. We therefore propose scaling along the temperature dimension, which enlarges the reasoning boundary of LLMs. Averaged over Qwen3 (0.6B, 1.7B, 4B, 8B) and five representative reasoning benchmarks (AIME 2024/2025, MATH500, LiveCodeBench, Hi-ToM), temperature scaling yields an additional 7.3 points over single-temperature TTS. Temperature scaling also enables base models to reach performance comparable to reinforcement learning (RL)-trained counterparts, without additional post-training. We further provide a comprehensive analysis of this phenomenon and design a multi-temperature voting method that reduces the overhead of temperature scaling. Overall, our findings suggest that TTS is more powerful than previously thought, and that temperature scaling offers a simple and effective way to unlock the latent potential of base models.
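The multi-temperature voting idea described above can be sketched in a few lines: answers sampled at several temperatures are pooled, each vote is scaled by a per-temperature weight, and the highest-scoring answer wins. This is an illustrative assumption about the aggregation step, not the paper's exact scheme; the function name, the example temperatures, and the uniform default weighting are all hypothetical.

```python
from collections import defaultdict

def temperature_weighted_vote(samples, temp_weights=None):
    """Aggregate answers sampled at several temperatures.

    samples: list of (temperature, answer) pairs, one per reasoning trace.
    temp_weights: optional dict mapping temperature -> vote weight;
        defaults to uniform weighting (an assumption for illustration --
        the paper's actual weighting scheme may differ).
    """
    scores = defaultdict(float)
    for temp, answer in samples:
        weight = 1.0 if temp_weights is None else temp_weights.get(temp, 0.0)
        scores[answer] += weight
    # Return the answer with the highest aggregate weight.
    return max(scores, key=scores.get)

# Hypothetical traces: three samples at each of T = 0.2, 0.7, 1.0.
samples = [
    (0.2, "42"), (0.2, "42"), (0.2, "41"),
    (0.7, "42"), (0.7, "17"), (0.7, "17"),
    (1.0, "17"), (1.0, "42"), (1.0, "42"),
]
print(temperature_weighted_vote(samples))  # "42" wins 5-to-3 under uniform weights
```

Passing a `temp_weights` dict lets the aggregation favor temperatures that historically solve more problems, which is one natural way to reduce the overhead of sampling at every temperature equally.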