🤖 AI Summary
This work addresses the computational inefficiency of existing test-time search methods, which treat reasoning trajectories as one-off samples and neglect intermediate insights, leading to redundant computation. To overcome this limitation, the authors propose Recycling Search Experience (RSE), a training-free, self-guided approach that enables cumulative exploration of test-time search experience. RSE explicitly reuses successful reasoning paths (positive guidance) and prunes recurring failure patterns (negative guidance) by maintaining a shared experience repository. This mechanism breaks the memoryless bottleneck inherent in conventional search strategies. Evaluated on benchmarks including HMMT24, HMMT25, IMO-Bench, and HLE, RSE achieves state-of-the-art test-time scaling efficiency, significantly outperforming strong baselines under identical computational budgets.
📝 Abstract
Test-Time Scaling enhances the reasoning capabilities of Large Language Models by allocating additional inference compute to broaden the exploration of the solution space. However, existing search strategies typically treat rollouts as disposable samples, where valuable intermediate insights are effectively discarded after each trial. This systemic memorylessness leads to massive computational redundancy, as models repeatedly re-derive discovered conclusions and revisit known dead ends across extensive attempts. To bridge this gap, we propose **Recycling Search Experience (RSE)**, a self-guided, training-free strategy that turns test-time search from a series of isolated trials into a cumulative process. By actively distilling raw trajectories into a shared experience bank, RSE enables positive recycling of intermediate conclusions to shortcut redundant derivations and negative recycling of failure patterns to prune encountered dead ends. Theoretically, we provide an analysis that formalizes the efficiency gains of RSE, validating its advantage over independent sampling in solving complex reasoning tasks. Empirically, extensive experiments on HMMT24, HMMT25, IMO-Bench, and HLE show that RSE consistently outperforms strong baselines with comparable computational cost, achieving state-of-the-art scaling efficiency.
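The mechanism described above, distilling rollouts into a shared experience bank whose positive and negative entries guide subsequent attempts, can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: all names (`ExperienceBank`, `rse_search`, `rollout_fn`) and the string-based guidance format are illustrative assumptions.

```python
# Hypothetical sketch of the experience-recycling loop described in the abstract.
# Names and the plain-text guidance format are illustrative assumptions.

class ExperienceBank:
    """Shared store of insights distilled from past search rollouts."""

    def __init__(self):
        self.positive = []  # verified intermediate conclusions to reuse
        self.negative = []  # recurring failure patterns to prune

    def distill(self, trajectory, succeeded):
        # Positive recycling: keep conclusions that shortcut re-derivation.
        # Negative recycling: record dead ends so later rollouts avoid them.
        (self.positive if succeeded else self.negative).append(trajectory)

    def build_guidance(self):
        """Fold accumulated experience into the next rollout's context."""
        hints = [f"Known result: {p}" for p in self.positive]
        hints += [f"Avoid this dead end: {n}" for n in self.negative]
        return "\n".join(hints)


def rse_search(problem, rollout_fn, budget):
    """Cumulative search: each rollout is conditioned on prior experience,
    unlike independent sampling, where every attempt starts from scratch."""
    bank = ExperienceBank()
    for _ in range(budget):
        answer, trajectory, succeeded = rollout_fn(problem, bank.build_guidance())
        if succeeded:
            return answer
        bank.distill(trajectory, succeeded)
    return None
```

Here `rollout_fn` stands in for one model attempt that receives the accumulated guidance and returns its answer, its trajectory, and a success flag; the key contrast with memoryless search is that failed trajectories still contribute information to every later attempt.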