Policy of Thoughts: Scaling LLM Reasoning via Test-time Policy Evolution

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of large language models on complex, long-horizon reasoning tasks: under a frozen-policy assumption, they cannot leverage execution feedback to continuously refine their reasoning strategies. Inspired by Popper's philosophy of "conjectures and refutations," the authors propose a test-time policy evolution mechanism that reframes single-instance reasoning as an online optimization problem. The approach efficiently explores diverse candidate solutions and dynamically updates lightweight, transient LoRA adapters via Group Relative Policy Optimization (GRPO) based on execution feedback, enabling closed-loop, real-time evolution of reasoning strategies. Evaluated on LiveCodeBench, a 4B-parameter model achieves 49.71% accuracy, surpassing both GPT-4o and DeepSeek-V3 despite being over 50 times smaller in scale.

📝 Abstract
Large language models (LLMs) struggle with complex, long-horizon reasoning due to instability caused by their frozen-policy assumption. Current test-time scaling methods treat execution feedback merely as an external signal for filtering or rewriting trajectories, without internalizing it to improve the underlying reasoning strategy. Inspired by Popper's epistemology of "conjectures and refutations," we argue that intelligence requires real-time evolution of the model's policy through learning from failed attempts. We introduce Policy of Thoughts (PoT), a framework that recasts reasoning as a within-instance online optimization process. PoT first generates diverse candidate solutions via an efficient exploration mechanism, then uses Group Relative Policy Optimization (GRPO) to update a transient LoRA adapter based on execution feedback. This closed-loop design enables dynamic, instance-specific refinement of the model's reasoning priors. Experiments show that PoT dramatically boosts performance: a 4B model achieves 49.71% accuracy on LiveCodeBench, outperforming GPT-4o and DeepSeek-V3 despite being over 50× smaller.
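The core of the GRPO step described above is a group-relative baseline: each sampled candidate's reward (here, execution feedback such as test pass rate) is normalized against the mean and standard deviation of its own group, and that advantage weights the policy-gradient update to the transient adapter. A minimal sketch of the advantage computation, with hypothetical names and toy rewards standing in for real execution feedback:

```python
# Sketch of GRPO's group-relative advantage (hypothetical names; the paper's
# actual pipeline applies these advantages to LoRA-adapter gradient updates).
import statistics

def group_relative_advantages(rewards):
    """Normalize each candidate's reward against its sampled group:
    A_i = (r_i - mean(r)) / std(r)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Toy "execution feedback": fraction of unit tests passed by each candidate.
rewards = [0.0, 0.5, 1.0, 0.5]
advantages = group_relative_advantages(rewards)
# Candidates above the group mean get positive advantage and are reinforced;
# below-mean candidates ("refuted conjectures") are pushed down.
```

Because the baseline is computed within the sampled group rather than by a learned value model, the update needs no critic, which is what makes a per-instance, test-time optimization loop cheap enough to run.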
Problem

Research questions and friction points this paper is trying to address.

large language models
reasoning
test-time scaling
policy evolution
execution feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Policy of Thoughts
test-time policy evolution
Group Relative Policy Optimization
online reasoning optimization
LoRA adapter