🤖 AI Summary
This work addresses a key limitation of existing DeepThink methods: lacking reliable step-level correctness signals, they struggle to convert additional compute into accuracy, often amplifying errors and suppressing correct minority solutions during deep reasoning. To overcome this, the authors propose PRISM, an inference algorithm guided by a Process Reward Model (PRM). PRISM functionally decouples the DeepThink system and treats candidate solutions as particles in a PRM-defined energy landscape, combining score-guided resampling with stochastic refinement. This approach enables efficient and robust reasoning evolution while preserving solution diversity. Empirically, PRISM reaches 90.0%, 75.4%, and 71.4% accuracy with gpt-oss-20b on the AIME25, HMMT25, and GPQA Diamond benchmarks, respectively, matching or surpassing gpt-oss-120b and often lying on the compute-accuracy Pareto frontier.
📝 Abstract
DEEPTHINK methods improve reasoning by generating, refining, and aggregating populations of candidate solutions, which enables strong performance on complex mathematical and scientific tasks. However, existing frameworks often lack reliable correctness signals during inference, creating a population-enhancement bottleneck where deeper deliberation amplifies errors, suppresses correct minority solutions, and yields diminishing returns from additional compute. In this paper, we introduce a functional decomposition of DEEPTHINK systems and propose PRISM, a Process Reward Model (PRM)-guided inference algorithm that uses step-level verification to guide both population refinement and solution aggregation. During refinement, PRISM treats candidate solutions as particles in a PRM-defined energy landscape and reshapes the population through score-guided resampling and stochastic refinement, concentrating probability mass on higher-quality reasoning while preserving diversity. Across mathematics and science benchmarks, PRISM is competitive with or outperforms existing DEEPTHINK methods, reaching 90.0%, 75.4%, and 71.4% accuracy with gpt-oss-20b on AIME25, HMMT25, and GPQA Diamond, respectively, while matching or exceeding gpt-oss-120b. Additionally, our analysis shows that PRISM produces consistent net-directional correction during refinement, remains reliable when the initial population contains few correct candidates, and often lies on the compute-accuracy Pareto frontier.
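The score-guided resampling and stochastic refinement described above can be sketched as a generic particle-update loop. This is a minimal illustration, not the paper's implementation: `prm_score` is a hypothetical stand-in for a Process Reward Model's step-level verifier, `refine` is a hypothetical stochastic refinement operator, and the softmax temperature is an assumed diversity knob.

```python
import math
import random

def prism_like_step(particles, prm_score, refine, temperature=1.0, rng=None):
    """One round of PRM-guided population reshaping (illustrative sketch).

    particles : list of candidate solutions (here, arbitrary objects)
    prm_score : callable mapping a candidate to a scalar quality score
                (stand-in for a Process Reward Model)
    refine    : callable (candidate, rng) -> perturbed candidate
                (stand-in for stochastic refinement)
    """
    rng = rng or random.Random(0)
    scores = [prm_score(p) for p in particles]
    # Softmax weights over PRM scores: higher-scoring candidates get more
    # probability mass; a nonzero temperature keeps low-scoring minority
    # candidates alive, preserving population diversity.
    m = max(scores)
    weights = [math.exp((s - m) / temperature) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Multinomial resampling concentrates the population on higher-quality
    # regions of the PRM-defined energy landscape.
    resampled = rng.choices(particles, weights=weights, k=len(particles))
    # Stochastic refinement perturbs each surviving candidate.
    return [refine(p, rng) for p in resampled]
```

As a toy usage, one can take real-valued "candidates", score them by closeness to a target, and iterate: after a few rounds the population's average score rises while its size stays fixed, mirroring the compute-for-accuracy trade the abstract describes.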