Mitigating Premature Exploitation in Particle-based Monte Carlo for Inference-Time Scaling

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address premature exploitation and particle degeneracy in Particle Filtering (PF) for Inference-Time Scaling (ITS), failures driven by overconfident process reward models, this work proposes Entropic Particle Filtering (ePF), which pairs entropy-guided annealed resampling (Entropic Annealing) with a look-ahead prediction mechanism (Look-ahead Modulation). The former monitors the entropy of the particle distribution and applies temperature-based annealing when diversity collapses, mitigating premature convergence; the latter estimates a path's potential by evaluating prospective successor states, preserving high-value yet currently low-scoring hypotheses. Integrated into the standard PF framework, the method improves search robustness through two complementary pathways: diversity preservation and latent-potential identification. Experiments across multiple mathematical reasoning benchmarks demonstrate significant gains over strong baselines, including up to a 50% relative improvement in task reward, and notably alleviate particle impoverishment under constrained computational budgets.
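The entropy-guided annealing described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the entropy floor, the fixed temperature, and the trigger rule are all assumptions.

```python
import numpy as np

def anneal_weights(weights, entropy_floor=0.5, temperature=2.0):
    """Flatten particle weights with a temperature when their normalized
    entropy falls below a floor. A sketch of the Entropic Annealing idea;
    the paper's actual trigger and temperature schedule may differ."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # Normalized entropy in [0, 1]: 1 = uniform, 0 = one particle holds all mass.
    h = float(-np.sum(w * np.log(w + 1e-12)) / np.log(len(w)))
    if h < entropy_floor:
        # Raising weights to the power 1/T flattens the distribution,
        # restoring exploration before the resampling step.
        w = w ** (1.0 / temperature)
        w = w / w.sum()
    return w, h
```

A collapsed weight vector such as `[0.97, 0.01, 0.01, 0.01]` has low normalized entropy, so the annealing fires and the dominant particle's weight shrinks, leaving more resampling mass for the alternatives.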

📝 Abstract
Inference-Time Scaling (ITS) improves language models by allocating more computation at generation time. Particle Filtering (PF) has emerged as a strong ITS method for complex mathematical reasoning tasks, but it is vulnerable when guided by process reward models, which often assign overconfident scores early in the reasoning process. This causes PF to suffer from premature exploitation: it myopically commits to locally promising trajectories, prunes potentially correct hypotheses, and converges to suboptimal solutions. This failure mode, known as particle impoverishment, is especially severe under constrained computational budgets. To address this, we analyze the problem and identify two root causes: a lack of diversity in the particle set due to overconfident resampling, and a consequent inability to assess the potential of a reasoning path. We introduce Entropic Particle Filtering (ePF), an algorithm that integrates two new techniques to solve these issues. The first technique, Entropic Annealing (EA), directly mitigates particle impoverishment by monitoring search diversity via entropy; when diversity drops, it intervenes by dynamically annealing the resampling distribution to preserve exploration. The second, an enhancement called Look-ahead Modulation (LaM), adds a predictive guide to evaluate a state's potential based on its successors. On several challenging math benchmarks, ePF significantly outperforms strong baselines and achieves up to a 50% relative improvement in task reward. Together, these methods improve PF's resilience by balancing the exploration of diverse solution spaces with the exploitation of high-reward regions, ultimately leading to higher-quality solutions.
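The look-ahead idea in the abstract, scoring a state partly by its successors so that a currently low-scoring but promising path survives resampling, can be sketched with a simple mixing rule. The linear blend and the `mix` parameter are assumptions for illustration; the paper's exact Look-ahead Modulation may combine the signals differently.

```python
import numpy as np

def lookahead_score(current_reward, successor_rewards, mix=0.5):
    """Blend a state's immediate process reward with the mean reward of a
    few sampled successor states. Hypothetical form of Look-ahead
    Modulation; the combination rule is an assumption, not the paper's."""
    if len(successor_rewards) == 0:
        return float(current_reward)
    return (1 - mix) * current_reward + mix * float(np.mean(successor_rewards))
```

Under this rule, a state with reward 0.2 whose sampled continuations score around 0.9 outranks a state with reward 0.5 whose continuations score around 0.1, which is exactly the kind of hypothesis an overconfident reward model would otherwise prune.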
Problem

Research questions and friction points this paper is trying to address.

Particle Filtering suffers from premature exploitation in reasoning tasks
Overconfident rewards cause diversity loss and suboptimal solution convergence
Constrained computational budgets worsen particle impoverishment in inference scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entropic Annealing monitors entropy to preserve particle diversity
Look-ahead Modulation evaluates state potential using successor predictions
Entropic Particle Filtering balances exploration and exploitation in reasoning
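The three points above can be pictured as one particle-filtering iteration that uses both mechanisms. This is a schematic sketch: `extend_step`, `process_reward`, the softmax weighting, and all thresholds are hypothetical stand-ins, not the paper's API.

```python
import numpy as np

def epf_step(particles, extend_step, process_reward, n_lookahead=2,
             entropy_floor=0.5, temperature=2.0, mix=0.5, rng=None):
    """One sketched ePF iteration: extend each partial trajectory, score it
    with a look-ahead-modulated reward, then resample with entropic
    annealing. All helpers and constants are illustrative assumptions."""
    rng = rng or np.random.default_rng(0)
    # 1. Propose: extend every particle by one reasoning step.
    extended = [extend_step(p, rng) for p in particles]
    # 2. Score: blend each state's reward with its sampled successors' rewards.
    scores = []
    for p in extended:
        r = process_reward(p)
        succ = [process_reward(extend_step(p, rng)) for _ in range(n_lookahead)]
        scores.append((1 - mix) * r + mix * float(np.mean(succ)))
    # 3. Resample, flattening the weights when their entropy collapses.
    w = np.exp(np.asarray(scores))
    w = w / w.sum()
    h = -np.sum(w * np.log(w + 1e-12)) / np.log(len(w))
    if h < entropy_floor:
        w = w ** (1.0 / temperature)
        w = w / w.sum()
    idx = rng.choice(len(extended), size=len(extended), p=w)
    return [extended[i] for i in idx]
```

With toy stand-ins (a random-walk `extend_step` and a distance-based `process_reward`), the step keeps the particle count fixed while redistributing mass away from a single dominant trajectory.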