Optimal Aggregation of LLM and PRM Signals for Efficient Test-Time Scaling

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficient use of process reward model (PRM) verification signals in test-time scaling (TTS). We propose a theory-driven weighted aggregation framework that replaces conventional majority voting or fixed-weight strategies with a learnable, model-dependent weighting function. Crucially, we demonstrate theoretically and empirically, for the first time, the essential role of negative weights in PRM signal fusion. Combined with pre-computed calibration, our method enables efficient inference. Experiments across five large language models (LLMs) and seven PRMs show that our approach outperforms standard weighted majority voting using only 21.3% of the computational overhead, significantly improving both TTS efficiency and response quality. Our core contributions are threefold: (i) establishing an optimal fusion theory for LLM-generated and PRM-verified signals; (ii) introducing a principled negative-weight mechanism; and (iii) proposing a generalizable weight-learning paradigm.

📝 Abstract
Process reward models (PRMs) are a cornerstone of test-time scaling (TTS), designed to verify and select the best responses from large language models (LLMs). However, this promise is challenged by recent benchmarks where simple majority voting, which ignores PRM signals, occasionally outperforms standard PRM-based selection. This raises a critical question: How can we effectively utilize verification signals from PRMs for TTS? To address this, we start by developing a theoretical framework for optimally combining signals from both the LLM and the PRM. Our framework reveals that the optimal strategy is a weighted aggregation of responses, a strategy whose effectiveness hinges on estimating weights that capture the complex interplay between the models. Based on our theoretical results, we empirically show that these optimal weighting functions differ significantly across LLM-PRM pairs and, notably, often assign substantial negative weights. Motivated by these insights, we propose efficient pre-computation methods to calibrate these weighting functions. Extensive experiments across 5 LLMs and 7 PRMs demonstrate that our calibration method significantly boosts TTS efficiency, surpassing the performance of vanilla weighted majority voting while using only 21.3% of the computation. Ultimately, our work demonstrates that investing in a more intelligent aggregation strategy can be a more convincing path to performance gains than simply scaling test-time computation.
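The abstract's core idea, selecting an answer by a weighted aggregation of sampled responses rather than raw counts, can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's exact formulation: the hypothetical weighting function `w` maps a PRM score in [0, 1] to a real-valued weight that may be negative, so a low PRM score counts *against* an answer instead of merely counting for it less.

```python
# Sketch: weighted aggregation of LLM responses scored by a PRM,
# contrasted with plain majority voting. Names and the toy weighting
# function are illustrative assumptions, not the paper's method.
from collections import defaultdict


def majority_vote(answers):
    """Baseline: pick the most frequent final answer, ignoring PRM scores."""
    counts = defaultdict(int)
    for a in answers:
        counts[a] += 1
    return max(counts, key=counts.get)


def weighted_aggregate(answers, prm_scores, w):
    """Pick the answer whose responses accumulate the largest total weight.

    Because w may return negative values, responses with low PRM scores
    actively subtract from their answer's total.
    """
    totals = defaultdict(float)
    for a, s in zip(answers, prm_scores):
        totals[a] += w(s)
    return max(totals, key=totals.get)


# Toy weighting: scores below 0.5 map to negative weights (an assumed
# linear form, chosen only to illustrate the negative-weight mechanism).
w = lambda s: 2.0 * s - 1.0

answers = ["42", "42", "17", "17", "17"]
scores = [0.9, 0.8, 0.3, 0.2, 0.4]
print(majority_vote(answers))                   # "17": three low-confidence votes win
print(weighted_aggregate(answers, scores, w))   # "42": low scores subtract from "17"
```

The example shows why negative weights matter: majority voting is swayed by three weakly supported "17" responses, while the weighted rule lets their poor PRM scores cancel them out.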
Problem

Research questions and friction points this paper is trying to address.

Optimally combining LLM and PRM signals for test-time scaling
Addressing when majority voting outperforms PRM-based response selection
Developing efficient aggregation strategies to reduce computation costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed a weighted aggregation framework for LLM and PRM signals
Proposed efficient pre-computation methods to calibrate weighting functions
Achieved performance surpassing vanilla weighted majority voting while using only 21.3% of the computation
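The calibration idea listed above can be sketched as a one-time pre-computation on held-out data. This is an assumed recipe, not the paper's exact procedure: bin PRM scores on a calibration set and use the smoothed log-odds of correctness in each bin as the weight, so bins where responses are usually wrong naturally receive negative weights.

```python
# Sketch: pre-computing a score-to-weight function from calibration data.
# The log-odds-per-bin recipe is an illustrative assumption.
import math


def calibrate_weights(prm_scores, correct, n_bins=10, smoothing=1.0):
    """Return a function mapping a PRM score in [0, 1] to a log-odds weight."""
    hits = [0.0] * n_bins
    totals = [0.0] * n_bins
    for s, c in zip(prm_scores, correct):
        b = min(int(s * n_bins), n_bins - 1)
        totals[b] += 1
        hits[b] += c

    def w(s):
        b = min(int(s * n_bins), n_bins - 1)
        # Laplace smoothing keeps empty or extreme bins finite.
        p = (hits[b] + smoothing) / (totals[b] + 2 * smoothing)
        return math.log(p / (1.0 - p))

    return w


# Toy calibration set: high PRM scores mostly correct, low scores mostly wrong.
scores = [0.95, 0.9, 0.85, 0.15, 0.1, 0.05]
correct = [1, 1, 1, 0, 0, 0]
w = calibrate_weights(scores, correct, n_bins=2)
print(w(0.9) > 0, w(0.1) < 0)  # True True
```

Once `w` is fit offline, test-time selection only evaluates a lookup per response, which is what makes the aggregation cheap relative to scaling up sampling.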