AI Summary
This work proposes PRISM, a framework that addresses unstable post-training of large language models when neither human annotations nor verifiable reward signals are available. PRISM integrates a process reward model (PRM) with the model's self-confidence signals, rather than relying solely on internal consistency metrics such as entropy. By combining unsupervised post-training with this PRM-guided mechanism, PRISM mitigates confidence inflation. Experimental results show that PRISM improves both training stability and downstream task performance, offering a new approach to model alignment in label-scarce settings.
Abstract
Current techniques for post-training Large Language Models (LLMs) rely either on costly human supervision or on external verifiers to boost performance on tasks such as mathematical reasoning and code generation. However, as LLMs improve at problem-solving, further gains will increasingly require high-quality solutions to difficult problems that are not available to humans. As a result, learning from unlabeled data is becoming increasingly attractive to the research community. Existing methods extract a learning signal from a model's consistency, either through majority voting or by converting the model's internal confidence into a reward. Although internal consistency metrics such as entropy or self-certainty require no human intervention, as we show in this work, they are unreliable signals for large-scale and long-term training. To address this unreliability, we propose PRISM, a unified training framework that uses a Process Reward Model (PRM) to guide learning alongside the model's internal confidence in the absence of ground-truth labels. We show that effectively combining a PRM with self-certainty can lead to both stable training and better test-time performance, while also keeping the model's internal confidence in check. Code is available at https://github.com/ghimiremukesh/PRISM.
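To make the signals discussed above concrete, the sketch below shows one common way to compute an entropy-based internal-confidence signal from next-token logits, and a simple weighted blend of an external PRM score with that confidence. This is a minimal illustration, not the paper's implementation: the functions `mean_token_entropy` and `blended_reward` and the weight `alpha` are hypothetical names and choices for exposition only.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the vocabulary (last) axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_token_entropy(logits):
    """Average per-token entropy of the next-token distributions.

    One of the internal-consistency signals the abstract mentions:
    low entropy = high model confidence. `logits` has shape
    (num_tokens, vocab_size).
    """
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def blended_reward(prm_score, confidence, alpha=0.5):
    """Hypothetical blend of an external PRM score with an internal
    confidence signal; `alpha` is an illustrative weight, not a value
    taken from the paper."""
    return alpha * prm_score + (1.0 - alpha) * confidence

# A sharply peaked distribution has lower entropy (higher confidence)
# than a flat one over the same vocabulary.
sharp = np.array([[10.0, 0.0, 0.0, 0.0]])
flat = np.zeros((1, 4))
print(mean_token_entropy(sharp) < mean_token_entropy(flat))  # True
```

A flat distribution over a vocabulary of size V attains the maximum entropy log V, so the entropy signal is bounded and easy to normalize before blending it with a PRM score.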