From the Inside Out: Progressive Distribution Refinement for Confidence Calibration

📅 2026-03-17
🤖 AI Summary
This work addresses the distributional shift between the training and testing phases in reinforcement learning, as well as the reward hacking induced by voting-based test-time training. To mitigate these issues, the authors propose a progressive self-reward optimization mechanism that incorporates a prior over the model's confidence distribution, replacing conventional single-query rollout strategies. A diversity-oriented penalty term is also introduced to suppress reward manipulation. By jointly optimizing model capability and self-generated reward signals, the method significantly improves performance across multiple benchmarks and model architectures, while also improving confidence calibration and generalization.

📝 Abstract
Leveraging a model's internal information as the self-reward signal in Reinforcement Learning (RL) has received extensive attention due to its label-free nature. While prior work has made significant progress in applying Test-Time Scaling (TTS) strategies to RL, the discrepancy in internal information between testing and training remains inadequately addressed. Moreover, Test-Time Training built on voting-based TTS strategies often suffers from reward hacking. To address these issues, we propose DistriTTRL, which leverages a prior over the model's confidence distribution during RL to progressively optimize the reward signal, rather than relying solely on single-query rollouts. Additionally, we mitigate the consistent reward hacking caused by voting-based TTS strategies through diversity-targeted penalties. Benefiting from this training mechanism, in which model capability and the self-reward signal complement each other, and from the mitigation of reward hacking, DistriTTRL achieves significant performance improvements across multiple models and benchmarks.
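The reward-hacking failure mode the abstract describes arises when all rollouts converge on one answer, so the majority vote always rewards itself. A minimal sketch of the general idea (not the paper's actual algorithm; the function name, penalty form, and weight are hypothetical) is a majority-vote self-reward with a penalty that grows as answer diversity collapses:

```python
from collections import Counter
import math

def self_rewards(answers, penalty_weight=0.5):
    """Toy voting-based self-reward with a diversity-targeted penalty.

    Each rollout's answer is rewarded for agreeing with the majority
    vote; a penalty proportional to how concentrated the vote is
    discourages the degenerate consensus that enables reward hacking.
    This is an illustrative sketch, not DistriTTRL itself.
    """
    n = len(answers)
    counts = Counter(answers)
    majority, _ = counts.most_common(1)[0]
    # Normalized entropy of the answer distribution:
    # 1.0 = maximally diverse, 0.0 = every rollout gave the same answer.
    probs = [c / n for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(n) if n > 1 else 1.0
    diversity = entropy / max_entropy
    # Penalize low-diversity (potentially hacked) consensus.
    penalty = penalty_weight * (1.0 - diversity)
    return [float(a == majority) - penalty for a in answers]
```

With four rollouts answering `["42", "42", "7", "42"]`, the three majority answers receive the same positive reward while the outlier is penalized; if all four agree, every reward is discounted by the full penalty, so a unanimous but unverified consensus is no longer a free reward signal.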
Problem

Research questions and friction points this paper is trying to address.

confidence calibration
reinforcement learning
test-time scaling
reward hacking
distribution discrepancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

confidence calibration
test-time training
reward hacking
distribution refinement
reinforcement learning