🤖 AI Summary
Test-time reinforcement learning (TTRL) often relies on unreliable synthetic signals; e.g., majority voting may converge to spurious answers that are high-frequency but incorrect.
Method: This paper proposes a label-free, self-consistent learning framework: a single model serves jointly as Solver (answering the question) and Reframer (paraphrasing it). High-quality pseudo-labels are obtained by enforcing answer consistency between the original and paraphrased questions, and harmonic-mean aggregation combines answer frequencies across the two views, mitigating bias toward popular yet erroneous answers. Input reconstruction is incorporated as a self-supervised auxiliary task, enabling stability-driven signal calibration across the original and reframed views.
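The harmonic-mean aggregation step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of relative frequencies, and the toy answer lists are all assumptions for demonstration. The key property is that the harmonic mean of two per-view frequencies is near zero whenever an answer is rare in either view, so only answers stable under reframing score highly.

```python
from collections import Counter

def harmonic_pseudo_label(orig_answers, reframed_answers):
    """Select the pseudo-label whose frequency is high in BOTH views.

    Illustrative sketch: count relative answer frequencies per view,
    then score each candidate by the harmonic mean of the two
    frequencies. An answer frequent in only one view scores ~0.
    """
    f_orig = Counter(orig_answers)
    f_ref = Counter(reframed_answers)
    n_orig, n_ref = len(orig_answers), len(reframed_answers)

    def score(ans):
        p = f_orig[ans] / n_orig   # frequency under the original question
        q = f_ref[ans] / n_ref     # frequency under the paraphrase
        if p == 0.0 or q == 0.0:
            return 0.0             # harmonic mean vanishes if either is zero
        return 2.0 * p * q / (p + q)

    return max(set(f_orig) | set(f_ref), key=score)

# Toy example: "42" dominates the original view (a view-dependent,
# spurious answer) but collapses under reframing; "7" is stable
# across both views and is selected as the pseudo-label.
orig = ["42", "42", "42", "7", "7"]
reframed = ["7", "7", "7", "13", "42"]
label = harmonic_pseudo_label(orig, reframed)  # → "7"
```

Note that plain majority voting over the pooled samples would still pick "7" here by a narrow margin, but on the original view alone it would pick "42"; the harmonic mean makes the cross-view stability requirement explicit rather than incidental.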
Contribution/Results: Evaluated across 30 benchmark settings, the method ranks first in 28 of them, with zero training failures. It improves both accuracy and robustness, setting a new state of the art for label-free TTRL.
📄 Abstract
Test-time reinforcement learning (TTRL) offers a label-free paradigm for adapting models using only synthetic signals at inference, but its success hinges on constructing reliable learning signals. Standard approaches such as majority voting often collapse to spurious yet popular answers. We introduce Self-Harmony, a framework built on a simple intuition: the correct answer should remain stable across both an original question and its paraphrase. Self-Harmony operationalizes this by employing a single model in two complementary roles: a Solver to produce answers and a Reframer to rephrase the input. Based on this, we further propose a pseudo-label method: instead of majority voting, it aggregates answer frequencies across these original and reframed views using the harmonic mean. This process naturally selects for solutions stable under reframing, thereby avoiding the common trap of favoring view-dependent, spurious answers. Crucially, this requires no human supervision or auxiliary models. Across diverse reasoning benchmarks, Self-Harmony achieves state-of-the-art results in the label-free test-time setting, ranking first in 28 of 30 settings across multiple methods. Beyond accuracy, it demonstrates unprecedented robustness, with zero training failures in all experiments, underscoring its stability and reliability.