Self-Harmony: Learning to Harmonize Self-Supervision and Self-Play in Test-Time Reinforcement Learning

📅 2025-11-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Test-time reinforcement learning (TTRL) often relies on unreliable synthetic signals: majority voting, for example, can converge to high-frequency but incorrect answers. Method: This paper proposes a label-free self-consistent learning framework in which a single model serves jointly as solver and reconstructor. High-quality pseudo-labels are generated by enforcing answer consistency between the original and paraphrased questions, and harmonic-mean aggregation combines response frequencies across these views, mitigating bias toward popular yet erroneous answers. Input reconstruction is incorporated as a self-supervised auxiliary task, enabling stability-driven signal calibration across original and reconstructed views. Contribution/Results: Evaluated across 30 settings, the method achieves state-of-the-art performance in 28 of them, with zero training failures. It significantly improves both accuracy and robustness, establishing a new benchmark for label-free TTRL.

Technology Category

Application Category

πŸ“ Abstract
Test-time reinforcement learning (TTRL) offers a label-free paradigm for adapting models using only synthetic signals at inference, but its success hinges on constructing reliable learning signals. Standard approaches such as majority voting often collapse to spurious yet popular answers. We introduce Self-Harmony, a framework built on a simple intuition: the correct answer should remain stable across both an original question and its paraphrase. Self-Harmony operationalizes this by employing a single model in two complementary roles: a Solver to produce answers and a Reframer to rephrase the input. Based on this, we further propose a pseudo-label method: instead of majority voting, it aggregates answer frequencies across these original and reframed views using the harmonic mean. This process naturally selects for solutions stable under reframing, thereby avoiding the common trap of favoring view-dependent, spurious answers. Crucially, this requires no human supervision or auxiliary models. Across diverse reasoning benchmarks, Self-Harmony achieves state-of-the-art results in the label-free test-time setting, ranking first in 28 of 30 settings across multiple methods. Beyond accuracy, it demonstrates unprecedented robustness, with zero training failures in all experiments, underscoring its stability and reliability.
Problem

Research questions and friction points this paper is trying to address.

Constructing reliable learning signals without human supervision
Avoiding spurious answers from majority voting methods
Achieving stable solutions across original and paraphrased inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single model acts as both Solver and Reframer
Harmonic mean aggregates answers across paraphrased views
No human supervision or auxiliary models required
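The harmonic-mean aggregation described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the function name and the sample answers are hypothetical. An answer absent from either view scores zero, so only answers stable across both the original and reframed question can be selected.

```python
from collections import Counter

def harmonic_pseudo_label(original_answers, reframed_answers):
    """Select the answer with the highest harmonic mean of its
    frequencies in the original-question and reframed-question views."""
    f_orig = Counter(original_answers)
    f_ref = Counter(reframed_answers)

    def score(answer):
        p, q = f_orig[answer], f_ref[answer]
        # Harmonic mean is 0 if the answer is missing from either view.
        return 0.0 if p == 0 or q == 0 else 2 * p * q / (p + q)

    return max(set(f_orig) | set(f_ref), key=score)

# "42" dominates the original view (6 + 1 = 7 pooled votes), so plain
# majority voting would pick it; "7" is less frequent overall (5 votes)
# but stable across both views, so the harmonic mean prefers it.
orig = ["42", "42", "42", "42", "42", "42", "7", "7"]
ref = ["7", "7", "7", "13", "13", "42"]
print(harmonic_pseudo_label(orig, ref))  # "7"
```

The contrast with pooled majority voting in the example shows why this favors view-stable answers: "42" scores 2·6·1/7 ≈ 1.71 while "7" scores 2·2·3/5 = 2.4.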
🔎 Similar Papers
No similar papers found.
Ru Wang
The University of Tokyo
Wei Huang
RIKEN Center for Advanced Intelligence Project
Qi Cao
The University of Tokyo
Yusuke Iwasawa
The University of Tokyo
deep learning, transfer learning, foundation model, meta learning
Yutaka Matsuo
The University of Tokyo
Jiaxian Guo
Google Research
Efficient Foundation Model, Reinforcement Learning, Causality