🤖 AI Summary
This work addresses the challenge of test-time adaptation in novel domains where verifiable reward signals are unavailable, a setting in which existing methods often overfit to superficial patterns due to static query sets. The authors propose TTVS, a framework that enables effective adaptation using only unlabeled test data by dynamically generating semantically equivalent query variants and employing a hybrid exploration strategy driven by both accuracy and consistency to guide online model self-evolution. Notably, TTVS requires no high-quality annotated rewards and consistently outperforms current test-time adaptation approaches across eight mainstream reasoning models, even surpassing supervised reinforcement learning baselines trained with large-scale labeled data.
📝 Abstract
Despite significant advances in Large Reasoning Models (LRMs) driven by reinforcement learning with verifiable rewards (RLVR), this paradigm is fundamentally limited in specialized or novel domains where such supervision is prohibitively expensive or unavailable, posing a key challenge for test-time adaptation. While existing test-time methods offer a potential solution, they are constrained by learning from static query sets, risking overfitting to superficial textual patterns. To address this gap, we introduce Test-Time Variational Synthesis (TTVS), a novel framework that enables LRMs to self-evolve by dynamically augmenting the training stream from unlabeled test queries. TTVS comprises two synergistic modules: (1) Online Variational Synthesis, which transforms static test queries into a dynamic stream of diverse, semantically equivalent variations, forcing the model to learn the underlying problem logic rather than superficial patterns; (2) Test-time Hybrid Exploration, which balances accuracy-driven exploitation with consistency-driven exploration across the synthetic variants. Extensive experiments show that TTVS yields superior performance across eight model architectures. Notably, using only unlabeled test-time data, TTVS not only surpasses other test-time adaptation methods but also outperforms state-of-the-art supervised RL-based techniques trained on vast, high-quality labeled data.
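The abstract does not spell out how the hybrid exploration signal is computed, so the following is only a minimal illustrative sketch. It assumes (this is our assumption, not the paper's stated method) that with no labels at test time, "accuracy" is proxied by majority-vote agreement over sampled answers, while "consistency" measures how many semantically equivalent variants agree with the pooled majority; the function name `hybrid_score` and the mixing weight `alpha` are hypothetical.

```python
from collections import Counter

def hybrid_score(answers_per_variant, alpha=0.5):
    """Hypothetical hybrid reward for one test query.

    answers_per_variant: a list of answer lists, one list of sampled
    answers per synthetic query variant. No ground-truth labels are
    used, mirroring the unlabeled test-time setting.
    """
    # Exploitation proxy: fraction of all sampled answers that match the
    # pooled majority answer (majority vote stands in for accuracy).
    pooled = [a for variant in answers_per_variant for a in variant]
    majority, majority_count = Counter(pooled).most_common(1)[0]
    exploit = majority_count / len(pooled)

    # Exploration proxy: fraction of variants whose own majority answer
    # agrees with the pooled majority (cross-variant consistency).
    per_variant_majorities = [
        Counter(variant).most_common(1)[0][0] for variant in answers_per_variant
    ]
    explore = sum(m == majority for m in per_variant_majorities) / len(
        per_variant_majorities
    )

    # Weighted combination of the two signals; alpha is an assumed knob.
    return alpha * exploit + (1 - alpha) * explore
```

For example, if one variant's samples are `["A", "A", "B"]` and another's are `["A", "A", "A"]`, the pooled majority is `"A"` (5 of 6 samples) and both variant-level majorities agree with it, so with `alpha=0.5` the score is `0.5 * 5/6 + 0.5 * 1.0`.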