🤖 AI Summary
This work addresses the inefficiency and training instability of existing test-time adaptation methods on heterogeneous reasoning tasks, which stem from their use of a uniform optimization objective. To overcome these limitations, we propose a self-curriculum framework guided by instance difficulty and consensus. Sample difficulty is dynamically assessed through the consistency of sampled reasoning trajectories: high-consensus instances are refined via supervised fine-tuning on pseudo-labels, while low-consensus instances are optimized using reinforcement learning augmented with consensus-based regularization. Our approach is the first to integrate instance-level epistemic uncertainty and consensus mechanisms into test-time adaptation, enabling difficulty-aware dynamic strategy allocation. Experiments across multiple mathematical and general reasoning benchmarks demonstrate significant improvements over state-of-the-art methods, achieving higher accuracy, lower variance, and substantially reduced computational cost and training time.
📝 Abstract
Test-time adaptation offers a promising avenue for improving reasoning performance in large language models without additional supervision, but existing approaches often apply a uniform optimization objective across all inputs, leading to inefficient or unstable adaptation on heterogeneous reasoning problems. We propose DiSCTT, a difficulty-aware, consensus-guided self-curriculum framework that dynamically allocates test-time optimization strategies based on instance-level epistemic uncertainty estimated from agreement among sampled reasoning trajectories. Inputs with high consensus are consolidated via supervised fine-tuning using majority-agreed solutions as pseudo-labels, while low-consensus inputs are optimized via reinforcement learning with a consensus-regularized objective that encourages diversity under relevance constraints. Across a broad suite of mathematical and general reasoning benchmarks, DiSCTT consistently outperforms strong test-time adaptation baselines, achieving higher accuracy with reduced variance and substantially lower computational cost and wall-clock training time. These results demonstrate that explicitly accounting for instance difficulty and uncertainty enables more stable, efficient, and effective test-time adaptation for reasoning models.
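The core routing idea above — estimate consensus from sampled trajectories, then dispatch each instance to pseudo-label SFT or consensus-regularized RL — can be sketched in a few lines. This is a minimal illustration under assumed names and an assumed consensus threshold (`consensus`, `route`, `threshold=0.7` are hypothetical, not the paper's actual implementation):

```python
# Hedged sketch of difficulty-aware strategy routing: all names and the
# threshold value are illustrative assumptions, not the paper's API.
from collections import Counter


def consensus(answers):
    """Return the modal answer and the fraction of trajectories agreeing on it."""
    counts = Counter(answers)
    top_answer, top_count = counts.most_common(1)[0]
    return top_answer, top_count / len(answers)


def route(answers, threshold=0.7):
    """Route one instance: high consensus -> SFT on the pseudo-label,
    low consensus -> RL with a consensus-regularized objective."""
    pseudo_label, score = consensus(answers)
    if score >= threshold:
        return ("sft", pseudo_label)  # consolidate majority-agreed solution
    return ("rl", None)               # explore under consensus regularization
```

For example, five sampled trajectories yielding answers `["42", "42", "42", "17", "42"]` have consensus 0.8 and would be routed to SFT with pseudo-label `"42"`, while four mutually disagreeing answers would fall below the threshold and be routed to RL.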