TTCS: Test-Time Curriculum Synthesis for Self-Evolving

📅 2026-01-30
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses two limitations of existing test-time training methods: they struggle to generate high-quality pseudo-labels on complex reasoning tasks, and the limited size of test sets makes continuous online updates unstable. To overcome these challenges, we propose the TTCS framework, which introduces a novel test-time curriculum synthesis mechanism. TTCS employs a co-evolving question synthesizer and reasoning solver to dynamically construct a curriculum of problems ordered from easy to hard, guided by a self-consistency reward that drives model self-improvement. This approach keeps the generated problems aligned with the model's evolving capabilities, significantly enhancing training stability and generalization. Extensive experiments demonstrate substantial performance gains on multiple mathematical reasoning benchmarks, with consistent improvements across diverse large language model backbones and generalization to broader tasks.
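A minimal sketch of how an easy-to-hard ordering could be derived, using the solver's self-consistency rate as a difficulty proxy (the function names and the majority-fraction heuristic here are illustrative assumptions, not the paper's implementation):

```python
from collections import Counter

def order_curriculum(questions, sampled_answers):
    """Order questions easy-to-hard by the solver's consistency rate.

    sampled_answers[i] holds the final answers parsed from several
    sampled solver responses to questions[i]. A higher majority
    fraction means the solver answers more consistently, i.e. the
    question is easier for its current capability. Illustrative
    sketch only; the paper's curriculum construction may differ.
    """
    def consistency(answers):
        # Fraction of sampled responses agreeing with the majority answer.
        top_count = Counter(answers).most_common(1)[0][1]
        return top_count / len(answers)

    ranked = sorted(zip(questions, sampled_answers),
                    key=lambda qa: consistency(qa[1]), reverse=True)
    return [q for q, _ in ranked]
```

For example, a question answered "4, 4, 4" across three samples (consistency 1.0) would be placed before one answered "7, 3, 5" (consistency 1/3).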


πŸ“ Abstract
Test-Time Training offers a promising way to improve the reasoning ability of large language models (LLMs) by adapting the model using only the test questions. However, existing methods struggle with difficult reasoning problems for two reasons: raw test questions are often too difficult to yield high-quality pseudo-labels, and the limited size of test sets makes continuous online updates prone to instability. To address these limitations, we propose TTCS, a co-evolving test-time training framework. Specifically, TTCS initializes two policies from the same pretrained model: a question synthesizer and a reasoning solver. These policies evolve through iterative optimization: the synthesizer generates progressively challenging question variants conditioned on the test questions, creating a structured curriculum tailored to the solver's current capability, while the solver updates itself using self-consistency rewards computed from multiple sampled responses on both original test and synthetic questions. Crucially, the solver's feedback guides the synthesizer to generate questions aligned with the model's current capability, and the generated question variants in turn stabilize the solver's test-time training. Experiments show that TTCS consistently strengthens the reasoning ability on challenging mathematical benchmarks and transfers to general-domain tasks across different LLM backbones, highlighting a scalable path towards dynamically constructing test-time curricula for self-evolving. Our code and implementation details are available at https://github.com/XMUDeepLIT/TTCS.
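The self-consistency reward described in the abstract can be sketched as a majority vote over the final answers of multiple sampled responses: responses agreeing with the majority are rewarded, without any ground-truth label. This is a hedged simplification; the paper's exact reward formulation may differ.

```python
from collections import Counter

def self_consistency_reward(final_answers):
    """Majority-vote self-consistency reward.

    Given the final answers parsed from several sampled responses to
    one question, reward each response 1.0 if its answer matches the
    majority answer, else 0.0. No ground-truth label is needed, which
    is what makes this signal usable at test time.
    """
    if not final_answers:
        return []
    majority_answer, _ = Counter(final_answers).most_common(1)[0]
    return [1.0 if a == majority_answer else 0.0 for a in final_answers]
```

For instance, `self_consistency_reward(["4", "4", "5"])` returns `[1.0, 1.0, 0.0]`.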
Problem

Research questions and friction points this paper is trying to address.

Test-Time Training
Reasoning Ability
Pseudo-Labels
Curriculum Learning
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

test-time training
curriculum synthesis
self-evolving LLMs
question generation
self-consistency reward