🤖 AI Summary
To address the limitations of large language models (LLMs) in long-context reasoning, specifically the scarcity of human annotations and the unreliability of reward signals, this paper proposes SPELL, an unsupervised self-play reinforcement learning framework in which a single model cycles through three roles: a Questioner, a Responder, and a Verifier. These roles engage in a closed-loop adversarial game, combining semantic-equivalence verification, an adaptive-difficulty curriculum, and capability-aware dynamic reward shaping to enable continual self-improvement. Crucially, the method requires neither human annotations nor programmable ground-truth verifiers, improving scalability and generality. Evaluated on six long-context benchmarks, SPELL outperforms same-scale baselines fine-tuned on large annotated datasets, and applied to Qwen3-30B-A3B-Thinking it yields an average pass@8 gain of 7.6 points. The core contribution is the integration of multi-role self-play with semantic-verification-driven curriculum learning for long-context reasoning, establishing an annotation-free, reward-robust paradigm for LLM optimization.
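The three-role loop is easiest to see as pseudocode. Below is a minimal Python sketch of one self-play step, assuming a single shared model is prompted into each role in turn; `call_model`, the role prompts, and the string-matched verdict are illustrative placeholders, not the paper's implementation.

```python
# A minimal sketch of one SPELL-style self-play step. call_model and the
# role prompts are hypothetical stand-ins for a single shared policy model.

def call_model(role_prompt: str, context: str) -> str:
    """Stand-in for querying the one LLM that plays all three roles."""
    return f"<model output for: {role_prompt}>"  # stub; a real system decodes here

def self_play_step(document: str) -> float:
    # Questioner: pose a question about the raw document plus a reference
    # answer (the paper has these generated together; split here for clarity).
    question = call_model("questioner: write a question about the document", document)
    reference = call_model("questioner: write the reference answer", document)

    # Responder: answer the question using only the document.
    answer = call_model(f"responder: answer '{question}'", document)

    # Verifier: judge semantic equivalence of answer and reference; the
    # verdict becomes the reward signal driving RL updates for every role.
    verdict = call_model(
        f"verifier: are '{answer}' and '{reference}' equivalent? yes/no",
        document,
    )
    return 1.0 if verdict.strip().lower().startswith("yes") else 0.0

print(self_play_step("raw long document text ..."))
```

Because all three roles share one set of weights, improvement in any role (e.g., sharper verification) feeds back into the others through the same model update.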
📝 Abstract
Progress in long-context reasoning for large language models (LLMs) has lagged behind other recent advances. This gap arises not only from the intrinsic difficulty of processing long texts, but also from the scarcity of reliable human annotations and programmatically verifiable reward signals. In this paper, we propose SPELL, a multi-role self-play reinforcement learning framework that enables scalable, label-free optimization for long-context reasoning. SPELL integrates three cyclical roles (questioner, responder, and verifier) within a single model to enable continual self-improvement. The questioner generates questions from raw documents paired with reference answers; the responder learns to solve these questions based on the documents; and the verifier evaluates semantic equivalence between the responder's output and the questioner's reference answer, producing reward signals to guide continual training. To stabilize training, we introduce an automated curriculum that gradually increases document length and a reward function that adapts question difficulty to the model's evolving capabilities. Extensive experiments on six long-context benchmarks show that SPELL consistently improves performance across diverse LLMs and outperforms equally sized models fine-tuned on large-scale annotated data. Notably, SPELL achieves an average 7.6-point gain in pass@8 on the strong reasoning model Qwen3-30B-A3B-Thinking, raising its performance ceiling and showing promise for scaling to even more capable models.
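The abstract's two stabilizers, the length curriculum and the capability-adaptive reward, might look roughly like the sketch below. The geometric growth schedule, the 0.5 pass-rate target, and all constants are assumptions for illustration; the paper's exact schedule and reward formulas may differ.

```python
# Hedged sketch of the two stabilizers: a document-length curriculum and a
# difficulty-adaptive reward. All constants here are illustrative guesses.

def curriculum_length(step: int, start: int = 4_000, cap: int = 128_000,
                      growth: float = 1.5, every: int = 100) -> int:
    """Gradually increase the length of documents fed to the questioner."""
    return int(min(start * growth ** (step // every), cap))

def shaped_rewards(responder_pass_rate: float) -> tuple[float, float]:
    """Reward the questioner for questions at the responder's capability frontier.

    responder_pass_rate: fraction of sampled responder rollouts the verifier
    judged correct. A mid-range rate means the question is neither trivial
    nor hopeless for the model's current ability.
    """
    questioner_reward = 1.0 - 2.0 * abs(responder_pass_rate - 0.5)
    responder_reward = responder_pass_rate  # e.g., mean verifier verdict
    return questioner_reward, responder_reward

print(curriculum_length(step=450))              # -> 20250 tokens
print(shaped_rewards(responder_pass_rate=0.5))  # peak questioner reward
```

Under this kind of shaping, the questioner is pushed toward questions the responder solves only some of the time, which keeps training difficulty tracking the model's evolving capability.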