SPICE: Self-Play In Corpus Environments Improves Reasoning

📅 2025-10-28
🤖 AI Summary
Existing ungrounded self-play methods exhibit diminishing returns in sustained reasoning improvement. To address this, the paper proposes a corpus-based self-play framework that treats large-scale real-world documents as a dynamic environment: a "Challenger" agent mines the corpus to generate diverse, grounded reasoning tasks, while a "Reasoner" solves them, with each role's performance supplying the training signal for the other in a closed adversarial loop. Corpus grounding gives self-play a near-inexhaustible source of tasks and an automatic curriculum that tracks the Reasoner's capability frontier. The method combines document retrieval, adversarial task generation, and reinforcement learning over both roles of a single model. Evaluated on multiple mainstream model families, it achieves absolute gains of +8.9% in mathematical reasoning accuracy and +9.8% in general reasoning, surpassing the performance ceiling of ungrounded self-training and establishing a paradigm for sustained, self-driven model improvement.

📝 Abstract
Self-improving systems require environmental interaction for continuous adaptation. We introduce SPICE (Self-Play In Corpus Environments), a reinforcement learning framework where a single model acts in two roles: a Challenger that mines documents from a large corpus to generate diverse reasoning tasks, and a Reasoner that solves them. Through adversarial dynamics, the Challenger creates an automatic curriculum at the frontier of the Reasoner's capability, while corpus grounding provides the rich, near-inexhaustible external signal necessary for sustained improvement. Unlike existing ungrounded self-play methods that offer more limited benefits, SPICE achieves consistent gains across mathematical (+8.9%) and general reasoning (+9.8%) benchmarks on multiple model families. Our analysis reveals that document grounding is the key ingredient that enables SPICE to continuously generate its own increasingly challenging goals and achieve them, sustaining self-improvement.
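The dual-role loop described above can be sketched in miniature. Everything below is an illustrative assumption, not the paper's algorithm: `challenger_generate` and `reasoner_solve` stand in for prompted calls to the shared model, and the frontier-seeking reward (highest when the Reasoner's pass rate is near 0.5) is one plausible way to push tasks toward the capability frontier.

```python
import random

# Toy stand-in for the corpus environment (real SPICE mines a large
# document collection).
CORPUS = [
    "The derivative of x^2 is 2x.",
    "Paris is the capital of France.",
]

def challenger_generate(doc, rng):
    # Stand-in for the Challenger role: in the real system, the shared
    # model is prompted to write a grounded question with a verifiable
    # answer from the sampled document.
    return {"doc": doc, "question": f"Question grounded in: {doc}"}

def reasoner_solve(task, skill, rng):
    # Stand-in for the Reasoner role: succeeds with probability `skill`.
    return rng.random() < skill

def self_play_step(skill, rng, attempts=8):
    task = challenger_generate(rng.choice(CORPUS), rng)
    passes = sum(reasoner_solve(task, skill, rng) for _ in range(attempts))
    pass_rate = passes / attempts
    # Assumed frontier-seeking reward: the Challenger scores highest when
    # its task is neither trivial (pass rate 1) nor impossible (0).
    challenger_reward = 1.0 - abs(0.5 - pass_rate) * 2.0
    reasoner_reward = pass_rate
    return pass_rate, challenger_reward, reasoner_reward

# One step of the loop; in training, both roles' rewards would feed a
# reinforcement learning update of the single underlying model.
pass_rate, c_reward, r_reward = self_play_step(skill=0.6, rng=random.Random(0))
```

Under this reward shape, tasks the Reasoner always solves or never solves earn the Challenger nothing, so generation pressure stays at the frontier.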
Problem

Research questions and friction points this paper is trying to address.

Develops a self-play framework for continuous reasoning improvement through corpus interaction
Generates an automatic curriculum at the model's capability frontier using adversarial dynamics
Addresses the limitations of ungrounded self-play via document-grounded task generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses self-play reinforcement learning with dual agent roles
Generates an automatic curriculum from corpus document grounding
Achieves consistent gains across mathematical and general reasoning benchmarks
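One reason document grounding helps, as the summary notes, is that grounded tasks come with checkable answers. A minimal illustration, assuming a cloze-style generator (the paper's Challenger is a learned model; this stub is purely hypothetical): hide a span of a document, and the hidden span itself is the verifiable answer.

```python
import random

def cloze_task(doc, rng):
    # Hide one word of the document; the hidden word is the answer.
    words = doc.split()
    i = rng.randrange(len(words))
    answer = words[i]
    question = " ".join(words[:i] + ["____"] + words[i + 1:])
    return question, answer

def verify(prediction, answer):
    # Grounded tasks are self-verifying: compare against the hidden span,
    # so no human labels or external reward model are needed.
    return prediction.strip().lower() == answer.strip().lower()

rng = random.Random(42)
question, answer = cloze_task(
    "Water boils at 100 degrees Celsius at sea level", rng
)
```

The point is not the cloze format itself but that the corpus supplies both the task and its ground truth, which is what makes unlimited, automatically verifiable task generation possible.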