WIST: Web-Grounded Iterative Self-Play Tree for Domain-Targeted Reasoning Improvement

📅 2026-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing self-improvement methods for language models are often hindered by capability drift or constrained by static corpora, and thus lack openness and flexibility. This work proposes WIST, a novel framework that enables fully open-web iterative self-play training. WIST dynamically constructs domain-specific reasoning trees and integrates a Challenger–Solver adversarial mechanism, a verifiable reward system, Bayesian posterior updating of node utilities, and adaptive curriculum exploration to steer reasoning improvement toward target domains, without relying on any pre-defined corpus. Experiments demonstrate that WIST consistently improves performance across four base models, with overall gains reaching +9.8 and +9.7 points, and delivers domain-steerable improvements of +14.79 in medical reasoning and +5.28 on PhyBench, significantly outperforming baselines based on endogenous evolution and corpus-constrained self-play.
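The Challenger–Solver mechanism with verifiable rewards can be sketched as a simple loop: the Challenger proposes questions with checkable gold answers, the Solver attempts them, and the Challenger is rewarded for questions of intermediate difficulty (a learnability signal). All function names and the reward shape below are illustrative assumptions, not the paper's actual implementation:

```python
import random

def verify(answer, gold):
    # Verifiable reward: exact match against a known gold answer.
    return 1.0 if answer.strip().lower() == gold.strip().lower() else 0.0

def challenger_reward(solver_success_rate):
    # Assumed learnability signal: questions the Solver sometimes (but not
    # always) solves are most useful, so reward peaks at a 50% success rate.
    return 1.0 - abs(solver_success_rate - 0.5) * 2.0

def self_play_round(questions, solver, n_attempts=4):
    # One Challenger-Solver round: estimate the Solver's success rate on
    # each candidate question and score it by learnability.
    scored = []
    for q, gold in questions:
        successes = sum(verify(solver(q), gold) for _ in range(n_attempts))
        rate = successes / n_attempts
        scored.append((q, rate, challenger_reward(rate)))
    return scored

# Stub solver that only knows one fact, to make the dynamics visible.
def stub_solver(question):
    return "4" if "2+2" in question else random.choice(["yes", "no"])

rounds = self_play_round([("What is 2+2?", "4"), ("Is P=NP?", "unknown")], stub_solver)
```

Under this toy scoring, the always-solved question and the never-solved question both receive zero Challenger reward, pushing question generation toward the Solver's frontier.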

📝 Abstract
Recent progress in reinforcement learning with verifiable rewards (RLVR) offers a practical path to self-improvement of language models, but existing methods face a key trade-off: endogenous self-play can drift over iterations, while corpus-grounded approaches rely on curated data environments. We present WIST, a Web-grounded Iterative Self-play Tree framework for domain-targeted reasoning improvement that learns directly from the open web without requiring any pre-arranged domain corpus. WIST incrementally expands a domain tree for exploration, and retrieves and cleans a path-consistent web corpus to construct a controllable training environment. It then performs Challenger–Solver self-play with verifiable rewards, and feeds learnability signals back to update node posteriors and guide subsequent exploration through an adaptive curriculum. Across four backbones, WIST consistently improves over the base models and typically outperforms both purely endogenous self-evolution and corpus-grounded self-play baselines, with overall gains reaching +9.8 (Qwen3-4B-Base) and +9.7 (OctoThinker-8B). WIST is also domain-steerable, improving Qwen3-8B-Base by +14.79 in medicine and Qwen3-4B-Base by +5.28 on PhyBench. Ablations further confirm the importance of WIST's key components for stable open-web learning. Our code is available at https://github.com/lfy-123/WIST.
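The abstract's "node posteriors" and "adaptive curriculum" could be realized, for example, as a Beta–Bernoulli posterior per domain-tree node over binary learnability signals, with Thompson sampling choosing which node to explore next. This is a minimal sketch under that assumption; the class and node names are hypothetical, not taken from the paper:

```python
import random

class DomainNode:
    # One node in the domain tree; its utility is tracked as a Beta posterior
    # over the probability that training on this node yields learning gains.
    def __init__(self, name, alpha=1.0, beta=1.0):
        self.name = name
        self.alpha = alpha  # pseudo-count of positive learnability signals
        self.beta = beta    # pseudo-count of negative learnability signals

    def update(self, learnable):
        # Bayesian posterior update from one binary learnability signal.
        if learnable:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def posterior_mean(self):
        return self.alpha / (self.alpha + self.beta)

def pick_node(nodes, rng=random):
    # Adaptive curriculum via Thompson sampling: draw one sample from each
    # node's Beta posterior and explore the node with the highest draw.
    return max(nodes, key=lambda n: rng.betavariate(n.alpha, n.beta))

# Hypothetical medical subdomains, echoing the paper's medicine setting.
nodes = [DomainNode("cardiology"), DomainNode("pharmacology")]
for _ in range(20):
    nodes[0].update(True)   # cardiology keeps producing learnable tasks
    nodes[1].update(False)  # pharmacology tasks are currently unlearnable
```

After these updates the posterior mean for "cardiology" approaches 1, so Thompson sampling concentrates exploration there while still occasionally revisiting the other node.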
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning with verifiable rewards
self-play
domain-targeted reasoning
open-web learning
language model self-improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Web-grounded learning
Iterative self-play
Domain-targeted reasoning
Verifiable rewards
Adaptive curriculum