Search Self-play: Pushing the Frontier of Agent Capability without Supervision

📅 2025-10-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing RL with Verifiable Rewards (RLVR) approaches rely on manually crafted tasks and answers, incurring high annotation costs and poor scalability; task synthesis methods, meanwhile, lack controllable difficulty. This paper proposes the Search Self-Play (SSP) framework for unsupervised agent self-play training: a single agent jointly serves as both task generator and solver. Task solvability and answer correctness are ensured via Retrieval-Augmented Generation (RAG), while a closed-loop training system is established through multi-turn search engine queries, trajectory knowledge collection, and external verification. SSP enables automatic, difficulty-controllable task evolution and continuous capability improvement. It significantly outperforms baselines across multiple search-oriented benchmarks, supports both zero-shot initialization and continual reinforcement learning, and requires no human annotation—achieving strong scalability and practical applicability.

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has become the mainstream technique for training LLM agents. However, RLVR highly depends on well-crafted task queries and corresponding ground-truth answers to provide accurate rewards, which requires massive human effort and hinders the scaling of RL, especially in agentic scenarios. Although a few recent works explore task synthesis methods, the difficulty of the generated agentic tasks can hardly be controlled to provide effective RL training advantages. To achieve agentic RLVR with higher scalability, we explore self-play training for deep search agents, in which the learning LLM utilizes multi-turn search engine calling and acts simultaneously as both a task proposer and a problem solver. The task proposer aims to generate deep search queries with well-defined ground-truth answers and increasing task difficulty. The problem solver tries to handle the generated search queries and output correct answer predictions. To ensure that each generated search query has an accurate ground truth, we collect all the search results from the proposer's trajectory as external knowledge, then conduct retrieval-augmented generation (RAG) to test whether the proposed query can be correctly answered when all necessary search documents are provided. In this search self-play (SSP) game, the proposer and the solver co-evolve their agent capabilities through both competition and cooperation. With substantial experimental results, we find that SSP can significantly improve search agents' performance uniformly on various benchmarks without any supervision, under both from-scratch and continual RL training setups. The code is at https://github.com/Alibaba-Quark/SSP.
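The abstract's verifiability check can be summarized as: a proposed task is accepted only if it can be answered correctly via RAG over the documents the proposer itself retrieved. Below is a minimal sketch of that filter, assuming hypothetical helper names (`verify_task`, `rag_answer_fn`); the real system calls an LLM and a live search engine rather than the toy lookup used here.

```python
# Sketch of SSP's RAG-based ground-truth verification (hypothetical names).
# A proposed (question, answer) pair is kept only if the question is
# answerable from the documents collected along the proposer's trajectory.

def verify_task(question, gold_answer, trajectory_docs, rag_answer_fn):
    """Accept a proposed task only if RAG over the proposer's own
    retrieved documents reproduces the gold answer."""
    predicted = rag_answer_fn(question, trajectory_docs)
    return predicted.strip().lower() == gold_answer.strip().lower()

# Toy stand-in for the RAG call: look the answer up in the provided docs.
# In the actual framework this would be an LLM answering with the docs
# injected into its context.
def toy_rag(question, docs):
    for doc in docs:
        if question in doc:
            return doc[question]
    return ""

docs = [{"Who wrote Hamlet?": "Shakespeare"}]
print(verify_task("Who wrote Hamlet?", "Shakespeare", docs, toy_rag))  # answerable: kept
print(verify_task("Who wrote Faust?", "Goethe", docs, toy_rag))        # not in docs: rejected
```

Tasks that fail this check are discarded before solver training, which is what guarantees every surviving query has a verifiable reward signal.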
Problem

Research questions and friction points this paper is trying to address.

Overcoming RLVR's dependency on human-crafted tasks and answers
Controlling difficulty of generated agentic tasks for effective RL training
Enabling scalable self-play training for deep search agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-play training for deep search agents
Task proposer generates queries with increasing difficulty
Retrieval-augmented generation ensures accurate ground truth
Hongliang Lu
Quark LLM Team, Alibaba Group
Yuhang Wen
Sun Yat-sen University
computer vision, agent, reinforcement learning
Pengyu Cheng
Alibaba Group
machine learning, natural language processing
Ruijin Ding
Quark LLM Team, Alibaba Group
Haotian Xu
Quark LLM Team, Alibaba Group
Jiaqi Guo
Quark LLM Team, Alibaba Group
Chutian Wang
HKU/IC/USTB
Neuromorphic Imaging, Computational Imaging, Wavefront Sensing
Haonan Chen
Quark LLM Team, Alibaba Group
Xiaoxi Jiang
Quark LLM Team, Alibaba Group
Guanjun Jiang
Quark LLM Team, Alibaba Group