Beyond Stochastic Exploration: What Makes Training Data Valuable for Agentic Search

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing reinforcement learning–based search agents often suffer from inefficient reasoning trajectories and unstable training due to their reliance on random exploration and sparse rewards. This work proposes the Hierarchical Experience (HiExp) framework, which introduces structured hierarchical experience into agent training for the first time. HiExp extracts structured knowledge from raw trajectories through contrastive learning and multi-level clustering, then leverages this experience via alignment-aware training to guide the exploration process. This paradigm shift—from random exploration to experience-driven strategic search—significantly improves performance across multiple challenging search and mathematical reasoning benchmarks. Moreover, the approach demonstrates strong generalization capabilities across diverse tasks and algorithms.
📝 Abstract
Reinforcement learning (RL) has become an effective approach for advancing the reasoning capabilities of large language models (LLMs) through the strategic integration of external search engines. However, current RL-based search agents often rely on a process of stochastic exploration guided by carefully crafted outcome rewards, leading to inefficient reasoning trajectories and unstable training. To address these issues, we propose a novel framework, Hierarchical Experience (HiExp), to enhance the performance and training stability of search agents. Specifically, we extract empirical knowledge through contrastive analysis and a multi-level clustering mechanism, transforming raw reasoning trajectories into hierarchical experience knowledge. By leveraging experience-aligned training, we effectively regularize stochastic exploration, evolving it into a strategic and experience-driven search process. Extensive evaluations on multiple complex agentic search and mathematical reasoning benchmarks demonstrate that our approach not only achieves substantial performance gains but also exhibits strong cross-task and cross-algorithm generalization.
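The abstract describes extracting "hierarchical experience" by clustering reasoning trajectories at multiple granularities and contrasting them via outcome rewards, but gives no implementation details. As a rough, numpy-only illustration of the general idea (the `kmeans` helper, the two-level granularity, and the use of mean reward as a cluster-quality signal are all assumptions for this sketch, not the paper's method):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means over trajectory feature vectors (illustrative only)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each trajectory to its nearest cluster center.
        dists = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centers from current assignments.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def hierarchical_experience(X, rewards, levels=(2, 4)):
    """Cluster trajectories at several granularities and record each
    cluster's mean outcome reward, contrasting good and bad regions
    of trajectory space. Returns {level: {cluster_id: success_rate}}."""
    experience = {}
    for k in levels:
        labels = kmeans(X, k)
        experience[k] = {
            j: float(rewards[labels == j].mean())
            for j in range(k) if (labels == j).any()
        }
    return experience

# Hypothetical data: 40 trajectory embeddings with binary outcome rewards.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
rewards = (rng.random(40) > 0.5).astype(float)
exp = hierarchical_experience(X, rewards)
```

Clusters whose success rate diverges sharply from the overall mean would then be the natural candidates for distilling reusable "experience" to regularize exploration; the paper's actual contrastive analysis and experience-aligned training presumably operate on much richer trajectory representations.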
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
stochastic exploration
training stability
reasoning trajectories
agentic search
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Experience
experience-aligned training
agentic search
reinforcement learning
multi-level clustering