🤖 AI Summary
This work addresses the limitations of existing paper retrieval methods, which rely on rigid, fixed pipelines and struggle with complex, conditional queries. The authors formulate search as a sequential decision-making process and propose a reinforcement learning-based autonomous agent framework that dynamically invokes search and expansion tools in a context-aware manner. To reconcile the mismatch between token-level optimization and sequence-level interaction in multi-turn tasks, they introduce Proximal Sequence Policy Optimization (PSPO), a process-aware sequence-level policy optimization method. Evaluated on both synthetic and real-world benchmarks, the proposed approach substantially outperforms workflow-driven and conventional reinforcement learning baselines, achieving significant improvements in recall and relevance metrics.
📄 Abstract
Academic paper search is a fundamental task in scientific research, yet most existing approaches rely on rigid, predefined workflows that struggle with complex, conditional queries. To address this limitation, we propose PaperScout, an autonomous agent that reformulates paper search as a sequential decision-making process. Unlike static workflows, PaperScout dynamically decides whether, when, and how to invoke search and expand tools based on accumulated retrieval context. However, training such agents presents a fundamental challenge: standard reinforcement learning methods, typically designed for single-turn tasks, suffer from a granularity mismatch when applied to multi-turn agentic tasks, where token-level optimization diverges from the sequence level at which the agent interacts with its environment, leading to noisy credit assignment. We introduce Proximal Sequence Policy Optimization (PSPO), a process-aware, sequence-level policy optimization method that aligns optimization with agent-environment interaction. Comprehensive experiments on both synthetic and real-world benchmarks demonstrate that PaperScout significantly outperforms strong workflow-driven and RL baselines in both recall and relevance, validating the effectiveness of our adaptive agentic framework and optimization strategy.
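The abstract does not specify PSPO's objective, but the token-level vs sequence-level granularity mismatch it describes can be illustrated with a minimal PPO-style clipped-surrogate sketch. In the token-level variant, each token gets its own importance ratio; in the sequence-level variant, per-token log-probabilities are summed first, so one ratio covers the whole action sequence, matching the turn granularity of the agent's interaction. All names, numbers, and the specific loss form here are hypothetical, not taken from the paper.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """Standard PPO clipped surrogate loss for a given importance ratio."""
    ratio = torch.exp(logp_new - logp_old)          # pi_new / pi_old
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
    return -torch.min(unclipped, clipped)           # negated for gradient descent

# Hypothetical per-token log-probs for one three-token action sequence.
logp_new = torch.tensor([-1.2, -0.8, -2.1])  # new policy
logp_old = torch.tensor([-1.0, -0.9, -2.0])  # old (behavior) policy
adv = torch.tensor(1.0)                      # one advantage for the whole turn

# Token-level: one ratio per token, sequence advantage broadcast to tokens.
token_loss = ppo_clip_loss(logp_new, logp_old, adv).mean()

# Sequence-level (PSPO-style granularity): sum log-probs first, yielding a
# single ratio per action sequence.
seq_loss = ppo_clip_loss(logp_new.sum(), logp_old.sum(), adv)
```

Note the two losses differ: clipping a single sequence-level ratio is not the same as averaging clipped per-token ratios, which is precisely why the choice of optimization granularity matters for credit assignment.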