🤖 AI Summary
This work addresses the inefficiency and suboptimal performance of autonomous deep research agents when handling ambiguous queries. To mitigate this, the authors propose an agent endowed with proactive clarification capabilities, which constructs a scalable “shallow-to-deep” intent refinement graph to generate high-quality dialogue data. A two-stage reinforcement learning strategy—comprising offline fine-tuning followed by online interaction with a user simulator—is designed to accurately identify user intent prior to initiating long-horizon research tasks. Experimental results demonstrate that the proposed approach significantly improves intent accuracy and downstream task performance, outperforming both the clarification modules of existing closed-source deep research agents and baseline proactive large language models.
📝 Abstract
Deep Research (DR) agents extend Large Language Models (LLMs) beyond parametric knowledge by autonomously retrieving and synthesizing evidence from large web corpora into long-form reports, enabling a long-horizon agentic paradigm. However, unlike real-time conversational assistants, DR is computationally expensive and time-consuming, creating an autonomy-interaction dilemma: high autonomy on ambiguous user queries often leads to prolonged execution with unsatisfactory outcomes. To address this, we propose IntentRL, a framework that trains proactive agents to clarify latent user intents before starting long-horizon research. To overcome the scarcity of open-ended research data, we introduce a scalable pipeline that expands a few seed samples into high-quality dialogue turns via a shallow-to-deep intent refinement graph. We further adopt a two-stage reinforcement learning (RL) strategy: Stage I applies RL on offline dialogues to efficiently learn general user-interaction behavior, while Stage II uses the trained agent and a user simulator for online rollouts to strengthen adaptation to diverse user feedback. Extensive experiments show that IntentRL significantly improves both intent hit rate and downstream task performance, outperforming the built-in clarification modules of closed-source DR agents and proactive LLM baselines.
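To make the clarify-before-research control flow concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a toy ambiguity check stands in for the learned clarification policy, and a simple user simulator plays the role of the Stage-II online-rollout partner that answers clarifying questions from a latent intent. All names and heuristics here are illustrative assumptions.

```python
# Illustrative sketch of the clarify-then-research loop described in the
# abstract. The ambiguity heuristic and simulator are toy stand-ins, not
# the paper's trained policy or simulator.

AMBIGUOUS_MARKERS = {"best", "good", "recent", "it", "this"}

def is_ambiguous(query: str) -> bool:
    """Toy heuristic standing in for the learned clarification policy."""
    return any(word in query.lower().split() for word in AMBIGUOUS_MARKERS)

class UserSimulator:
    """Stage-II-style simulator: answers clarifying questions from a latent intent."""
    def __init__(self, latent_intent: str):
        self.latent_intent = latent_intent

    def answer(self, question: str) -> str:
        # A real simulator would respond in natural language; here it
        # simply reveals the latent intent when asked.
        return self.latent_intent

def run_agent(query: str, user: UserSimulator, max_turns: int = 3) -> dict:
    """Clarify the latent intent (bounded turns) before long-horizon research."""
    refined = query
    turns = 0
    while is_ambiguous(refined) and turns < max_turns:
        reply = user.answer(f"Could you clarify what you mean by {refined!r}?")
        refined = reply  # adopt the user's clarified intent
        turns += 1
    # At this point a DR agent would launch the expensive research task.
    return {"final_intent": refined, "clarify_turns": turns}

result = run_agent(
    "best recent model",
    UserSimulator("compare 2024 open-source LLMs on math benchmarks"),
)
print(result)
```

The key design point the sketch mirrors is that clarification happens in a cheap, bounded dialogue loop before the costly long-horizon research begins, which is how the paper resolves the autonomy-interaction dilemma.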