🤖 AI Summary
This work proposes a reinforcement learning–based conversational search agent that addresses the limitations of conventional static rewrite-retrieve-generate pipelines, which struggle to capture evolving user intent across multi-turn interactions and cannot jointly optimize retrieval and generation. By introducing a joint reasoning and retrieval mechanism into multi-turn conversational search, the agent uses a tailored reward function to guide a large language model in adaptively reformulating queries and co-optimizing retrieval and generation actions using cross-turn context. Evaluated on four mainstream conversational search benchmarks, the proposed approach significantly outperforms strong existing baselines, demonstrating its effectiveness in both exploratory and goal-oriented dialogues.
📝 Abstract
Large Language Models (LLMs) have become a popular interface for human-AI interaction, supporting information seeking and task assistance through natural, multi-turn dialogue. Within multi-turn dialogues, context-dependent user intent evolves across interactions, requiring contextual interpretation, query reformulation, and dynamic coordination between retrieval and generation. Existing studies usually follow static rewrite, retrieve, and generate pipelines, which optimize each stage separately and overlook joint optimization of mixed-initiative actions. Although recent developments in deep search agents demonstrate the effectiveness of jointly optimizing retrieval and generation via reasoning, these approaches focus on single-turn scenarios and may lack the ability to handle multi-turn interactions. We introduce a conversational agent that interleaves search and reasoning across turns, enabling exploratory and adaptive behaviors learned through reinforcement learning (RL) training with rewards tailored towards evolving user goals. Experimental results across four widely used conversational benchmarks demonstrate the effectiveness of our method, which surpasses several existing strong baselines.
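To make the interleaved loop concrete, here is a minimal sketch of one conversation rollout: at each turn the agent reformulates the query using cross-turn context, searches, and accumulates a turn-level reward that an RL trainer could then use to update the policy. All names here (`ToyRetriever`, `rollout`, `concat_reformulate`, the keyword-overlap scoring) are our own illustrative assumptions, not the paper's actual implementation.

```python
class ToyRetriever:
    """Stand-in retriever: returns the corpus document with the
    largest keyword overlap with the query (assumed, for illustration)."""
    def __init__(self, corpus):
        self.corpus = corpus

    def search(self, query):
        q = set(query.lower().split())
        return max(self.corpus, key=lambda d: len(q & set(d.lower().split())))


def rollout(dialogue, retriever, reformulate):
    """One multi-turn episode: per turn, rewrite the query with
    cross-turn context (reasoning), retrieve (search), and collect a
    turn-level reward. An RL loop would optimize `reformulate` against
    the accumulated reward; here the policy is fixed."""
    context, total_reward, trace = [], 0.0, []
    for user_turn, gold_doc in dialogue:
        query = reformulate(context, user_turn)   # reasoning/rewrite step
        doc = retriever.search(query)             # search step
        reward = 1.0 if doc == gold_doc else 0.0  # retrieval part of a tailored reward
        total_reward += reward
        context.append(user_turn)
        trace.append((query, doc, reward))
    return total_reward, trace


def concat_reformulate(context, turn):
    """Naive 'policy': prepend the previous turn so pronouns resolve."""
    return " ".join(context[-1:] + [turn])


corpus = [
    "python asyncio event loop tutorial",
    "rust borrow checker explained",
]
dialogue = [
    ("tell me about asyncio", corpus[0]),
    ("how does its event loop work", corpus[0]),  # "its" only resolves with context
]
score, trace = rollout(dialogue, ToyRetriever(corpus), concat_reformulate)
```

In a full system the reward would also score the generated answer, and `concat_reformulate` would be replaced by an LLM policy updated from `total_reward`; this sketch only shows the per-turn interleaving the abstract describes.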