🤖 AI Summary
Existing RAG systems face two key limitations in reinforcement learning (RL)-based search agent training: they either (1) optimize solely for retrieval metrics while neglecting generation utility, or (2) require full LLM fine-tuning, leading to tight module coupling, poor generalization, and incompatibility with frozen or proprietary models. This paper proposes a lightweight, model-agnostic RL framework featuring the novel "Gain Beyond RAG" reward, which explicitly decouples the retrieval and generation modules and enables downstream task-driven search optimization without updating LLM parameters. Trained on only 2.4K samples, the method outperforms a baseline trained on over 70× more data, substantially boosting answer accuracy across six general-domain and five medical QA benchmarks.
📝 Abstract
Retrieval-augmented generation (RAG) systems empower large language models (LLMs) to access external knowledge during inference. Recent advances have enabled LLMs to act as search agents via reinforcement learning (RL), improving information acquisition through multi-turn interactions with retrieval engines. However, existing approaches either optimize retrieval using search-only metrics (e.g., NDCG) that ignore downstream utility, or fine-tune the entire LLM to jointly reason and retrieve, entangling retrieval with generation and limiting real search utility and compatibility with frozen or proprietary models. In this work, we propose s3, a lightweight, model-agnostic framework that decouples the searcher from the generator and trains the searcher using a Gain Beyond RAG reward: the improvement in generation accuracy over naive RAG. s3 requires only 2.4k training samples to outperform baselines trained on over 70x more data, consistently delivering stronger downstream performance across six general QA and five medical QA benchmarks.
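The Gain Beyond RAG reward described above can be sketched as follows. This is a minimal illustration, not the s3 implementation: it assumes a generation-accuracy scorer in [0, 1] (e.g., exact match or span F1) applied to the frozen generator's answers, with the generator queried once using the trained searcher's context and once using naive RAG context. All function names here are hypothetical.

```python
def gain_beyond_rag(acc_with_searcher: float, acc_naive_rag: float) -> float:
    """Searcher's reward: generation-accuracy improvement over naive RAG.

    The frozen generator answers the same question twice -- once with the
    context assembled by the trained searcher, once with naive single-shot
    RAG context. The searcher is rewarded only for the accuracy it adds
    beyond the naive baseline, so retrieval is optimized for downstream
    utility without updating any LLM parameters.
    """
    return acc_with_searcher - acc_naive_rag


# Illustrative numbers: generator accuracy 0.8 with the searcher's context
# versus 0.5 with naive RAG context yields a positive reward of 0.3.
reward = gain_beyond_rag(0.8, 0.5)
```

A negative reward is equally informative: if the searcher's context makes the generator *less* accurate than naive RAG, the RL update pushes the searcher away from that retrieval behavior.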