🤖 AI Summary
This work addresses the limitations of existing reinforcement learning-based deep search agents, which rely on binary outcome rewards and are prone to shortcut behaviors and hallucinations, compromising both reasoning completeness and factual accuracy. To overcome these issues, the authors propose Citation-aware Rubric Rewards (CaRR), which decompose complex questions into verifiable single-hop rubrics and require agents to identify hidden entities, support them with correct citations, and construct coherent evidence chains linking to the predicted answer. They further introduce Citation-aware Group Relative Policy Optimization (C-GRPO), an algorithm that jointly optimizes process-level (CaRR) and outcome-based rewards. With this fine-grained, citation-aware scoring mechanism, the approach consistently outperforms outcome-only RL baselines across multiple deep search benchmarks, strengthening open-domain deep research capabilities while mitigating hallucinations and improving the traceability and robustness of reasoning.
📝 Abstract
Reinforcement learning (RL) has emerged as a critical technique for enhancing LLM-based deep search agents. However, existing approaches primarily rely on binary outcome rewards, which fail to capture the comprehensiveness and factuality of agents' reasoning process, and often lead to undesirable behaviors such as shortcut exploitation and hallucinations. To address these limitations, we propose Citation-aware Rubric Rewards (CaRR), a fine-grained reward framework for deep search agents that emphasizes reasoning comprehensiveness, factual grounding, and evidence connectivity. CaRR decomposes complex questions into verifiable single-hop rubrics and requires agents to satisfy these rubrics by explicitly identifying hidden entities, supporting them with correct citations, and constructing complete evidence chains that link to the predicted answer. We further introduce Citation-aware Group Relative Policy Optimization (C-GRPO), which combines CaRR and outcome rewards for training robust deep search agents. Experiments show that C-GRPO consistently outperforms standard outcome-based RL baselines across multiple deep search benchmarks. Our analysis also validates that C-GRPO effectively discourages shortcut exploitation, promotes comprehensive, evidence-grounded reasoning, and exhibits strong generalization to open-ended deep research tasks. Our code and data are available at https://github.com/THUDM/CaRR.
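The paper defines the exact reward composition and rubric scoring; as a rough illustration only, the sketch below shows how a GRPO-style update might blend a per-rollout rubric score with a binary outcome reward before computing group-relative advantages. The function name, the linear mixing weight `alpha`, and the scoring conventions are all hypothetical, not the authors' implementation.

```python
import numpy as np

def c_grpo_advantages(rubric_scores, outcome_scores, alpha=0.5):
    """Sketch of a GRPO-style advantage computation that blends
    fine-grained rubric (CaRR-like) rewards with binary outcome rewards.

    rubric_scores:  per-rollout fraction of single-hop rubrics satisfied
                    with correct citations, in [0, 1] (hypothetical scoring).
    outcome_scores: per-rollout binary answer correctness, in {0, 1}.
    alpha:          assumed mixing weight between the two reward signals.
    """
    rubric = np.asarray(rubric_scores, dtype=float)
    outcome = np.asarray(outcome_scores, dtype=float)

    # Blend process-level (citation-aware) and outcome-level rewards.
    rewards = alpha * rubric + (1.0 - alpha) * outcome

    # Group-relative normalization, as in GRPO: standardize rewards
    # within the group of rollouts sampled for one question.
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Example: four rollouts for one question. The second rollout reaches the
# right answer but satisfies few rubrics (a shortcut); blending rewards
# ranks it below the fully grounded first rollout.
adv = c_grpo_advantages(rubric_scores=[1.0, 0.2, 0.6, 0.0],
                        outcome_scores=[1, 1, 0, 0])
print(adv)
```

Under this assumed blending, a rollout that guesses the right answer without citation-grounded evidence receives a smaller advantage than one that completes the evidence chain, which is the intuition behind discouraging shortcut exploitation.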