🤖 AI Summary
This work addresses the challenge of policy optimization stagnation in multi-turn tool-augmented reasoning, where sparse rewards and minimal intra-group reward variance hinder effective learning. To overcome this, the authors propose Reward-Conditioned Group Relative Policy Optimization (RC-GRPO), which treats exploration as a controllable steering task by introducing discrete reward tokens. Within the GRPO framework, RC-GRPO increases intra-group trajectory diversity to improve advantage estimation. The approach combines supervised fine-tuning (SFT) with GRPO-based reinforcement learning: a Reward-Conditioned Trajectory Policy (RCTP) is first fine-tuned with special reward-target tokens, and diverse-quality trajectories are then generated through reward-conditioned rollouts during RL. Evaluated on the BFCLv4 multi-turn benchmark, RC-GRPO significantly outperforms existing baselines, with the Qwen-2.5-7B-Instruct model surpassing all closed-source API counterparts.
📝 Abstract
Multi-turn tool calling is challenging for Large Language Models (LLMs) because rewards are sparse and exploration is expensive. A common recipe, SFT followed by GRPO, can stall when within-group reward variation is low (e.g., all rollouts in a group receive reward 0, or all receive reward 1), making the group-normalized advantage uninformative and yielding vanishing updates. To address this problem, we propose RC-GRPO (Reward-Conditioned Group Relative Policy Optimization), which treats exploration as a controllable steering problem via discrete reward tokens. We first fine-tune a Reward-Conditioned Trajectory Policy (RCTP) on mixed-quality trajectories with reward-goal special tokens (e.g., <|high_reward|>, <|low_reward|>) injected into the prompts, enabling the model to generate trajectories of distinct quality on demand. During RL, we then sample diverse reward tokens within each GRPO group and condition rollouts on the sampled tokens to increase within-group diversity, yielding more informative advantage estimates. On the Berkeley Function Calling Leaderboard v4 (BFCLv4) multi-turn benchmark, our method consistently outperforms baselines, and its performance on Qwen-2.5-7B-Instruct even surpasses all closed-source API models.
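The group-conditioning idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: `rollout` is a hypothetical stand-in for sampling a trajectory from the fine-tuned RCTP model and scoring it, and the token names mirror the examples in the abstract.

```python
import random
import statistics

# Reward-goal special tokens, as in the abstract's examples.
REWARD_TOKENS = ["<|high_reward|>", "<|low_reward|>"]

def rollout(prompt: str, reward_token: str) -> float:
    """Hypothetical stand-in for a reward-conditioned rollout.

    A real implementation would sample a full multi-turn tool-calling
    trajectory from the RCTP model (with the reward token prepended to
    the prompt) and score it with the environment's reward function.
    """
    base = 0.9 if reward_token == "<|high_reward|>" else 0.2
    return min(1.0, max(0.0, base + random.uniform(-0.1, 0.1)))

def group_advantages(prompt: str, group_size: int = 8) -> list[float]:
    """Sample diverse reward tokens within one GRPO group, condition
    each rollout on its token, and return group-normalized advantages."""
    tokens = [random.choice(REWARD_TOKENS) for _ in range(group_size)]
    rewards = [rollout(f"{t}{prompt}", t) for t in tokens]
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard the all-equal case
    return [(r - mu) / sigma for r in rewards]

random.seed(0)
advs = group_advantages("Book a flight, then check the weather at the destination.")
```

Because the group mixes <|high_reward|> and <|low_reward|> conditions, the rewards within a group spread out instead of collapsing to all-0 or all-1, so the normalized advantages carry a nonzero learning signal.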