🤖 AI Summary
This work addresses the challenge of optimizing reinforcement learning (RL) in open-domain agent tasks, where sparse and coarse reward signals hinder effective learning. To overcome this limitation, the authors propose ArenaRL, a novel framework that replaces traditional point-wise rewards with intra-group relative rankings derived from multi-level scoring criteria and a single-elimination tournament mechanism. This approach approximates the accuracy of full pairwise comparison, which would normally require O(N²) work, at only O(N) computational cost. ArenaRL further establishes the first comprehensive benchmark for open-domain agent workflows, spanning from supervised fine-tuning to evaluation, through two new environments: Open-Travel and Open-DeepResearch. The framework also introduces process-aware pairwise evaluation and an adversarial arena design. Experimental results demonstrate that ArenaRL significantly outperforms standard RL baselines, yielding more robust and higher-quality solutions on complex tasks.
📝 Abstract
Reinforcement learning has substantially improved the performance of LLM agents on tasks with verifiable outcomes, but it still struggles on open-ended agent tasks with vast solution spaces (e.g., complex travel planning). Because these tasks lack an objective ground truth, current RL algorithms largely rely on reward models that assign scalar scores to individual responses. We contend that such pointwise scoring suffers from an inherent discrimination collapse: the reward model struggles to distinguish subtle advantages among different trajectories, so scores within a group are compressed into a narrow range. Consequently, the effective reward signal becomes dominated by reward-model noise, leading to optimization stagnation. To address this, we propose ArenaRL, a reinforcement learning paradigm that shifts from pointwise scalar scoring to intra-group relative ranking. ArenaRL introduces a process-aware pairwise evaluation mechanism that employs multi-level rubrics to assign fine-grained relative scores to trajectories. In addition, we construct an intra-group adversarial arena and devise a tournament-based ranking scheme to obtain stable advantage signals. Empirical results confirm that the proposed seeded single-elimination scheme nearly matches the advantage-estimation accuracy of full pairwise comparison, which has O(N^2) complexity, while requiring only O(N) comparisons, striking a favorable balance between efficiency and precision. Furthermore, to address the lack of full-cycle benchmarks for open-ended agents, we build Open-Travel and Open-DeepResearch, two high-quality benchmarks featuring a comprehensive pipeline covering SFT, RL training, and multi-dimensional evaluation. Extensive experiments show that ArenaRL substantially outperforms standard RL baselines, enabling LLM agents to generate more robust solutions for complex real-world tasks.
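To make the complexity claim concrete, the single-elimination idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: `judge` stands in for the process-aware pairwise evaluator, seeding is taken as the input order, and losers are ranked only by the round in which they were eliminated. With N trajectories it issues exactly N−1 judge calls, versus N(N−1)/2 for a full round-robin.

```python
def tournament_rank(items, judge):
    """Rank items best-first via seeded single elimination.

    judge(a, b) -> True if a beats b (stand-in for a pairwise reward
    evaluator). Losers are ordered by elimination round: surviving
    longer yields a better rank. Uses len(items) - 1 comparisons.
    """
    comparisons = 0
    eliminated_by_round = []       # losers of each round, round 0 first
    current = list(items)
    while len(current) > 1:
        winners, losers = [], []
        # With an odd field, the last seed gets a bye to the next round.
        if len(current) % 2 == 1:
            winners.append(current.pop())
        for a, b in zip(current[0::2], current[1::2]):
            comparisons += 1
            if judge(a, b):
                winners.append(a)
                losers.append(b)
            else:
                winners.append(b)
                losers.append(a)
        eliminated_by_round.append(losers)
        current = winners
    ranking = current              # the champion
    for losers in reversed(eliminated_by_round):
        ranking.extend(losers)     # later eliminations rank higher
    return ranking, comparisons

# Toy usage: the judge prefers the numerically larger "trajectory score".
ranked, n_calls = tournament_rank([3, 7, 1, 9, 5, 2, 8, 4],
                                  judge=lambda a, b: a > b)
# 8 entrants -> 7 judge calls; champion 9 ranked first.
```

Note that ranks below the champion are only approximate (losers within the same round are not ordered against each other), which is the accuracy trade the abstract reports against full O(N^2) pairwise comparison.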