ARLArena: A Unified Framework for Stable Agentic Reinforcement Learning

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the persistent training instability of agentic reinforcement learning (ARL), which hinders scalability and the systematic study of agents on complex tasks. To this end, we propose ARLArena, a novel framework that, for the first time, enables a fine-grained disentanglement of policy gradient methods into four core design dimensions. Leveraging standardized environments and controlled experiments, ARLArena supports a systematic analysis of training dynamics. Building on these insights, we introduce SAMPO, an optimization method that integrates stabilization mechanisms across multiple design dimensions. SAMPO consistently improves both training stability and performance across diverse tasks. Our approach establishes a unified analytical perspective and a practical engineering foundation for the reliable, reproducible training of large language model–based agents.
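
The summary does not spell out SAMPO's mechanisms, but the following is a minimal sketch of the kind of multi-dimensional stabilization it alludes to, assuming a PPO-style clipped objective with advantage normalization. All names and details below are illustrative assumptions, not the paper's method:

```python
import torch

def stabilized_pg_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Illustrative clipped policy-gradient loss with advantage
    normalization -- two common stabilization mechanisms of the kind
    the summary describes (hypothetical sketch, not SAMPO itself).

    logp_new, logp_old: per-token log-probs under the current / behavior policy
    advantages:         per-token advantage estimates
    """
    # Normalize advantages to zero mean / unit variance (variance control).
    adv = (advantages - advantages.mean()) / (advantages.std() + 1e-8)

    # Importance ratio between the current and behavior policies.
    ratio = torch.exp(logp_new - logp_old)

    # PPO-style clipping bounds the size of each per-token update.
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()
```

Clipping bounds the magnitude of each policy update while advantage normalization controls gradient variance; both are standard remedies for the collapse modes the summary describes.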

📝 Abstract
Agentic reinforcement learning (ARL) has rapidly gained attention as a promising paradigm for training agents to solve complex, multi-step interactive tasks. Despite encouraging early results, ARL remains highly unstable and often suffers training collapse. This instability limits scalability to larger environments and longer interaction horizons, and constrains systematic exploration of algorithmic design choices. In this paper, we propose ARLArena, a stable training recipe and systematic analysis framework that examines training stability in a controlled, reproducible setting. ARLArena first constructs a clean and standardized testbed; we then decompose the policy gradient objective into four core design dimensions and assess the performance and stability of each. Through this fine-grained analysis, we distill a unified perspective on ARL and propose SAMPO, a stable agentic policy optimization method designed to mitigate the dominant sources of instability in ARL. Empirically, SAMPO achieves consistently stable training and strong performance across diverse agentic tasks. Overall, this study provides a unifying policy-gradient perspective on ARL and offers practical guidance for building stable, reproducible LLM-based agent training pipelines.
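
The abstract does not name the four design dimensions, but any such decomposition plausibly factors the standard token-level policy gradient estimator. As a sketch, in our notation rather than the paper's:

```latex
\nabla_\theta J(\theta)
  = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[
      \sum_{t} w_t \,\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t
    \right]
```

Each factor exposes a design choice: the advantage estimator $\hat{A}_t$, the baseline used inside it, the per-token weighting and aggregation $w_t$, and any trust-region or clipping constraint on the update. Which of these the paper treats as its four dimensions is not stated here.
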
Problem

Research questions and friction points this paper is trying to address.

Agentic Reinforcement Learning
Training Instability
Training Collapse
Scalability
Algorithmic Design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic Reinforcement Learning
Training Stability
Policy Gradient Decomposition
SAMPO
Reproducible Framework