AgentRL: Scaling Agentic Reinforcement Learning with a Multi-Turn, Multi-Task Framework

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of efficiently training generalist agents via reinforcement learning (RL) for multi-turn interaction and multi-task scenarios with large language models (LLMs), this paper introduces the first scalable online collaborative RL framework for agent training. Methodologically, it features: (1) an asynchronous generation-training pipeline to improve sample efficiency; (2) a unified function-calling interface and containerized execution environment to enable rapid integration of heterogeneous tasks; and (3) cross-policy sampling for enhanced exploration, coupled with task-specific advantage normalization to stabilize multi-task optimization. Evaluated on five canonical agent benchmarks, the framework significantly outperforms strong baselines—including GPT-5 and Claude-Sonnet-4—while achieving multi-task performance comparable to single-task state-of-the-art models. The codebase and system infrastructure are fully open-sourced and already deployed in production for AutoGLM.
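The task-specific advantage normalization mentioned above can be sketched as per-task standardization of advantages, so that tasks with very different reward scales contribute comparably to the shared policy-gradient update. This is a minimal sketch assuming a flat list of samples with hypothetical `task`/`advantage` fields; the paper's exact formulation may differ.

```python
from collections import defaultdict

def task_advantage_normalization(samples, eps=1e-8):
    """Standardize advantages within each task (zero mean, unit std),
    so no single task's reward scale dominates multi-task training.

    samples: list of dicts with "task" and "advantage" keys
    (hypothetical structure, for illustration only).
    """
    by_task = defaultdict(list)
    for s in samples:
        by_task[s["task"]].append(s["advantage"])

    # Per-task mean and standard deviation.
    stats = {}
    for task, advs in by_task.items():
        mean = sum(advs) / len(advs)
        var = sum((a - mean) ** 2 for a in advs) / len(advs)
        stats[task] = (mean, var ** 0.5)

    # Rescale each sample's advantage using its own task's statistics.
    out = []
    for s in samples:
        mean, std = stats[s["task"]]
        out.append({**s, "advantage": (s["advantage"] - mean) / (std + eps)})
    return out
```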

📝 Abstract
Recent advances in large language models (LLMs) have sparked growing interest in building generalist agents that can learn through online interactions. However, applying reinforcement learning (RL) to train LLM agents in multi-turn, multi-task settings remains challenging due to the lack of scalable infrastructure and stable training algorithms. In this work, we present the AgentRL framework for scalable multi-turn, multi-task agentic RL training. On the infrastructure side, AgentRL features a fully-asynchronous generation-training pipeline for efficient multi-turn RL. To support heterogeneous environment development in multi-task RL, we design a unified function-call based API interface, containerized environment development, and a centralized controller. On the algorithm side, we propose cross-policy sampling to encourage model exploration in multi-turn settings and task advantage normalization to stabilize multi-task training. Experiments show that AgentRL, trained on open LLMs across five agentic tasks, significantly outperforms GPT-5, Claude-Sonnet-4, DeepSeek-R1, and other open-source LLM agents. Multi-task training with AgentRL matches the best results among all task-specific models. AgentRL is open-sourced at https://github.com/THUDM/AgentRL. The algorithm and framework are adopted in building AutoGLM (https://autoglm.zhipuai.cn).
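The unified function-call based API interface described in the abstract can be illustrated with a small sketch: each heterogeneous task advertises its actions as tool schemas, and the agent acts by emitting a function call. The `ToolEnv` class, its `schema`/`step` methods, and the JSON call format are assumptions for illustration, not AgentRL's actual API.

```python
import json

class ToolEnv:
    """Hypothetical environment wrapper: the agent acts by emitting a
    function call (tool name + JSON arguments); the environment executes
    it and returns an observation string."""

    def __init__(self, tools):
        # tools: mapping from tool name to (callable, JSON-schema dict)
        self.tools = tools

    def schema(self):
        # Advertise the available actions to the LLM in a uniform format.
        return [{"name": name, "parameters": spec}
                for name, (_, spec) in self.tools.items()]

    def step(self, call_json):
        # Execute one function call and return the resulting observation.
        call = json.loads(call_json)
        fn, _ = self.tools[call["name"]]
        return fn(**call.get("arguments", {}))

# Example: a toy "search" task plugged in through the same interface.
env = ToolEnv({
    "search": (lambda query: f"results for {query!r}",
               {"type": "object",
                "properties": {"query": {"type": "string"}}}),
})
obs = env.step(json.dumps({"name": "search",
                           "arguments": {"query": "AgentRL"}}))
```

Because every task speaks the same call/observation protocol, new containerized environments can be registered without changing the training loop.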
Problem

Research questions and friction points this paper is trying to address.

Scalable infrastructure for multi-turn agent training
Stable algorithms for multi-task reinforcement learning
Efficient exploration in heterogeneous environment settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fully-asynchronous generation-training pipeline for multi-turn RL
Unified API and containerized environment for multi-task RL
Cross-policy sampling and task advantage normalization algorithms
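One reading of cross-policy sampling is that, within a single multi-turn rollout, different turns may be generated by different policy snapshots, so trajectories reach states no single policy would explore on its own. The sketch below encodes that idea under assumed interfaces (`policies` as a list of `obs -> action` callables, `env_step` returning `(next_obs, done)`); it is an illustration, not the paper's algorithm.

```python
import random

def cross_policy_rollout(policies, env_step, init_obs, max_turns=5, seed=0):
    """Sketch of cross-policy sampling: each turn's action may come
    from a different policy snapshot, broadening exploration across
    a multi-turn trajectory. Interfaces are hypothetical."""
    rng = random.Random(seed)
    obs, traj = init_obs, []
    for _ in range(max_turns):
        policy = rng.choice(policies)   # mix policy snapshots across turns
        action = policy(obs)
        obs, done = env_step(obs, action)
        traj.append((action, obs))
        if done:
            break
    return traj
```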