🤖 AI Summary
This work addresses the challenge in reinforcement learning (RL) training of multi-turn LLM agents, where existing systems tightly couple sandbox rollout logic with training pipelines, resulting in poor maintainability and limited transferability. To resolve this, the authors propose a "Rollout-as-a-Service" (RaaS) architecture that decouples rollout execution from the RL training loop, offering a unified and scalable API service to manage the full lifecycle of rollouts. The system incorporates a standardized, rootless high-performance computing (HPC) sandbox environment capable of supporting diverse tasks across domains. Integrated into NVIDIA NeMo Gym, the framework demonstrates effective RL training on software engineering, mathematics, STEM, and programming benchmarks, and has been publicly released as open-source software.
📝 Abstract
Multi-turn LLM agents are increasingly important for solving complex, interactive tasks, and reinforcement learning (RL) is a key ingredient for improving their long-horizon behavior. However, RL training requires generating large numbers of sandboxed rollout trajectories, and existing infrastructures often couple rollout orchestration with the training loop, making systems hard to migrate and maintain. Under the rollout-as-a-service philosophy, we present ProRL Agent, a scalable infrastructure that serves the full agentic rollout lifecycle through an API service. ProRL Agent also provides standardized and extensible sandbox environments that support diverse agentic tasks in rootless HPC settings. We validate ProRL Agent through RL training on software engineering, math, STEM, and coding tasks. ProRL Agent is open-sourced and integrated as part of NVIDIA NeMo Gym.
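The core architectural idea, decoupling rollout execution from the training loop behind a service API, can be illustrated with a minimal sketch. All names below (`RolloutService`, `submit`, `step`, `finish`) are hypothetical stand-ins for illustration only; they are not the actual ProRL Agent or NeMo Gym interface.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Rollout:
    """One agent trajectory managed by the service over its lifecycle."""
    rollout_id: str
    task: str
    status: str = "running"
    trajectory: list = field(default_factory=list)

class RolloutService:
    """Stand-in for a remote API service that owns sandbox rollout logic,
    so the RL trainer never touches sandbox internals directly."""
    def __init__(self):
        self._rollouts = {}

    def submit(self, task: str) -> str:
        # Create a new sandboxed rollout and return an opaque handle.
        rid = uuid.uuid4().hex
        self._rollouts[rid] = Rollout(rid, task)
        return rid

    def step(self, rid: str, action: str) -> None:
        # Record one agent action in the trajectory (a real service would
        # execute it inside the sandbox and return an observation).
        self._rollouts[rid].trajectory.append(action)

    def finish(self, rid: str) -> Rollout:
        # Close out the rollout and hand the trajectory back for training.
        rollout = self._rollouts[rid]
        rollout.status = "done"
        return rollout

# The training loop sees only the narrow service API; sandbox details
# (containers, rootless HPC setup) stay behind it and can be swapped out.
service = RolloutService()
rid = service.submit("fix failing unit test")
service.step(rid, "open repository")
service.step(rid, "apply patch")
result = service.finish(rid)
```

The point of the sketch is the boundary: because the trainer interacts only through `submit`/`step`/`finish`, the rollout backend can be migrated or maintained independently of the RL pipeline, which is the coupling problem the abstract describes.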