AI Summary
Reinforcement learning (RL) training suffers from low hardware utilization and constrained throughput due to highly heterogeneous and dynamic workloads. To address this, we propose RLinf, the first system introducing the Macro-to-Micro Flow (M2Flow) paradigm, which automatically decomposes high-level RL policy workflows into fine-grained, schedulable micro-operation streams. RLinf supports adaptive communication, lightweight context switching, and elastic pipeline orchestration, guided by performance profiling-driven scheduling to optimize spatiotemporal execution structures. Evaluated on reasoning-oriented and embodied RL tasks, RLinf achieves 1.1×–2.13× higher end-to-end training throughput over state-of-the-art systems, while significantly improving GPU utilization and runtime flexibility. By unifying workflow abstraction with system-aware scheduling, RLinf delivers an efficient, scalable systems foundation for large-scale RL training.
Abstract
Reinforcement learning (RL) has demonstrated immense potential in advancing artificial general intelligence, agentic intelligence, and embodied intelligence. However, the inherent heterogeneity and dynamism of RL workflows often lead to low hardware utilization and slow training on existing systems. In this paper, we present RLinf, a high-performance RL training system based on our key observation that the major roadblock to efficient RL training lies in system flexibility. To maximize flexibility and efficiency, RLinf is built atop a novel RL system design paradigm called macro-to-micro flow transformation (M2Flow), which automatically breaks down high-level, easy-to-compose RL workflows along both the temporal and spatial dimensions, and recomposes them into optimized execution flows. Supported by the adaptive communication capability of RLinf workers, we devise context switching and elastic pipelining to realize the M2Flow transformation, and a profiling-guided scheduling policy to generate optimal execution plans. Extensive evaluations on both reasoning RL and embodied RL tasks demonstrate that RLinf consistently outperforms state-of-the-art systems, achieving a 1.1×–2.13× speedup in end-to-end training throughput.
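To make the macro-to-micro idea concrete, the following is a minimal, purely illustrative sketch (the function and stage names are our own, not RLinf's actual API): coarse macro stages of an RL workflow are decomposed into per-chunk micro-operations, then recomposed into an interleaved order so that adjacent stages overlap on different data chunks instead of running back-to-back.

```python
# Hypothetical sketch of macro-to-micro flow transformation.
# All names here are illustrative assumptions, not RLinf's real interface.

def decompose(stages, num_chunks):
    """Temporal decomposition: split each macro stage into one
    micro-op per data chunk, yielding one micro-op stream per stage."""
    return [[(stage, chunk) for chunk in range(num_chunks)]
            for stage in stages]

def recompose(micro_streams):
    """Recompose micro-ops into a simple pipelined schedule:
    micro-op (stage s, chunk c) is placed at tick s + c, so stage s+1
    on chunk c overlaps stage s on chunk c+1."""
    ticks = {}
    for s, stream in enumerate(micro_streams):
        for c, op in enumerate(stream):
            ticks.setdefault(s + c, []).append(op)
    return [ticks[t] for t in sorted(ticks)]

schedule = recompose(decompose(["rollout", "train"], num_chunks=3))
for tick, ops in enumerate(schedule):
    print(tick, ops)
```

At ticks 1 and 2, rollout of a later chunk runs alongside training on an earlier one; this overlap is exactly what a monolithic macro-level view of the workflow cannot express, and what a scheduler can exploit once the flow is micro-decomposed.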