AsyncFlow: An Asynchronous Streaming RL Framework for Efficient LLM Post-Training

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing task-colocated RL frameworks suffer from poor scalability, while task-separated frameworks face complex dataflows, resource idling, and load imbalance; moreover, most current solutions are tightly coupled with specific LLM training/inference engines, hindering customization. To address these issues, we propose AsyncFlow, a novel asynchronous streaming RL framework. Our method decouples computation tasks from resource scheduling, introduces a distributed data storage and transfer module enabling fine-grained scheduling and full-pipeline overlap, and incorporates a producer-consumer asynchronous workflow with staleness-bounded deferred parameter updates to minimize compute idleness. AsyncFlow achieves loose coupling with arbitrary training/inference engines via service-oriented interfaces. Experimental results demonstrate that AsyncFlow improves average throughput by 1.59× over the best baseline, significantly enhancing resource utilization and training scalability.

📝 Abstract
Reinforcement learning (RL) has become a pivotal technology in the post-training phase of large language models (LLMs). Traditional task-colocated RL frameworks suffer from significant scalability bottlenecks, while task-separated RL frameworks face challenges in complex dataflows and the corresponding resource idling and workload imbalance. Moreover, most existing frameworks are tightly coupled with LLM training or inference engines, making it difficult to support custom-designed engines. To address these challenges, we propose AsyncFlow, an asynchronous streaming RL framework for efficient post-training. Specifically, we introduce a distributed data storage and transfer module that provides unified data management and fine-grained scheduling capability in a fully streamed manner. This architecture inherently facilitates automated pipeline overlapping among RL tasks and dynamic load balancing. Moreover, we propose a producer-consumer-based asynchronous workflow engineered to minimize computational idleness by strategically deferring the parameter-update process within staleness thresholds. Finally, the core capability of AsyncFlow is architecturally decoupled from the underlying training and inference engines and encapsulated by service-oriented user interfaces, offering a modular and customizable user experience. Extensive experiments demonstrate an average throughput improvement of 1.59× compared with the state-of-the-art baseline. The architecture presented in this work provides actionable insights for next-generation RL training system designs.
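The staleness-bounded deferral described above can be sketched as a toy producer-consumer loop. This is a hypothetical illustration, not the paper's implementation: the class, method names, and threshold value are all invented for clarity.

```python
import queue

STALENESS_THRESHOLD = 2  # max allowed policy-version lag (illustrative value)

class AsyncPipeline:
    """Toy producer-consumer loop with staleness-bounded parameter updates."""

    def __init__(self):
        self.buffer = queue.Queue()  # streamed sample buffer
        self.policy_version = 0      # bumped on each parameter update
        self.accepted = []           # samples the trainer keeps

    def produce(self, n_samples):
        # Rollout workers stream samples tagged with the policy version
        # they were generated under.
        for i in range(n_samples):
            self.buffer.put({"data": i, "version": self.policy_version})

    def consume(self):
        # The trainer keeps consuming as long as a sample's version lag
        # stays within the staleness threshold; staler samples are dropped
        # rather than blocking the pipeline.
        while not self.buffer.empty():
            sample = self.buffer.get()
            lag = self.policy_version - sample["version"]
            if lag <= STALENESS_THRESHOLD:
                self.accepted.append(sample)

    def update_params(self):
        # Deferred parameter update: future rollouts are tagged with the
        # new version, so existing buffered samples age by one step.
        self.policy_version += 1
```

Because updates are deferred rather than synchronous, rollout generation and training can overlap; the threshold bounds how off-policy the consumed data may become.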
Problem

Research questions and friction points this paper is trying to address.

Addresses scalability bottlenecks in RL for LLM post-training
Resolves complex dataflows and resource idling in RL frameworks
Decouples RL framework from LLM engines for customization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed data storage and transfer module
Producer-consumer-based asynchronous workflow
Decoupled architecture with service-oriented interfaces
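The first two contributions can be illustrated with a minimal sketch of a streamed data store that balances incoming samples across consumer shards. All names and the shortest-queue policy here are assumptions for illustration; the paper does not specify AsyncFlow's actual scheduling policy.

```python
import heapq

class StreamedDataStore:
    """Toy distributed data store: streams each incoming sample to the
    currently least-loaded consumer shard (dynamic load balancing sketch)."""

    def __init__(self, num_shards):
        self.shards = [[] for _ in range(num_shards)]
        # Min-heap of (samples_assigned, shard_index) for shortest-queue routing.
        self.loads = [(0, i) for i in range(num_shards)]
        heapq.heapify(self.loads)

    def put(self, sample):
        # Route to the shard with the fewest assigned samples so far.
        load, idx = heapq.heappop(self.loads)
        self.shards[idx].append(sample)
        heapq.heappush(self.loads, (load + 1, idx))
        return idx

    def get(self, shard_idx):
        # Consumers pull from their own shard; None signals an empty stream.
        return self.shards[shard_idx].pop(0) if self.shards[shard_idx] else None
```

Routing each sample as it arrives, instead of in whole-batch barriers, is what lets downstream RL tasks start consuming before upstream tasks finish, i.e. pipeline overlap.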
👥 Authors
Zhenyu Han, Ph.D., Department of Electronic Engineering, Tsinghua University, China (Data Mining, Graph Neural Networks, Epidemiological Models)
Ansheng You, Individual Researcher
Haibo Wang, Huawei
Kui Luo, Huawei
Guang Yang, Huawei
Wenqi Shi, Assistant Professor, University of Texas Southwestern Medical Center (AI for Healthcare, LLM Agents, Clinical Decision Support, Clinical Informatics)
Menglong Chen, Huawei
Sicheng Zhang, Khalifa University (Artificial Intelligence, Computer Vision)
Zeshun Lan, Huawei
Chunshi Deng, Huawei
Huazhong Ji, Huawei
Wenjie Liu, Huawei
Yu Huang, Huawei
Yixiang Zhang, Huawei
Chenyi Pan, Huawei
Jing Wang, Huawei
Xin Huang, Huawei
Chunsheng Li, Huawei
Jianping Wu, Huawei