DistRL: An Asynchronous Distributed Reinforcement Learning Framework for On-Device Control Agents

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
This work addresses the core challenges of data scarcity and training inefficiency in online reinforcement learning (RL) fine-tuning of multimodal large language models (MLLMs) for mobile-device control agents. The authors propose the first asynchronous distributed RL framework for this setting, featuring centralized policy updates coupled with decentralized on-device data collection. To enhance sample efficiency and convergence stability, they introduce a customized prioritized experience replay mechanism and an exploration-exploitation balancing algorithm. Experimental results demonstrate that the method achieves a 3× improvement in training efficiency and accelerates on-device data collection by 2.4×. On general Android tasks from an open benchmark, it attains a 20% relative increase in task success rate compared to state-of-the-art online fine-tuning approaches, establishing new performance benchmarks for MLLM-based mobile control.

📝 Abstract
On-device control agents, especially on mobile devices, operate the device to fulfill users' requests, enabling seamless and intuitive interactions. Integrating Multimodal Large Language Models (MLLMs) into these agents enhances their ability to understand and execute complex commands, thereby improving user experience. However, fine-tuning MLLMs for on-device control presents significant challenges due to limited data availability and inefficient online training processes. This paper introduces DistRL, a novel framework designed to enhance the efficiency of online RL fine-tuning for mobile-device control agents. DistRL employs centralized training and decentralized data acquisition to ensure efficient fine-tuning in the context of dynamic online interactions. Additionally, the framework is backed by a tailor-made RL algorithm, which effectively balances exploration with the prioritized utilization of collected data to ensure stable and robust training. Experiments show that, on average, DistRL delivers a 3× improvement in training efficiency and enables training data collection 2.4× faster than the leading synchronous multi-machine methods. Notably, after training, DistRL achieves a 20% relative improvement in success rate compared to state-of-the-art methods on general Android tasks from an open benchmark, significantly outperforming existing approaches while maintaining the same training time. These results validate DistRL as a scalable and efficient solution, offering substantial improvements in both training efficiency and agent performance for real-world, in-the-wild device control tasks.
Problem

Research questions and friction points this paper is trying to address.

Data scarcity for fine-tuning MLLM-based on-device control agents.
Inefficient online RL training processes.
Slow training and low task success rates of existing approaches.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asynchronous distributed reinforcement learning
Centralized training with decentralized data collection
Tailor-made RL algorithm
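The contributions above combine a central learner, decentralized actors, and prioritized experience replay. A minimal, hypothetical Python sketch of that architecture is shown below; the class names, the reward-based priority, and the proportional sampling scheme (`alpha` exponent) are illustrative assumptions, not DistRL's actual implementation.

```python
import random
import threading
import queue

class PrioritizedReplayBuffer:
    """Proportional prioritized replay (illustrative sketch only)."""
    def __init__(self, capacity=1000, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priority skews sampling
        self.data = []          # list of (scaled_priority, transition)
        self.pos = 0

    def add(self, transition, priority=1.0):
        entry = (priority ** self.alpha, transition)
        if len(self.data) < self.capacity:
            self.data.append(entry)
        else:
            self.data[self.pos] = entry  # overwrite oldest slot
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample transitions with probability proportional to priority.
        weights = [p for p, _ in self.data]
        idxs = random.choices(range(len(self.data)), weights=weights, k=batch_size)
        return [self.data[i][1] for i in idxs]

def actor(actor_id, out_queue, n_steps):
    # Decentralized data collection: each actor interacts with its own
    # (here simulated) environment and ships transitions to the learner.
    for step in range(n_steps):
        transition = {"actor": actor_id, "step": step, "reward": random.random()}
        out_queue.put(transition)

def learner(in_queue, buffer, n_expected):
    # Centralized policy updates: drain incoming transitions and store
    # them with a reward-derived priority (an assumed heuristic).
    for _ in range(n_expected):
        t = in_queue.get()
        buffer.add(t, priority=t["reward"] + 1e-3)

# Run four asynchronous actors feeding one learner.
q = queue.Queue()
buf = PrioritizedReplayBuffer()
actors = [threading.Thread(target=actor, args=(i, q, 50)) for i in range(4)]
learn = threading.Thread(target=learner, args=(q, buf, 200))
for a in actors:
    a.start()
learn.start()
for a in actors:
    a.join()
learn.join()
batch = buf.sample(32)
```

Because actors never block on the learner's update step, collection throughput scales with the number of devices, which is the property the paper's asynchronous design exploits.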