🤖 AI Summary
Real-world online policy learning faces significant challenges, including limited scalability of data collection, complex deployment across heterogeneous robots, and low training efficiency over long time horizons. This work proposes USER, a novel system that treats physical robots as first-class hardware resources on par with GPUs. By introducing unified hardware abstraction, an adaptive communication plane, and a persistent cache-aware buffering mechanism, USER establishes a fully asynchronous, cloud-edge collaborative framework for sustained online learning. The system enables efficient co-training of multimodal policies and large models within a unified pipeline, supporting automatic scheduling of heterogeneous robots, distributed data stream processing, and streaming weight synchronization. Experiments demonstrate that USER achieves exceptional scalability, robustness, and stable long-term learning performance in both simulation and real-world environments.
📝 Abstract
Online policy learning directly in the physical world is a promising yet challenging direction for embodied intelligence. Unlike simulation, real-world systems cannot be arbitrarily accelerated, cheaply reset, or massively replicated, which makes scalable data collection, heterogeneous deployment, and effective long-horizon training difficult. These challenges suggest that real-world policy learning is not only an algorithmic issue but fundamentally a systems problem. We present USER, a Unified and extensible SystEm for Real-world online policy learning. USER treats physical robots as first-class hardware resources alongside GPUs through a unified hardware abstraction layer, enabling automatic discovery, management, and scheduling of heterogeneous robots. To address cloud-edge communication, USER introduces an adaptive communication plane with tunneling-based networking, distributed data channels for traffic localization, and streaming-multiprocessor-aware weight synchronization to regulate GPU-side overhead. On top of this infrastructure, USER organizes learning as a fully asynchronous framework with a persistent, cache-aware buffer, enabling efficient long-horizon experiments with robust crash recovery and reuse of historical data. In addition, USER provides extensible abstractions for rewards, algorithms, and policies, supporting online imitation or reinforcement learning of CNN/MLP policies, generative policies, and large vision-language-action (VLA) models within a unified pipeline. Results in both simulation and the real world show that USER supports multi-robot coordination, heterogeneous manipulators, edge-cloud collaboration with large models, and long-running asynchronous training, offering a unified and extensible systems foundation for real-world online policy learning.
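The abstract's core systems idea, a fully asynchronous actor/learner split backed by a persistent buffer that survives crashes and lets a restarted learner reuse historical data, can be sketched in a few lines. This is a minimal illustration under assumed names (`PersistentReplayBuffer`, `actor`, `learner` are hypothetical and not USER's actual API); real deployments would persist incrementally and run the robot loop on an edge device rather than a thread.

```python
import pickle, queue, random, tempfile, threading
from pathlib import Path

class PersistentReplayBuffer:
    """Hypothetical buffer that checkpoints to disk so a restarted
    learner can recover and reuse previously collected transitions."""
    def __init__(self, path, capacity=10_000):
        self.path = Path(path)
        self.capacity = capacity
        self.data = []
        if self.path.exists():            # resume from a prior run
            self.data = pickle.loads(self.path.read_bytes())

    def add(self, transition):
        self.data.append(transition)
        self.data = self.data[-self.capacity:]

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

    def checkpoint(self):
        self.path.write_bytes(pickle.dumps(self.data))

def actor(out_q, n_steps):
    # Stand-in for a robot episode loop: emit (obs, action, reward) tuples.
    for t in range(n_steps):
        out_q.put((t, t % 3, 1.0))
    out_q.put(None)                       # end-of-stream sentinel

def learner(in_q, buffer):
    # Consumes transitions at its own pace: collection and updates
    # are decoupled, so neither side blocks the other for long.
    updates = 0
    while (item := in_q.get()) is not None:
        buffer.add(item)
        if len(buffer.data) >= 4:
            buffer.sample(4)              # a gradient step would go here
            updates += 1
    buffer.checkpoint()
    return updates

q = queue.Queue()
ckpt = Path(tempfile.mkdtemp()) / "buffer.pkl"
buf = PersistentReplayBuffer(ckpt)
t = threading.Thread(target=actor, args=(q, 20))
t.start()
n_updates = learner(q, buf)
t.join()

# A fresh buffer pointed at the same checkpoint recovers all data,
# mimicking crash recovery and reuse of historical experience.
recovered = PersistentReplayBuffer(ckpt)
print(len(recovered.data), n_updates)
```

The queue stands in for the paper's distributed data channels; the checkpoint file stands in for the cache-aware persistent store. The key property illustrated is that the learner's state outlives the process that produced it.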