🤖 AI Summary
This work addresses the vulnerability of cloud robotics to network latency and jitter during high-frequency operation, which often leads to command starvation and unsafe execution. To mitigate this, the authors propose a cloud-edge collaborative framework that decouples execution frequency from network round-trip time by precomputing future motion trajectories with a world model in the cloud and streaming them into an edge-side buffer. The approach introduces an ε-tube verifier to rigorously bound motion errors and an adaptive horizon scaling mechanism that dynamically adjusts the prefetch depth to balance safety and efficiency. Under high-latency conditions, the method reduces idle time by over 60% and discards approximately 60% fewer predicted trajectories than a static caching baseline, significantly improving both the real-time performance and the robustness of robotic control.
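The decoupling idea above can be sketched in a few lines: a network thread fills an edge-side buffer with batches of cloud-predicted waypoints, while the control loop pops one waypoint per tick and never blocks on the round trip. The class and method names here are illustrative, not from the paper.

```python
import collections

class EdgeWaypointBuffer:
    """Hypothetical edge-side buffer: the cloud streams predicted
    waypoints in; the controller pops one per control tick, so the
    control rate no longer waits on the network round trip."""

    def __init__(self):
        self.queue = collections.deque()

    def push_batch(self, waypoints):
        # Called by the network thread when a cloud prediction arrives.
        self.queue.extend(waypoints)

    def pop(self):
        # Called by the control loop once per tick; None means starvation.
        return self.queue.popleft() if self.queue else None

buf = EdgeWaypointBuffer()
buf.push_batch([(0.0, 0.0), (0.1, 0.0), (0.2, 0.1)])  # one speculative batch
executed = []
while (wp := buf.pop()) is not None:
    executed.append(wp)  # in a real system this would drive the actuator
print(executed)
```

In a deployment the two call sites would run on separate threads with a lock or a thread-safe queue; the sketch keeps them sequential for clarity.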
📝 Abstract
Cloud robotics enables robots to offload high-dimensional motion planning and reasoning to remote servers. However, for continuous manipulation tasks requiring high-frequency control, network latency and jitter can severely destabilize the system, causing command starvation and unsafe physical execution.
To address this, we propose Speculative Policy Orchestration (SPO), a latency-resilient cloud-edge framework. SPO uses a cloud-hosted world model to pre-compute and stream future kinematic waypoints to a local edge buffer, decoupling execution frequency from network round-trip time. To mitigate unsafe execution caused by predictive drift, the edge node employs an $\epsilon$-tube verifier that strictly bounds kinematic execution errors. The framework is coupled with an Adaptive Horizon Scaling mechanism that dynamically expands or shrinks the speculative pre-fetch depth based on real-time tracking error.
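The two safety mechanisms can be illustrated with a minimal sketch, assuming a fixed tube radius and a simple doubling/halving policy for the horizon; the thresholds, the `EPS` value, and the function names are assumptions for illustration, not the paper's exact rules.

```python
import math

EPS = 0.05  # assumed tube radius (meters); not a value from the paper

def within_tube(predicted, observed, eps=EPS):
    """ε-tube check: accept a speculative waypoint only if the observed
    state stays within eps of the cloud's prediction."""
    return math.dist(predicted, observed) <= eps

def scale_horizon(horizon, tracking_err, eps=EPS, h_min=1, h_max=32):
    """Adaptive horizon scaling (illustrative policy): deepen the
    speculative prefetch when tracking is tight, shrink it when drift
    approaches the tube boundary."""
    if tracking_err < 0.5 * eps:
        horizon = min(h_max, horizon * 2)   # tight tracking: prefetch deeper
    elif tracking_err > 0.9 * eps:
        horizon = max(h_min, horizon // 2)  # near the boundary: be cautious
    return horizon

print(within_tube((0.0, 0.0), (0.01, 0.02)))  # small drift: accepted
print(scale_horizon(8, 0.01))                 # tight tracking: horizon grows
print(scale_horizon(8, 0.049))                # near boundary: horizon shrinks
```

The edge node would run `within_tube` on every executed waypoint and fall back to a safe stop (or a fresh cloud query) on a violation, while `scale_horizon` sets how many future waypoints the next cloud request should return.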
We evaluate SPO on continuous RLBench manipulation tasks under emulated network delays. Results show that even when deployed with learned models of modest accuracy, SPO reduces network-induced idle time by over 60% compared to blocking remote inference. Furthermore, SPO discards approximately 60% fewer cloud predictions than static caching baselines. Ultimately, SPO enables fluid, real-time cloud-robotic control while maintaining bounded physical safety.