🤖 AI Summary
Existing embodied manipulation policies often execute inefficiently because they mimic the temporal rhythm of human demonstrations, while current acceleration methods typically require policy retraining or costly online interactions, limiting their scalability. This work proposes Speedup Patch (SuP), a lightweight, policy-agnostic, plug-and-play acceleration framework that uses only offline data: an external scheduler adaptively downsamples redundant action segments. SuP achieves, for the first time, general-purpose acceleration in a purely offline setting without policy retraining. It introduces a novel safety proxy based on state deviations predicted by a world model and formulates scheduler optimization as a constrained Markov decision process (CMDP). Experiments demonstrate that SuP achieves an average speedup of 1.8× across the Libero and Bigym simulation benchmarks as well as real-world tasks, while preserving the original task success rates.
📝 Abstract
While current embodied policies exhibit remarkable manipulation skills, their execution remains unsatisfactorily slow because they inherit the slow pacing of human demonstrations. Existing acceleration methods typically require policy retraining or costly online interactions, limiting their scalability to large-scale foundation models. In this paper, we propose Speedup Patch (SuP), a lightweight, policy-agnostic framework that enables plug-and-play acceleration using only offline data. SuP introduces an external scheduler that adaptively downsamples the action chunks produced by embodied policies to eliminate redundancy. Specifically, we formalize the optimization of the scheduler as a Constrained Markov Decision Process (CMDP) that maximizes efficiency without compromising task performance. Since direct success evaluation is infeasible in offline settings, SuP introduces world-model-based state deviation as a surrogate metric to enforce safety constraints. By leveraging a learned world model as a virtual evaluator that predicts counterfactual trajectories, the scheduler can be optimized via offline reinforcement learning. Empirical results on simulation benchmarks (Libero, Bigym) and real-world tasks validate that SuP achieves an overall 1.8× execution speedup for diverse policies while maintaining their original success rates.
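To make the safety-proxy idea concrete, here is a minimal, hypothetical sketch of how a scheduler could gate downsampling of an action chunk using world-model state deviation. The paper's actual scheduler is learned via offline RL under a CMDP; below it is replaced by a hand-rolled search over candidate strides, and the world model is a toy absolute-position model. All function names (`rollout`, `schedule_chunk`) and the deviation threshold `eps` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rollout(world_model, state, actions):
    """Use the learned world model to predict the final state
    reached after executing a sequence of actions."""
    for a in actions:
        state = world_model(state, a)
    return state

def schedule_chunk(world_model, state, chunk, strides=(4, 2, 1), eps=0.05):
    """Pick the largest downsampling stride whose predicted final-state
    deviation from the full chunk stays within eps (the safety proxy).
    Falls back to the unmodified chunk if no stride is safe."""
    ref = rollout(world_model, state, chunk)  # counterfactual reference
    for k in strides:
        sub = chunk[::k]                      # keep every k-th action
        dev = np.linalg.norm(rollout(world_model, state, sub) - ref)
        if dev <= eps:
            return sub, k
    return chunk, 1

# Toy world model (assumption): actions are absolute position targets,
# so the next state is simply the commanded position.
wm = lambda s, a: a

state = np.zeros(2)
# A chunk with redundant, repeated waypoints (mimicking slow demos).
chunk = np.array([[0.1, 0.0]] * 4 + [[0.2, 0.0]] * 4)
sub, k = schedule_chunk(wm, state, chunk)
# Redundant repeats are skipped: 8 actions reduce to 2 at stride 4,
# with zero predicted state deviation.
```

Because the repeated waypoints contribute nothing to the predicted final state, the largest stride passes the deviation check and the chunk executes in a quarter of the steps, illustrating how redundancy removal need not perturb the trajectory outcome.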