🤖 AI Summary
VLA models suffer from high inference latency on edge devices (only 3–5 Hz), far below the 20–30 Hz required for real-time robotic control, due to memory bottlenecks induced by autoregressive decoding. To address this, we propose the first system-level acceleration framework tailored for real-time VLA inference on edge platforms. Our approach introduces cross-request pipelined scheduling, which reformulates VLA decoding as a macro-pipeline of micro-requests; pioneers a cross-request state-packed forward operator and a unified KV ring buffer to overcome GPU memory constraints; and jointly optimizes the heterogeneous prefill and decode phases via micro-batching. Evaluated on OpenVLA-7B, our framework achieves a 2.55× FPS improvement with zero retraining, enabling, for the first time on edge hardware, sustained operation above 20 Hz for real-time robotic control.
📝 Abstract
Vision-Language-Action (VLA) models have emerged as a unified paradigm for robotic perception and control, enabling emergent generalization and long-horizon task execution. However, their deployment in dynamic, real-world environments is severely hindered by high inference latency. While smooth robotic interaction requires control frequencies of 20 to 30 Hz, current VLA models typically operate at only 3–5 Hz on edge devices due to the memory-bound nature of autoregressive decoding. Existing optimizations often require extensive retraining or compromise model accuracy. To bridge this gap, we introduce ActionFlow, a system-level inference framework tailored for resource-constrained edge platforms. At the core of ActionFlow is a Cross-Request Pipelining strategy, a novel scheduler that redefines VLA inference as a macro-pipeline of micro-requests. The strategy intelligently batches memory-bound Decode phases with compute-bound Prefill phases across consecutive time steps to maximize hardware utilization. Furthermore, to support this scheduling, we propose a Cross-Request State-Packed Forward operator and a Unified KV Ring Buffer, which fuse fragmented memory operations into efficient dense computations. Experimental results demonstrate that ActionFlow achieves a 2.55× improvement in FPS on the OpenVLA-7B model without retraining, enabling real-time dynamic manipulation on edge hardware. Our work is available at https://anonymous.4open.science/r/ActionFlow-1D47.
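To make the scheduling idea concrete, here is a minimal sketch of cross-request pipelining. All names (`MicroRequest`, `CrossRequestScheduler`) are hypothetical illustrations, not the paper's actual API: each scheduling tick batches the compute-bound prefill of a newly arrived micro-request together with the memory-bound decode steps of requests already in flight, so neither phase leaves the hardware idle.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class MicroRequest:
    """One action-generation request issued at a control time step."""
    step: int                  # control time step that issued the request
    tokens_to_decode: int      # action tokens still to be generated
    prefilled: bool = False    # whether the prompt has been prefilled


class CrossRequestScheduler:
    """Toy macro-pipeline scheduler (illustrative only).

    On each tick, the prefill of the newest request is batched with the
    decode steps of all in-flight requests, overlapping the
    compute-bound and memory-bound phases across time steps.
    """

    def __init__(self):
        self.inflight: deque = deque()

    def submit(self, req: MicroRequest) -> None:
        self.inflight.append(req)

    def tick(self) -> dict:
        """Build one fused batch for this scheduling tick."""
        batch = {"prefill": [], "decode": []}
        for req in self.inflight:
            if not req.prefilled:
                batch["prefill"].append(req.step)
                req.prefilled = True
            elif req.tokens_to_decode > 0:
                batch["decode"].append(req.step)
                req.tokens_to_decode -= 1
        # Retire requests whose action chunk is fully decoded.
        while self.inflight and self.inflight[0].prefilled \
                and self.inflight[0].tokens_to_decode == 0:
            self.inflight.popleft()
        return batch
```

In this toy model, once request 0 has been prefilled, the tick that prefills request 1 simultaneously carries a decode step for request 0, which is the overlap the macro-pipeline exploits; the real system would additionally fuse these batches into dense GPU operators backed by the ring-buffer KV cache.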