🤖 AI Summary
This work addresses the fundamental conflict between the high latency of semantic reasoning and the high-frequency control demands of deploying vision-language-action (VLA) models on edge devices. To resolve this, the authors propose Agile-VLA, a hierarchical asynchronous dual-stream architecture that decouples low-frequency perception (10 Hz) from high-frequency control (50 Hz). The approach introduces an Implicit Affordance Anchoring mechanism that directly maps geometric visual cues—such as centroids and rim keypoints—to parameterized action primitives, thereby eliminating reliance on high-latency semantic reasoning within the control loop. Evaluated on an NVIDIA Jetson Orin Nano platform with a 6-DoF robotic arm, Agile-VLA achieves few-shot pose correction of complex irregular objects using only five demonstrations, effectively mitigating the frequency-mismatch problem in edge robotics.
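The anchoring idea, as summarized above, reduces to a direct geometric mapping: extract a centroid and rim keypoints from vision, then fill in the parameters of an action primitive without any language-model call in the loop. The sketch below illustrates one plausible such mapping; the function name, the pivot-selection heuristic, and the primitive's parameterization are illustrative assumptions, not the paper's actual design.

```python
import math

def anchor_to_primitive(centroid, rim_keypoints):
    """Map geometric anchors (centroid + rim keypoints) to the parameters
    of a hypothetical tilt-and-reorient primitive.

    Illustrative only: the paper's actual parameterization is not given here.
    centroid:      (x, y) object centroid in the camera frame
    rim_keypoints: list of (x, y) points on the object's rim
    """
    cx, cy = centroid
    # Pivot contact: the rim keypoint farthest from the centroid,
    # a simple proxy for a stable edge to tilt the workpiece against.
    pivot = max(rim_keypoints, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    # Approach direction: from centroid toward the chosen pivot edge.
    yaw = math.atan2(pivot[1] - cy, pivot[0] - cx)
    return {"pivot": pivot, "approach_yaw": yaw}
```

Because the mapping is purely geometric, it runs in microseconds on the control side, which is what lets the closed loop avoid waiting on semantic inference.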
📝 Abstract
Deploying Vision-Language-Action (VLA) models on resource-constrained edge platforms encounters a fundamental conflict between high-latency semantic inference and the high-frequency control required for dynamic manipulation. To address this challenge, this paper presents Agile-VLA, a hierarchical framework designed for industrial pose reorientation tasks on edge devices such as the NVIDIA Jetson Orin Nano. The core innovation is an Implicit Affordance Anchoring mechanism that directly maps geometric visual cues, specifically centroid and rim keypoint anchors, into structured parametric action primitives, thereby substantially reducing reliance on high-latency semantic inference during closed-loop control. By decoupling perception (10 Hz) from control (50 Hz) via an asynchronous dual-stream architecture, the system effectively mitigates the frequency mismatch inherent in edge-based robot learning. Experimental results on a standard 6-DoF manipulator demonstrate that Agile-VLA achieves robust rectification of complex, irregular workpieces from only five demonstrations, exploiting extrinsic dexterity.
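The 10 Hz / 50 Hz decoupling described in the abstract amounts to two loops sharing a non-blocking "latest value" mailbox: perception publishes anchors at its own rate, and the control loop always reads the newest anchor without ever waiting on perception. The sketch below, a minimal assumption-laden illustration (class and function names are hypothetical, not from the paper), shows that pattern with Python threads:

```python
import threading
import time

class LatestAnchor:
    """Single-slot mailbox: control always reads the newest anchor
    without blocking on the slower perception stream."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def publish(self, value):
        with self._lock:
            self._value = value

    def latest(self):
        with self._lock:
            return self._value

def run_dual_stream(duration_s=0.3, perception_hz=10, control_hz=50):
    mailbox = LatestAnchor()
    control_reads = []

    def perception():
        t_end = time.monotonic() + duration_s
        frame = 0
        while time.monotonic() < t_end:
            frame += 1
            mailbox.publish({"frame": frame})  # stand-in for anchor extraction
            time.sleep(1.0 / perception_hz)

    def control():
        t_end = time.monotonic() + duration_s
        while time.monotonic() < t_end:
            control_reads.append(mailbox.latest())  # never blocks
            time.sleep(1.0 / control_hz)

    threads = [threading.Thread(target=perception),
               threading.Thread(target=control)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return control_reads
```

The key property is that a slow or delayed perception update never stalls the control loop; control simply reuses the most recent anchor, which is exactly what makes a 50 Hz loop sustainable alongside 10 Hz inference on an edge device.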