🤖 AI Summary
To address unstable policy execution in Vision-Language-Action (VLA) models for robotic manipulation, caused by the heterogeneous quality of human demonstration data, this paper proposes a decoupled real-time intervention framework that leaves the base policy untouched. The approach comprises three key contributions: (1) LIBERO-Elegant, the first benchmark that explicitly evaluates manipulation elegance; (2) a formulation of Implicit Task Constraints that decouples execution quality from task success; and (3) an Elegance Critic trained via offline Calibrated Q-Learning, enabling dynamic assessment and refinement of actions at critical decision points. Experiments demonstrate substantial improvements in execution quality in both simulation and real-world settings, with strong generalization to unseen tasks. The framework shifts control granularity from binary “task completion” to fine-grained “elegant completion,” advancing the fidelity and robustness of VLA-based robotic manipulation.
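To make the critic-training idea concrete, here is a minimal tabular sketch of the calibration intuition behind Calibrated Q-Learning: a standard TD update on offline transitions, with the learned estimate floored at a reference (behavior-policy) value so conservatism cannot push it arbitrarily low. All names (`Q`, `V_ref`) and the tabular setting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def calibrated_q_update(Q, V_ref, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular TD update with a Cal-QL-style calibration floor.

    Q      : (n_states, n_actions) array of learned action values
    V_ref  : (n_states,) reference values of the behavior policy
    """
    # Standard TD target from an offline transition (s, a, r, s')
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    # Calibration: keep the conservative estimate at or above the
    # reference value, so the critic stays usable for ranking actions
    Q[s, a] = max(Q[s, a], V_ref[s])
    return Q

# Toy usage: one update on a 2-state, 2-action problem
Q = np.zeros((2, 2))
V_ref = np.array([0.5, 0.0])
Q = calibrated_q_update(Q, V_ref, s=0, a=0, r=1.0, s_next=1)
```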
📝 Abstract
Vision-Language-Action (VLA) models have enabled notable progress in general-purpose robotic manipulation, yet their learned policies often exhibit variable execution quality. We attribute this variability to the mixed-quality nature of human demonstrations, where the implicit principles that govern how actions should be carried out are only partially satisfied. To address this challenge, we introduce the LIBERO-Elegant benchmark with explicit criteria for evaluating execution quality. Using these criteria, we develop a decoupled refinement framework that improves execution quality without modifying or retraining the base VLA policy. We formalize Elegant Execution as the satisfaction of Implicit Task Constraints (ITCs) and train an Elegance Critic via offline Calibrated Q-Learning to estimate the expected quality of candidate actions. At inference time, a Just-in-Time Intervention (JITI) mechanism monitors critic confidence and intervenes only at decision-critical moments, providing selective, on-demand refinement. Experiments on LIBERO-Elegant and real-world manipulation tasks show that the learned Elegance Critic substantially improves execution quality, even on unseen tasks. The proposed framework enables robotic control that values not only whether tasks succeed, but also how they are performed.
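The inference-time intervention described above can be sketched as a simple gating rule: the critic scores the policy's proposed action against a set of candidates, and the policy is overridden only when the estimated quality gap exceeds a threshold. The function name `jiti_step`, the scalar action encoding, and the threshold `tau` are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def jiti_step(policy_action, candidates, critic, tau=0.2):
    """Return (action_to_execute, intervened).

    policy_action : action proposed by the frozen base VLA policy
    candidates    : alternative actions to consider at this step
    critic        : callable mapping an action to an elegance score
    tau           : quality-gap threshold for triggering intervention
    """
    q_policy = critic(policy_action)
    scores = [critic(a) for a in candidates]
    best = int(np.argmax(scores))
    # Intervene only at decision-critical moments: when some candidate
    # is estimated to be clearly better than the policy's own action
    if scores[best] - q_policy > tau:
        return candidates[best], True   # refined action replaces policy's
    return policy_action, False         # base policy acts unchanged
```

The design point here is selectivity: because the override fires only when the critic's estimated gap is large, the base policy handles the vast majority of steps and the critic never needs to produce actions on its own.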