🤖 AI Summary
To address the limited generalization across tasks, high cost of expert annotation, and insufficient reliability in human-robot collaboration inherent in Vision-Language-Action (VLA) models, this paper proposes the first bidirectional co-adaptive learning framework integrating VLA models with human experts. Our method unifies vision-language-action joint modeling, human-in-the-loop reinforcement learning, online fine-tuning, and real-time brain-computer interface (BCI) feedback to establish a closed-loop “demonstration–execution–feedback–optimization” cycle. With only a small number of expert demonstrations, cross-task operational success rates improve significantly. BCI experiments demonstrate that real-time intervention in low-speed action systems enhances execution efficiency by 23.6%, while simultaneously improving human operator skill acquisition. This work pioneers human-robot skill co-evolution, establishing a novel paradigm for human-AI co-manipulation in the era of foundation models.
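The closed-loop "demonstration–execution–feedback–optimization" cycle can be illustrated with a toy simulation. This is only a hedged sketch of the general idea, not the paper's method: the class names (`VLAPolicy`, `collaboration_round`), the scalar "skill" proxy, and all constants are illustrative assumptions.

```python
import random

random.seed(0)  # deterministic toy run

class VLAPolicy:
    """Toy stand-in for a VLA model: a scalar 'skill' drives success odds.

    (Illustrative only -- not the paper's actual model or API.)
    """
    def __init__(self, skill=0.3):
        self.skill = skill

    def act(self):
        # Execution step: succeed with probability equal to current skill.
        return random.random() < self.skill

    def finetune(self, corrections):
        # Optimization step: each expert correction nudges the policy upward
        # (hypothetical update rule, standing in for online fine-tuning).
        self.skill = min(1.0, self.skill + 0.05 * len(corrections))

def collaboration_round(policy, expert_budget=3):
    """One demonstration-execution-feedback-optimization cycle."""
    corrections = []
    for _ in range(expert_budget):
        if not policy.act():            # execution failed on this attempt
            corrections.append("fix")   # feedback: expert intervenes
    policy.finetune(corrections)        # optimization from expert feedback
    return policy.skill

policy = VLAPolicy()
for _ in range(10):
    skill = collaboration_round(policy)
print(round(skill, 2))  # skill rises above its 0.3 starting point
```

The point of the sketch is the loop structure: expert interventions are triggered only by the model's failures, so the expert's workload shrinks as the policy improves, mirroring the bidirectional co-adaptation described above.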
📝 Abstract
The emergence of vision-language-action (VLA) models has given rise to foundation models for robot manipulation. Although these models have achieved significant progress, their generalization in multi-task manipulation remains limited. This study proposes a VLA model-expert collaboration framework that leverages a limited number of expert actions to enhance VLA model performance. This approach reduces expert workload relative to fully manual operation while simultaneously improving the reliability and generalization of VLA models. Furthermore, manipulation data collected during collaboration can further refine the VLA model, while human participants concurrently enhance their own skills. This bi-directional learning loop boosts the overall performance of the collaboration system. Experimental results across various VLA models demonstrate the effectiveness of the proposed system in collaborative manipulation and learning, as evidenced by improved success rates across tasks. Additionally, validation using a brain-computer interface (BCI) indicates that the collaboration system enhances the efficiency of low-speed action systems by involving the VLA model during manipulation. These promising results pave the way for advancing human-robot interaction in the era of foundation models for robotics. (Project website: https://aoqunjin.github.io/Expert-VLA/)