Uncertainty-Aware Shared Autonomy System with Hierarchical Conservative Skill Inference

📅 2023-12-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing shared autonomy imitation learning approaches neglect operator cognitive load and the risks of delayed or erroneous interventions. Method: This paper proposes an uncertainty-aware shared autonomy framework that performs hierarchical skill uncertainty inference—jointly modeling environmental and learning uncertainties at an abstract level—to enable conservative task skill inference from human demonstrations and corrective feedback, thereby mitigating fatigue and errors induced by judgment inaccuracies and intervention latency. Contribution/Results: It presents the first system-level shared autonomy design for multi-configuration robots, integrating imitation learning, uncertainty modeling, hierarchical skill representation, and conservative policy reasoning within a closed-loop motion control architecture. Evaluated on dynamic-disturbance scenarios involving pouring and pick-and-place tasks, the framework significantly improves operational stability, enhances interaction robustness, and demonstrates superior cross-scenario generalization capability.
📝 Abstract
Shared autonomy imitation learning, in which robots share a workspace with humans during learning, enables correct actions in unvisited states and the effective resolution of compounding errors through expert corrections. However, it demands continuous human attention and supervision to guide the demonstrations, and it does not account for the risks associated with human judgment errors and delayed interventions. This can lead to high levels of fatigue for the demonstrator and to additional errors. In this work, we propose an uncertainty-aware shared autonomy system that enables the robot to infer conservative task skills, accounting for environmental uncertainties, while learning from expert demonstrations and corrections. To enhance generalization and scalability, we introduce a hierarchical skill uncertainty inference framework that operates at more abstract levels. We apply this to robot motion to promote more stable interaction. Although shared autonomy systems have demonstrated strong results in recent research and play a critical role, specific system design details have remained elusive. This paper provides a detailed design proposal for a shared autonomy system that accommodates various robot configurations. Furthermore, we experimentally demonstrate the system's ability to learn operational skills, even in dynamic environments with interference, through pouring and pick-and-place tasks. Our code will be released soon.
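The skill uncertainty inference described in the abstract is commonly realized as disagreement among an ensemble of policy heads trained on the same demonstrations; the sketch below illustrates that idea only (the paper's exact estimator is not specified here, and all names are hypothetical):

```python
import numpy as np

def ensemble_uncertainty(policy_heads, state):
    """Estimate epistemic uncertainty as disagreement among policy heads.

    policy_heads: list of callables mapping a state to an action vector.
    Returns (mean_action, uncertainty), where uncertainty is the mean
    per-dimension standard deviation of the predicted actions.
    """
    actions = np.stack([head(state) for head in policy_heads])  # (K, act_dim)
    mean_action = actions.mean(axis=0)
    uncertainty = float(actions.std(axis=0).mean())
    return mean_action, uncertainty
```

When all heads agree the uncertainty collapses to zero and the robot can act autonomously; large disagreement signals an unvisited or disturbed state where conservative behavior, or a human correction, is warranted.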
Problem

Research questions and friction points this paper is trying to address.

Mitigates covariate-shift errors in shared-autonomy imitation learning
Reduces operator cognitive load and intervention risks
Improves task success in dynamic scenes through uncertainty awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical policy for conservative skill inference
Uncertainty-aware behavior modulation in real-time
Open-source VR-teleoperation for multi-configuration manipulators
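The "uncertainty-aware behavior modulation" bullet can be pictured as shifting control authority between the autonomous policy and the human operator as uncertainty grows. A minimal sketch, assuming a simple linear blending rule (not necessarily the paper's; all names are hypothetical):

```python
import numpy as np

def blend_control(robot_action, human_action, uncertainty,
                  threshold=0.2, scale=0.1):
    """Shift control authority toward the human as uncertainty grows.

    Below `threshold` the robot acts fully autonomously; authority
    ramps linearly to the human over `scale` units of uncertainty.
    """
    alpha = float(np.clip((uncertainty - threshold) / scale, 0.0, 1.0))
    robot_action = np.asarray(robot_action, dtype=float)
    if human_action is None:
        # No correction available: act conservatively by damping motion
        # in proportion to the unresolved uncertainty.
        return (1.0 - alpha) * robot_action
    return (1.0 - alpha) * robot_action + alpha * np.asarray(human_action, dtype=float)
```

With uncertainty below the threshold the robot's command passes through unchanged; well above it, the human correction dominates, and with no human input the robot simply slows down rather than committing to an uncertain motion.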
Taewoo Kim
Social Robotics Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, Republic of Korea
Donghyung Kim
Field Robotics Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, Republic of Korea
Minsu Jang
Jaehong Kim