🤖 AI Summary
This study addresses the limitations of existing dual-arm teleoperation systems, which often fail to accurately interpret user intent and task structure, resulting in untimely and imprecise assistance that degrades both performance and user experience. To overcome this, we propose SUBTA, a novel framework that integrates learning-based intent estimation, scene graph driven task planning, and context-aware motion assistance into a unified, structured approach for assembly tasks. This integration enables predictable, high-trust human-robot collaboration. Experimental results demonstrate that SUBTA significantly improves positional accuracy (p<0.001, d=1.18) and orientation accuracy (p<0.001, d=1.75), while substantially reducing mental workload (p=0.002, d=1.34). Users reported that system interventions were more predictable and visual feedback was clearer, underscoring the effectiveness of the proposed approach.
📝 Abstract
In human-robot collaboration, shared autonomy enhances human performance through precise, intuitive support. Effective robotic assistance requires accurately inferring human intent and understanding task structure to determine the optimal timing and method of support. In this paper, we present SUBTA, a supported teleoperation system for bimanual assembly that couples learned intent estimation, scene-graph-based task planning, and context-dependent motion assistance. We validate our approach through a user study (N=12) comparing standard teleoperation, motion support only, and SUBTA. Linear mixed-effects analysis revealed that SUBTA significantly outperformed standard teleoperation in position accuracy (p<0.001, d=1.18) and orientation accuracy (p<0.001, d=1.75), while reducing mental demand (p=0.002, d=1.34). Post-experiment ratings indicate that SUBTA's visual feedback was clearer and more trustworthy, and its interventions more predictable. These results demonstrate that SUBTA substantially improves both task performance and user experience in teleoperation.