🤖 AI Summary
This work addresses the challenges of unreliable state estimation and unstable task planning in human-robot collaborative structured assembly, which arise from perceptual noise and human interventions. To tackle these issues, the authors propose a dual-module framework that integrates vision-language models (VLMs) with domain knowledge. The framework employs a perception-to-symbolic state alignment mechanism to map RGB-D observations into symbolic assembly states, which are then validated against design specifications. Furthermore, a minimal-change replanning strategy is introduced to enable stable and efficient task adaptation and multi-robot allocation in response to human actions. Evaluated on a 27-component wooden structure assembly task, the approach achieves 97% accuracy in state synthesis and demonstrates significantly improved planning stability and task feasibility across diverse collaborative scenarios.
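To make the perception-to-symbolic alignment concrete, the minimal Python sketch below maps a set of component detections onto installed/uninstalled sets and validates them against a design specification. All names here (`DESIGN_SPEC`, `SymbolicState`, `synthesize_state`, the `beam_XX` IDs) are hypothetical stand-ins for illustration, not interfaces from the paper.

```python
from dataclasses import dataclass

# Hypothetical component IDs standing in for the design specification
# (the paper's actual spec format is not given).
DESIGN_SPEC = frozenset(f"beam_{i:02d}" for i in range(27))

@dataclass(frozen=True)
class SymbolicState:
    installed: frozenset
    uninstalled: frozenset

def synthesize_state(vlm_detections: set) -> SymbolicState:
    """Align raw VLM detections with the design spec to produce a
    verifiable symbolic assembly state."""
    # Design-grounded validation (simplified): keep only detections
    # that correspond to a component in the design specification.
    installed = frozenset(vlm_detections) & DESIGN_SPEC
    return SymbolicState(installed, DESIGN_SPEC - installed)

# Example: a noisy detection set containing one spurious label.
state = synthesize_state({"beam_00", "beam_05", "clamp"})
assert "clamp" not in state.installed
```

The validation step is what makes the symbolic state "verifiable" in the sense the summary describes: spurious VLM outputs that match no designed component are rejected rather than propagated into planning.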
📝 Abstract
Human-robot collaboration (HRC) in structured assembly requires reliable state estimation and adaptive task planning under noisy perception and human interventions. To address these challenges, we introduce a design-grounded, human-aware planning framework for human-robot collaborative structured assembly. The framework comprises two coupled modules. Module I, Perception-to-Symbolic State (PSS), employs vision-language model (VLM)-based agents to align RGB-D observations with design specifications and domain knowledge, synthesizing verifiable symbolic assembly states. It outputs validated installed and uninstalled component sets for online state tracking. Module II, Human-Aware Planning and Replanning (HPR), performs task-level multi-robot assignment and updates the plan only when the observed state deviates from the expected execution outcome. It applies a minimal-change replanning rule to selectively revise task assignments and preserve plan stability even under human interventions. We validate the framework on a 27-component timber-frame assembly task. The PSS module achieves 97% state synthesis accuracy, and the HPR module maintains feasible task progression across diverse HRC scenarios. Results indicate that integrating VLM-based perception with knowledge-driven planning improves the robustness of state estimation and task planning under dynamic conditions.
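A brief Python sketch of the minimal-change replanning rule, under assumed interfaces (the function names, plan representation, and robot team are hypothetical, not from the paper): the plan is revised only where the observed installed set deviates from the expected one, and every unaffected assignment is preserved.

```python
ROBOTS = ("robot_a", "robot_b")  # hypothetical two-robot team

def pick_robot(component: str) -> str:
    # Placeholder allocation policy: deterministic choice by name.
    return ROBOTS[sum(map(ord, component)) % len(ROBOTS)]

def minimal_change_replan(plan: dict, expected: set, observed: set) -> dict:
    """`plan` maps each pending component to an assigned robot;
    `expected`/`observed` are the installed-component sets predicted
    by the plan and reported by perception, respectively."""
    if observed == expected:
        return plan  # no deviation: leave the current plan untouched

    human_done = observed - expected   # human installed these early
    regressed = expected - observed    # expected but missing (e.g., removed)

    # Drop tasks the human already completed; keep all other assignments.
    new_plan = {c: r for c, r in plan.items() if c not in human_done}
    for comp in regressed:
        # Re-queue only the affected components.
        new_plan.setdefault(comp, pick_robot(comp))
    return new_plan
```

The design intent this sketch illustrates is plan stability: because unaffected assignments are never touched, a single human intervention cannot trigger a global reshuffle of the multi-robot task allocation.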