🤖 AI Summary
This work addresses the challenge that human instructions, often ambiguous or underspecified, hinder robots from generating physically feasible and collaboratively effective behaviors. To this end, we propose a replanning framework for human-robot collaboration that integrates a vision-language model with a dual semantic-physical correction mechanism. The framework verifies task logical consistency and physical feasibility prior to execution and detects and rectifies failed actions afterward, enabling robust responses to ambiguous instructions and interactive replanning. By jointly constraining semantic interpretation and physical execution, our approach effectively mitigates hallucinations inherent in vision-language models and enhances failure prediction and recovery capabilities. Extensive experiments—conducted in both simulation and on an upper-body humanoid robot performing tasks such as assembly and tool preparation—demonstrate that our framework significantly outperforms non-corrective baselines while maintaining practicality and effectiveness.
📝 Abstract
Human-Robot Collaboration (HRC) plays an important role in assembly tasks by enabling robots to plan and adjust their motions based on interactive, real-time human instructions. However, such instructions are often linguistically ambiguous and underspecified, making it difficult to generate physically feasible and cooperative robot behaviors. To address this challenge, many studies have applied Vision-Language Models (VLMs) to interpret high-level instructions and generate corresponding actions. Nevertheless, VLM-based approaches still suffer from hallucinated reasoning and an inability to anticipate physical execution failures. To address these challenges, we propose an HRC framework that augments VLM-based reasoning with a dual-correction mechanism: an internal correction model that verifies logical consistency and task feasibility prior to action execution, and an external correction model that detects and rectifies physical failures through post-execution feedback. Simulation ablation studies demonstrate that the proposed method improves the success rate compared to baselines without correction models. Our real-world experiments on collaborative assembly tasks, supported by object fixation or tool preparation performed by an upper-body humanoid robot, further confirm the framework's effectiveness in enabling interactive replanning across different collaborative tasks in response to human instructions, validating its practical feasibility.