🤖 AI Summary
To address the reliance on external sensing, complex interaction modeling, and unnatural coordination in human–bipedal-robot collaborative load transportation, this paper proposes COLA: a proprioception-based, single-policy reinforcement learning framework that unifies leader and follower behavioral modeling without external sensors or explicit human intention models. COLA implicitly predicts object dynamics and human motion intent through closed-loop training. Its core innovation is the first integration of leader–follower roles into a single proprioceptive policy, enabling dynamic load sharing and whole-body compliant coordination. Simulation results show a 24.7% reduction in human effort; physical experiments validate performance across diverse terrains and objects; and user studies demonstrate a 27.4% average improvement over baselines. These findings confirm COLA's effectiveness, generalizability, and robustness in real-world collaborative manipulation tasks.
📝 Abstract
Human-humanoid collaboration shows significant promise for applications in healthcare, domestic assistance, and manufacturing. While compliant human-robot collaboration has been extensively developed for robotic arms, compliant human-humanoid collaboration remains largely unexplored due to humanoids' complex whole-body dynamics. In this paper, we propose a proprioception-only reinforcement learning approach, COLA, that combines leader and follower behaviors within a single policy. The model is trained in a closed-loop environment with dynamic object interactions to implicitly predict object motion patterns and human intentions, enabling compliant collaboration that maintains load balance through coordinated trajectory planning. We evaluate our approach through comprehensive simulation and real-world experiments on collaborative carrying tasks, demonstrating the effectiveness, generalization, and robustness of our model across various terrains and objects. Simulation experiments demonstrate that our model reduces human effort by 24.7% compared to baseline approaches while maintaining object stability. Real-world experiments validate robust collaborative carrying across different object types (boxes, desks, stretchers, etc.) and movement patterns (straight-line, turning, slope climbing). Human user studies with 23 participants confirm an average improvement of 27.4% over baseline models. Our method enables compliant human-humanoid collaborative carrying without requiring external sensors or complex interaction models, offering a practical solution for real-world deployment.