🤖 AI Summary
Humanoid robots frequently lose stability and fall when operating on dynamic or deformable terrain due to perceptual degradation—including visual noise, sensor failures, and simulation-to-reality discrepancies. To address this, we propose VB-Com, a vision–blind collaborative control framework that dynamically switches between two control strategies based on online perception-confidence estimation: forward-looking vision-based planning is activated when perception is reliable; otherwise, the system seamlessly transitions to a purely proprioceptive strategy. Our contributions are threefold: (1) the first perception-confidence-driven adaptive gating fusion mechanism; (2) the first robust composite control integrating vision- and proprioception-based policies on high-degree-of-freedom humanoid robots; and (3) dual-policy networks trained via reinforcement learning with domain randomization in simulation. Experiments in complex environments—featuring dynamic obstacles, deformable ground, and severe visual noise—demonstrate a 47% improvement in task success rate and an 82% reduction in falling incidents.
📝 Abstract
The performance of legged locomotion is closely tied to the accuracy and comprehensiveness of state observations. Blind policies, which rely solely on proprioception, are considered highly robust due to the reliability of proprioceptive observations. However, these policies significantly limit locomotion speed and often require collisions with the terrain before adapting. In contrast, vision policies allow the robot to plan motions in advance and respond proactively to unstructured terrains through an online perception module. However, perception is often compromised by noisy real-world environments, potential sensor failures, and the limitations of current simulations in representing dynamic or deformable terrains. Humanoid robots, with their high degrees of freedom and inherently unstable morphology, are particularly susceptible to misguidance from deficient perception, which can result in falls or termination on challenging dynamic terrains. To leverage the advantages of both vision and blind policies, we propose VB-Com, a composite framework that enables humanoid robots to determine when to rely on the vision policy and when to switch to the blind policy under perceptual deficiency. We demonstrate that VB-Com effectively enables humanoid robots to traverse challenging terrains and obstacles despite perception deficiencies caused by dynamic terrains or perceptual noise.
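As a rough illustration, the confidence-driven switching described above might be sketched as follows. The class, the confidence threshold, and the hysteresis margin are assumptions made for illustration; the paper's actual network interfaces, confidence estimator, and switching rule are not specified here.

```python
# Hypothetical sketch of confidence-gated policy switching between a
# vision policy and a blind (proprioception-only) policy. All names and
# thresholds below are illustrative assumptions, not the paper's code.

class CompositeController:
    """Select between a vision policy and a blind policy each control step,
    based on an externally supplied perception-confidence estimate."""

    def __init__(self, vision_policy, blind_policy,
                 conf_threshold=0.7, hysteresis=0.1):
        self.vision_policy = vision_policy
        self.blind_policy = blind_policy
        self.conf_threshold = conf_threshold  # below this, fall back to blind
        self.hysteresis = hysteresis          # extra margin to switch back
        self.using_vision = True              # start trusting perception

    def select_policy(self, confidence):
        # Hysteresis prevents rapid toggling when confidence hovers
        # near the threshold.
        if self.using_vision and confidence < self.conf_threshold:
            self.using_vision = False
        elif (not self.using_vision
              and confidence > self.conf_threshold + self.hysteresis):
            self.using_vision = True
        return self.vision_policy if self.using_vision else self.blind_policy

    def act(self, proprio_obs, vision_obs, confidence):
        policy = self.select_policy(confidence)
        if policy is self.vision_policy:
            return policy(proprio_obs, vision_obs)
        return policy(proprio_obs)  # blind policy ignores vision input
```

A usage sketch with stand-in policies: with the defaults above, confidence 0.5 switches the controller to the blind policy, and it only returns to the vision policy once confidence exceeds 0.8.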