🤖 AI Summary
This work addresses the challenge of safe navigation and dynamic obstacle avoidance for visual navigation policies in complex real-world environments. To this end, we propose an alignment optimization method grounded in counterfactual human preferences. Our approach integrates counterfactual trajectory generation with pairwise human preference annotations, enabling fine-tuning of navigation policies through preference aggregation to better align with human intuitions about safe obstacle avoidance. Evaluated on the SCAND dataset, our method reduces near-collision incidents by 49.7%. When deployed on a real quadrupedal robot, it achieves a 24.4% improvement in goal-reaching success rate and a 45.7% reduction in collisions and human interventions, substantially enhancing both navigation safety and robustness.
📝 Abstract
Visuomotor navigation policies have shown strong perception-action coupling for embodied agents, yet they often struggle with safe navigation and dynamic obstacle avoidance in complex real-world environments. We introduce CHOP, a novel approach that leverages Counterfactual Human Preference Labels to align visuomotor navigation policies with human intuitions of safety and obstacle avoidance. In CHOP, for each visual observation, the robot's executed trajectory is included among a set of counterfactual navigation trajectories: alternative trajectories the robot could have followed under identical conditions. Human annotators provide pairwise preference labels over these trajectories based on anticipated outcomes such as collision risk and path efficiency. These aggregated preferences are then used to fine-tune visuomotor navigation policies, aligning their behavior with human navigation preferences. Experiments on the SCAND dataset show that visuomotor navigation policies fine-tuned with CHOP reduce near-collision events by 49.7%, decrease deviation from human-preferred trajectories by 45.0%, and increase average obstacle clearance by 19.8%, averaged across multiple state-of-the-art models and compared to their pretrained baselines. These improvements transfer to real-world deployments on a Ghost Robotics Vision60 quadruped, where CHOP-aligned policies improve goal success rates by 24.4%, increase minimum obstacle clearance by 6.8%, reduce collision and intervention events by 45.7%, and improve normalized path completion by 38.6%, averaged across navigation scenarios and compared to their pretrained baselines. Our results highlight the value of counterfactual preference supervision in bridging the gap between large-scale visuomotor policies and human-aligned, safety-aware embodied navigation.
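The pairwise preference fine-tuning described above can be sketched with a Bradley-Terry style ranking objective, a common choice for learning from pairwise comparisons (this is an illustrative assumption; the abstract does not specify CHOP's exact loss or aggregation scheme, and the scalar scores and function names below are hypothetical):

```python
import numpy as np

def bt_preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Bradley-Terry negative log-likelihood that the human-preferred
    trajectory's policy score exceeds the rejected trajectory's score.

    Equivalent to -log(sigmoid(score_preferred - score_rejected)),
    computed in a numerically stable form.
    """
    margin = score_preferred - score_rejected
    return float(np.log1p(np.exp(-margin)))

def aggregate_preference_loss(pairs) -> float:
    """Average the pairwise loss over all annotated preference pairs
    collected for one observation's counterfactual trajectory set."""
    return float(np.mean([bt_preference_loss(w, l) for w, l in pairs]))

# Hypothetical scores a policy head assigns to counterfactual
# trajectories under the same visual observation: each pair is
# (preferred trajectory score, rejected trajectory score).
pairs = [(1.2, -0.4), (0.8, 0.1), (2.0, -1.5)]
print(aggregate_preference_loss(pairs))
```

Minimizing this aggregated loss pushes the policy to score human-preferred (e.g., collision-free, efficient) trajectories above their rejected counterparts, which is one standard way to realize "fine-tuning through preference aggregation."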