🤖 AI Summary
To address safety and naturalness challenges in dense human-robot coexistence, this paper proposes a "future-aware" navigation paradigm that enables proactive collision avoidance by explicitly predicting multi-agent human trajectories and incorporating a scene-adaptive future-collision penalty. Methodologically, the authors introduce SocialNav, a photorealistic indoor social navigation benchmark comprising two datasets, Social-HM3D and Social-MP3D, which supports dynamic human modeling and evaluation across varying spatial densities. They further design Falcon, a reinforcement learning framework that jointly models multi-agent trajectory prediction and scene-aware decision-making. Evaluated on SocialNav, Falcon achieves a task success rate of 55% and a personal-space compliance rate of about 90%, outperforming both state-of-the-art learning-based and classical rule-based methods. To foster reproducibility, the authors release their code, datasets, and demonstration videos.
📝 Abstract
To navigate safely and efficiently in crowded spaces, robots should not only perceive the current state of the environment but also anticipate future human movements. In this paper, we propose a reinforcement learning architecture, namely Falcon, to tackle socially-aware navigation by explicitly predicting human trajectories and penalizing actions that block future human paths. To facilitate realistic evaluation, we introduce a novel SocialNav benchmark containing two new datasets, Social-HM3D and Social-MP3D. This benchmark offers large-scale photo-realistic indoor scenes populated with a reasonable number of human agents based on scene area, incorporating natural human movements and trajectory patterns. We conduct a detailed experimental analysis of a state-of-the-art learning-based method and two classic rule-based path-planning algorithms on the new benchmark. The results demonstrate the importance of future prediction, and our method achieves the best task success rate of 55% while maintaining about 90% personal space compliance. We will release our code and datasets. Videos of demonstrations can be viewed at https://zeying-gong.github.io/projects/falcon/ .