Online Context Learning for Socially-compliant Navigation

📅 2024-06-17
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
📄 PDF
🤖 AI Summary
In social robot navigation, human behavior and environmental conditions are dynamic and context-dependent, which makes them hard to model: conventional learning-based methods generalize poorly across scenarios and struggle to maintain social compliance over long-term deployments. To address this, the authors propose a two-layer online context learning framework: a bottom layer uses deep reinforcement learning to generate basic navigation commands, while an upper layer uses online robot learning to socialize those commands, enabling real-time adaptation to new social environments. Experiments in a community-wide simulator show that the method outperforms state-of-the-art baselines, improving on them by 8% in the most challenging scenarios. The source code, the data used, and the pre-training tools are publicly released.

📝 Abstract
Robot social navigation needs to adapt to different human factors and environmental contexts. However, since these factors and contexts are difficult to predict and cannot be exhaustively enumerated, traditional learning-based methods have difficulty in ensuring the social attributes of robots in long-term and cross-environment deployments. This letter introduces an online context learning method that aims to empower robots to adapt to new social environments online. The proposed method adopts a two-layer structure. The bottom layer is built using a deep reinforcement learning-based method to ensure the output of basic robot navigation commands. The upper layer is implemented using an online robot learning-based method to socialize the control commands suggested by the bottom layer. Experiments using a community-wide simulator show that our method outperforms the state-of-the-art ones. Experimental results in the most challenging scenarios show that our method improves the performance of the state-of-the-art by 8%. The source code of the proposed method, the data used, and the tools for the pre-training step are publicly available at https://github.com/Nedzhaken/SOCSARL-OL.
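The two-layer structure described in the abstract can be sketched as a simple control loop: a bottom layer proposes a navigation command, and an upper layer decides whether to "socialize" (adjust) it, with an online update hook for adapting to the current social context. All class and parameter names below are illustrative assumptions, not taken from the paper's released code.

```python
# Minimal sketch of a two-layer navigation structure, assuming the bottom
# layer is a learned policy and the upper layer applies an adaptable
# social rule. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Command:
    linear: float   # forward velocity (m/s)
    angular: float  # rotation rate (rad/s)

class BottomLayer:
    """Stands in for the DRL navigation policy."""
    def propose(self, observation: dict) -> Command:
        # A real policy would run a trained network; here we simply
        # emit a constant goal-directed command.
        return Command(linear=0.5, angular=0.0)

class SocialLayer:
    """Stands in for the online-learned social module."""
    def __init__(self, comfort_distance: float = 1.2):
        self.comfort_distance = comfort_distance

    def socialize(self, cmd: Command, observation: dict) -> Command:
        # If a pedestrian is inside the comfort radius, slow down.
        nearest = min(observation.get("pedestrian_distances",
                                      [float("inf")]))
        if nearest < self.comfort_distance:
            return Command(linear=cmd.linear * 0.3, angular=cmd.angular)
        return cmd

    def update(self, observation: dict, outcome: float) -> None:
        # Online adaptation hook: widen the comfort radius after a
        # negatively rated interaction. Illustrative rule only.
        if outcome < 0:
            self.comfort_distance += 0.1

def control_step(bottom: BottomLayer, social: SocialLayer,
                 observation: dict) -> Command:
    """One tick of the two-layer loop: propose, then socialize."""
    return social.socialize(bottom.propose(observation), observation)
```

The key design point mirrored here is the separation of concerns: the bottom layer never needs retraining when the social context shifts, because only the upper layer's parameters are updated online.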
Problem

Research questions and friction points this paper is trying to address.

Adapting robot navigation to unpredictable human and environmental factors.
Ensuring social compliance in long-term, cross-environment robot deployments.
Improving robot navigation performance in challenging social scenarios.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online context learning for robot navigation
Two-layer structure combining a deep reinforcement learning bottom layer with an online learning upper layer
Publicly available source code and tools
Authors
Iaroslav Okunevich
UTBM, CIAD UMR 7533, F-90010 Belfort, France
Alexandre Lombard
UTBM, CIAD UMR 7533, F-90010 Belfort, France
T. Krajník
Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czechia
Yassine Ruichek
UTBM, CIAD UMR 7533, F-90010 Belfort, France
Zhi Yan
Teacher-researcher @ ENSTA - Institut Polytechnique de Paris
Mobile Robotics, Chronorobotics