EmoBipedNav: Emotion-aware Social Navigation for Bipedal Robots with Deep Reinforcement Learning

📅 2025-03-16
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses safe navigation of bipedal robots in dynamic social environments. We propose a two-stage deep reinforcement learning (DRL) framework integrating pedestrian emotion perception and gait dynamics constraints. Our key contributions are: (1) the first quantitative modeling of emotion-induced "discomfort zones" as formal navigation constraints; and (2) a hybrid action planning and training mechanism that combines a simplified motion model for real-time control with full-order dynamical embedding to enhance physical feasibility. The method employs sequential LiDAR grid maps, emotion-driven discomfort zone extraction, and a ROM-parameterized policy network. Extensive simulations and real-robot experiments demonstrate significant improvements over model predictive control (MPC) and state-of-the-art DRL baselines in social safety, interaction naturalness, and locomotion stability. Code and real-robot demonstration videos are publicly available.
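The summary describes emotion-induced "discomfort zones" treated as formal navigation constraints. A minimal sketch of that idea is a per-emotion zone radius around each pedestrian that planned waypoints must not enter. The `DISCOMFORT_RADIUS` values and the function `violates_discomfort_zone` below are illustrative assumptions, not the paper's actual model:

```python
import math

# Hypothetical emotion -> discomfort-zone radius mapping (metres).
# The paper models emotion-induced "discomfort zones" as navigation
# constraints; the exact radii here are illustrative assumptions only.
DISCOMFORT_RADIUS = {
    "happy": 0.5,
    "neutral": 0.8,
    "sad": 1.0,
    "angry": 1.3,
}

def violates_discomfort_zone(waypoint, pedestrian_pos, emotion):
    """Return True if a planned waypoint falls inside the pedestrian's
    emotion-scaled discomfort zone (a simple circular constraint)."""
    dx = waypoint[0] - pedestrian_pos[0]
    dy = waypoint[1] - pedestrian_pos[1]
    return math.hypot(dx, dy) < DISCOMFORT_RADIUS[emotion]

# A waypoint 1 m away violates an angry pedestrian's larger zone
# but not a happy pedestrian's smaller one.
print(violates_discomfort_zone((1.0, 0.0), (0.0, 0.0), "angry"))  # True
print(violates_discomfort_zone((1.0, 0.0), (0.0, 0.0), "happy"))  # False
```

In the paper these zones are extracted from LiDAR grid maps and learned features rather than hand-set radii; the sketch only shows how such a zone becomes a hard constraint on candidate waypoints.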

๐Ÿ“ Abstract
This study presents an emotion-aware navigation framework -- EmoBipedNav -- using deep reinforcement learning (DRL) for bipedal robots walking in socially interactive environments. The inherent locomotion constraints of bipedal robots challenge their safe maneuvering capabilities in dynamic environments. When combined with the intricacies of social environments, including pedestrian interactions and social cues such as emotions, these challenges become even more pronounced. To address these coupled problems, we propose a two-stage pipeline that considers both bipedal locomotion constraints and complex social environments. Specifically, social navigation scenarios are represented using sequential LiDAR grid maps (LGMs), from which we extract latent features, including collision regions, emotion-related discomfort zones, social interactions, and the spatio-temporal dynamics of evolving environments. The extracted features are directly mapped to the actions of reduced-order models (ROMs) through a DRL architecture. Furthermore, the proposed framework incorporates full-order dynamics and locomotion constraints during training, effectively accounting for tracking errors and restrictions of the locomotion controller while planning the trajectory with ROMs. Comprehensive experiments demonstrate that our approach outperforms both model-based planners and DRL-based baselines. The hardware videos and open-source code are available at https://gatech-lidar.github.io/emobipednav.github.io/.
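The abstract states that learned features are mapped to the actions of reduced-order models (ROMs), which the full-order locomotion controller then tracks. The abstract does not name the ROM; the Linear Inverted Pendulum (LIP) is a common choice for bipedal walking, and the sketch below shows its closed-form center-of-mass (CoM) evolution that a planner could roll out between footsteps. The function name and the CoM height are assumptions for illustration:

```python
import math

def lip_com_state(x0, v0, t, z0=0.9, g=9.81):
    """Closed-form CoM evolution of the Linear Inverted Pendulum,
    a common reduced-order model (ROM) for bipedal walking.

    x0, v0 -- CoM position/velocity relative to the stance foot (m, m/s)
    t      -- elapsed time within the current step (s)
    z0     -- constant CoM height (illustrative value, m)
    """
    T = math.sqrt(z0 / g)  # LIP time constant
    x = x0 * math.cosh(t / T) + T * v0 * math.sinh(t / T)
    v = (x0 / T) * math.sinh(t / T) + v0 * math.cosh(t / T)
    return x, v

# Rolling the ROM forward from a small initial velocity: the CoM
# diverges from the stance foot until the next footstep resets it.
x, v = lip_com_state(0.0, 0.3, 0.2)
print(round(x, 3), round(v, 3))
```

In a pipeline like the one described, a DRL policy would output ROM-level actions (e.g. footstep targets or desired CoM velocities), and training against the full-order dynamics absorbs the tracking error between this simplified model and the real robot.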
Problem

Research questions and friction points this paper is trying to address.

Addresses bipedal robot navigation in social environments
Integrates emotion-aware navigation with deep reinforcement learning
Overcomes locomotion constraints and social interaction complexities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Emotion-aware navigation using deep reinforcement learning
Two-stage pipeline for bipedal and social constraints
Sequential LiDAR grid maps for feature extraction
Wei Zhu
Laboratory for Intelligent Decision and Autonomous Robots, Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA
Abirath Raju
Laboratory for Intelligent Decision and Autonomous Robots, Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA
Abdulaziz Shamsah
Kuwait University
Locomotion · Robotics · Safety
Anqi Wu
Assistant Professor, Computational Science and Engineering, Georgia Tech
machine learning · computational and statistical neuroscience
Seth Hutchinson
Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115, USA
Ye Zhao
Laboratory for Intelligent Decision and Autonomous Robots, Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA