🤖 AI Summary
Existing embodied navigation models largely neglect human cognitive processes under partial observability. This work proposes EgoCogNav, a multimodal framework that, for the first time, models perceptual uncertainty as a latent state and explicitly links it to cognitively grounded behaviors such as pausing, hesitating, and backtracking. By jointly encoding egocentric visual features with multi-source sensory cues, the model co-predicts navigation trajectories and head motion. To support this, the authors introduce and release CEN, the first real-world dataset (6 hours) annotated with fine-grained cognitive behaviors. Experiments demonstrate that EgoCogNav captures uncertainty dynamics strongly correlated with human behavior and exhibits strong zero-shot generalization to unseen environments. This work establishes a novel, cognition-aware paradigm for embodied navigation modeling.
📝 Abstract
Modeling the cognitive and experiential factors of human navigation is central to deepening our understanding of human-environment interaction and to enabling safe social navigation and effective assistive wayfinding. Most existing methods focus on forecasting motion in fully observed scenes and neglect the human factors that capture how people feel about and respond to space. To address this gap, we propose EgoCogNav, a multimodal egocentric navigation framework that predicts perceived path uncertainty as a latent state and jointly forecasts trajectories and head motion by fusing scene features with sensory cues. To facilitate research in the field, we introduce the Cognition-aware Egocentric Navigation (CEN) dataset, consisting of 6 hours of egocentric recordings that capture diverse navigation behaviors in real-world scenarios. Experiments show that EgoCogNav learns perceived uncertainty that correlates strongly with human-like behaviors such as scanning, hesitation, and backtracking, while generalizing to unseen environments.
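To make the fusion-and-joint-prediction design concrete, below is a minimal PyTorch sketch of how such a model *might* be wired: precomputed egocentric visual features and a low-dimensional sensory-cue vector are fused, a scalar latent uncertainty is inferred, and trajectory and head-motion decoders condition on both. This is an illustration under stated assumptions only; the class name `EgoCogNavSketch`, all dimensions, and the sigmoid-bounded scalar uncertainty are hypothetical choices, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class EgoCogNavSketch(nn.Module):
    """Illustrative sketch (not the paper's implementation): fuse egocentric
    visual features with sensory cues, infer a latent perceived-uncertainty
    state, and jointly decode a future trajectory and head motion."""

    def __init__(self, vis_dim=512, cue_dim=32, hid=256, horizon=12):
        super().__init__()
        self.horizon = horizon
        # Fuse visual features and multi-source sensory cues (assumed shapes).
        self.fuse = nn.Sequential(
            nn.Linear(vis_dim + cue_dim, hid), nn.ReLU(),
            nn.Linear(hid, hid), nn.ReLU(),
        )
        # Latent perceived uncertainty, bounded to [0, 1] (an assumption).
        self.uncertainty_head = nn.Sequential(nn.Linear(hid, 1), nn.Sigmoid())
        # Joint decoders conditioned on fused features plus uncertainty.
        self.traj_head = nn.Linear(hid + 1, horizon * 2)         # (x, y) waypoints
        self.head_motion_head = nn.Linear(hid + 1, horizon * 3)  # yaw, pitch, roll

    def forward(self, vis_feat, cues):
        h = self.fuse(torch.cat([vis_feat, cues], dim=-1))
        u = self.uncertainty_head(h)                  # (B, 1) latent uncertainty
        z = torch.cat([h, u], dim=-1)
        traj = self.traj_head(z).view(-1, self.horizon, 2)
        head = self.head_motion_head(z).view(-1, self.horizon, 3)
        return traj, head, u

# Usage with random stand-in features:
model = EgoCogNavSketch()
traj, head, u = model(torch.randn(4, 512), torch.randn(4, 32))
```

The key structural point the sketch mirrors is that uncertainty is predicted as an explicit latent variable and fed back into both decoders, so behaviors like hesitation or backtracking can co-vary with it rather than being modeled independently.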