CogNav: Cognitive Process Modeling for Object Goal Navigation with LLMs

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of explicit cognitive process modeling in Object Goal Navigation (ObjectNav) in unknown environments. Unlike existing approaches that rely on implicit learning or predefined rules, the authors propose a neuroscience-inspired framework that models the cognitive process explicitly. The method decomposes human visual search behavior into dynamic cognitive states (exploration, localization, and identification) and implements an LLM-driven finite-state machine that incrementally builds a heterogeneous cognitive map integrating spatial structure and semantic knowledge, thereby closing the perception-reasoning loop. To the authors' knowledge, this is the first approach to enable interpretable and controllable explicit cognitive modeling for navigation. Evaluated in the open-vocabulary zero-shot setting on HM3D, the method achieves a success rate of 87.2%, substantially surpassing the prior state of the art (69.3%) while exhibiting human-like navigation strategies.

📝 Abstract
Object goal navigation (ObjectNav) is a fundamental task of embodied AI that requires the agent to find a target object in unseen environments. This task is particularly challenging as it demands both perceptual and cognitive processes for effective perception and decision-making. While perception has seen significant progress, powered by rapidly developing visual foundation models, progress on the cognitive side remains limited to either implicitly learning from massive navigation demonstrations or explicitly leveraging pre-defined heuristic rules. Inspired by neuroscientific evidence that humans consistently update their cognitive states while searching for objects in unseen environments, we present CogNav, which attempts to model this cognitive process with the help of large language models. Specifically, we model the cognitive process with a finite state machine composed of cognitive states ranging from exploration to identification. The transitions between the states are determined by a large language model based on an online-built heterogeneous cognitive map containing spatial and semantic information of the scene being explored. Extensive experiments on both synthetic and real-world environments demonstrate that our cognitive modeling significantly improves ObjectNav efficiency, with human-like navigation behaviors. In an open-vocabulary and zero-shot setting, our method advances the SOTA of the HM3D benchmark from 69.3% to 87.2%. The code and data will be released.
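The abstract's core mechanism, a finite state machine whose transitions an LLM decides from a textual map summary, can be sketched as below. The intermediate state names and the keyword-based stub standing in for the LLM call are assumptions for illustration, not the paper's exact design:

```python
from enum import Enum, auto

class CogState(Enum):
    """Cognitive states; the paper names exploration and identification
    as the endpoints, the intermediate state here is an assumption."""
    EXPLORE = auto()    # broad frontier-driven exploration
    CANDIDATE = auto()  # a candidate target was spotted (assumed state)
    IDENTIFY = auto()   # close-range verification of the target

def llm_transition(state: CogState, map_summary: str) -> CogState:
    """Stand-in for the LLM call: in CogNav an LLM picks the next state
    from a textual summary of the heterogeneous cognitive map. Trivial
    keyword rules are used here purely to make the sketch executable."""
    if state is CogState.EXPLORE and "candidate" in map_summary:
        return CogState.CANDIDATE
    if state is CogState.CANDIDATE and "close to target" in map_summary:
        return CogState.IDENTIFY
    return state

def navigate(map_summaries):
    """Run the finite state machine over a stream of map summaries,
    returning the sequence of cognitive states visited."""
    state = CogState.EXPLORE
    trace = [state]
    for summary in map_summaries:
        state = llm_transition(state, summary)
        trace.append(state)
    return trace
```

In the full system, each state would also select a navigation behavior (e.g. frontier exploration versus approaching a candidate); the sketch keeps only the state-transition logic the abstract describes.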
Problem

Research questions and friction points this paper is trying to address.

Model cognitive processes for object goal navigation.
Improve decision-making in unseen environments using LLMs.
Raise the ObjectNav success rate well beyond the prior state of the art.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large language models for cognitive process modeling.
Implements a finite state machine with fine-grained cognitive states.
Dynamically constructs heterogeneous cognitive maps for navigation.
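A heterogeneous cognitive map combining spatial structure and semantic knowledge might look like the following minimal sketch; the node schema, attribute names, and the `summarize` serialization (producing the text an LLM prompt would consume) are assumptions, not the paper's exact data structure:

```python
class CognitiveMap:
    """A minimal sketch of a heterogeneous cognitive map: one graph
    whose nodes carry either spatial data (visited positions, open
    frontiers) or semantic data (detected objects)."""

    def __init__(self):
        self.nodes = {}   # node_id -> attribute dict with a "kind" field
        self.edges = []   # (src_id, dst_id, relation) triples

    def add_spatial(self, node_id, position, frontier=False):
        # Spatial node: a visited position, optionally an open frontier.
        self.nodes[node_id] = {"kind": "spatial", "pos": position,
                               "frontier": frontier}

    def add_semantic(self, node_id, label, confidence):
        # Semantic node: a detected object with a detector confidence.
        self.nodes[node_id] = {"kind": "semantic", "label": label,
                               "conf": confidence}

    def link(self, src, dst, relation):
        # Relation edge, e.g. "observed_at" from a position to an object.
        self.edges.append((src, dst, relation))

    def summarize(self):
        """Serialize the map to text, as an LLM-readable prompt fragment."""
        objs = [f'{n["label"]} (conf {n["conf"]:.2f})'
                for n in self.nodes.values() if n["kind"] == "semantic"]
        frontiers = sum(1 for n in self.nodes.values()
                        if n["kind"] == "spatial" and n.get("frontier"))
        return (f"objects: {', '.join(objs) or 'none'}; "
                f"open frontiers: {frontiers}")
```

The key design point the paper's bullets suggest is that one map holds both modalities, so a single textual summary can ground the LLM's state-transition decision in geometry and semantics at once.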
Authors
Yihan Cao
Jiazhao Zhang (Peking University)
Zhinan Yu (National University of Defense Technology)
Shuzhen Liu (National University of Defense Technology)
Zheng Qin (Defense Innovation Institute, Academy of Military Sciences)
Qin Zou (Wuhan University)
Bo Du (Department of Management, Griffith Business School)
Kai Xu (National University of Defense Technology)