Deductive Chain-of-Thought Augmented Socially-aware Robot Navigation World Model

📅 2025-10-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address physical misalignment and logical inconsistency in LLM-driven navigation within dynamic human environments, this work proposes a chain-of-reasoning framework integrating spatiotemporal world modeling with first-order logic (FOL) guidance. Methodologically, social norms are formalized as verifiable FOL rules; a structured world model is constructed through joint multi-agent state tracking and spatiotemporal modeling; and LLM path planning is constrained via deductive reasoning chains grounded in these rules. The key contribution is the first unified modeling of logical interpretability, physical safety, and social compliance in autonomous navigation. Experiments in dense pedestrian environments demonstrate significant improvements: +23.6% navigation success rate and −41.2% social norm violation rate, validating the effectiveness and reliability of synergistic formal reasoning and LLMs for socially aware robotic navigation.
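To make the method concrete, here is a minimal sketch of how a social norm such as personal space might be formalized as a verifiable first-order-logic rule and checked against tracked agent states. All names (`AgentState`, `PERSONAL_SPACE_M`, the field layout) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
import math

# Hypothetical agent state mirroring the paper's spatiotemporal world
# model (positions, velocities, activities); field names are assumed.
@dataclass
class AgentState:
    x: float
    y: float
    vx: float
    vy: float
    activity: str  # e.g. "walking", "standing"

PERSONAL_SPACE_M = 1.2  # assumed personal-space radius

def violates_personal_space(robot_pos, agent: AgentState) -> bool:
    """Instance of the FOL atom: dist(robot, a) < PERSONAL_SPACE_M."""
    dist = math.hypot(robot_pos[0] - agent.x, robot_pos[1] - agent.y)
    return dist < PERSONAL_SPACE_M

def socially_compliant(robot_pos, agents) -> bool:
    """FOL rule: forall a in agents, not violates_personal_space(robot, a)."""
    return all(not violates_personal_space(robot_pos, a) for a in agents)
```

Because the rule is an explicit quantified predicate rather than a prompt, a compliance check is verifiable: any candidate robot position either satisfies the formula or yields a concrete counterexample agent.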

📝 Abstract
Social robot navigation increasingly relies on large language models for reasoning, path planning, and enabling movement in dynamic human spaces. However, relying solely on LLMs for planning often leads to unpredictable and unsafe behaviors, especially in dynamic human spaces, due to limited physical grounding and weak logical consistency. In this work, we introduce NaviWM, a socially-aware robot Navigation World Model that augments LLM reasoning with a structured world model and a logic-driven chain-of-thought process. NaviWM consists of two main components: (1) a spatial-temporal world model that captures the positions, velocities, and activities of agents in the environment, and (2) a deductive reasoning module that guides LLMs through a multi-step, logic-based inference process. This integration enables the robot to generate navigation decisions that are both socially compliant and physically safe, under well-defined constraints such as personal space, collision avoidance, and timing. Unlike previous methods based on prompting or fine-tuning, NaviWM encodes social norms as first-order logic, enabling interpretable and verifiable reasoning. Experiments show that NaviWM improves success rates and reduces social violations, particularly in crowded environments. These results demonstrate the benefit of combining formal reasoning with LLMs for robust social navigation. Additional experimental details and demo videos for this work can be found at: https://sites.google.com/view/NaviWM.
Problem

Research questions and friction points this paper is trying to address.

Enhancing robot navigation safety in dynamic human environments
Addressing unpredictable LLM behaviors through logical reasoning
Integrating social norms with physical constraints for navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augments LLM reasoning with structured world model
Uses deductive reasoning module for logic-based inference
Encodes social norms as first-order logic rules
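The deductive step described above can be sketched as a filter: formal rules are applied to LLM-proposed waypoints, and only the pre-verified set is passed back for the model to reason over. The rule set, radii, and function names below are assumptions for illustration, not the paper's API.

```python
import math

PERSONAL_SPACE_M = 1.2    # assumed personal-space radius
COLLISION_RADIUS_M = 0.4  # assumed robot collision envelope

def personal_space_ok(waypoint, agents):
    # forall a: dist(waypoint, a) >= PERSONAL_SPACE_M
    return all(math.dist(waypoint, a) >= PERSONAL_SPACE_M for a in agents)

def collision_free(waypoint, agents):
    # forall a: dist(waypoint, a) >= COLLISION_RADIUS_M
    return all(math.dist(waypoint, a) >= COLLISION_RADIUS_M for a in agents)

def feasible(candidates, agents, rules):
    """Keep only LLM-proposed waypoints satisfying every formal rule;
    the planner then selects among this logically verified subset."""
    return [w for w in candidates if all(rule(w, agents) for rule in rules)]
```

In this framing the LLM is never the final arbiter of safety: a waypoint that fails any rule is rejected deductively, which is what makes the resulting plan interpretable and checkable.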