🤖 AI Summary
This work addresses the lack of human-like reasoning in high-level autonomous driving, particularly in long-tail scenarios and complex social interactions. It argues that reasoning should be elevated to the core of system cognition and introduces a novel Cognitive Hierarchy framework that decomposes the driving task by its cognitive and interactive complexity, from which seven fundamental reasoning challenges are systematically derived. The study comprehensively reviews recent advances from both architectural and evaluation perspectives, exploring pathways such as neuro-symbolic architectures, robust reasoning under uncertainty, and the modeling of implicit social negotiation through the integration of large language models and multimodal foundation models. A critical tension is identified between the computational latency of large models and the real-time demands of vehicle control. Emphasizing the deep integration of symbolic reasoning with physical control, the work advocates “glass-box” interpretable agents and provides a theoretical framework and roadmap toward next-generation autonomous systems endowed with genuine understanding.
📝 Abstract
The development of high-level autonomous driving (AD) is confronting a shift in its principal bottleneck: from perception-centric limitations to a more fundamental deficit in robust and generalizable reasoning. Although current AD systems handle structured environments competently, they consistently falter in long-tail scenarios and complex social interactions that require human-like judgment. Meanwhile, the advent of large language and multimodal models (LLMs and MLLMs) presents a transformative opportunity to integrate a powerful cognitive engine into AD systems, moving beyond pattern matching toward genuine comprehension. However, a systematic framework to guide this integration is critically lacking. To bridge this gap, we provide a comprehensive review of this emerging field and argue that reasoning should be elevated from a modular component to the system's cognitive core. Specifically, we first propose a novel Cognitive Hierarchy that decomposes the monolithic driving task according to its cognitive and interactive complexity. Building on this, we derive and systematize seven core reasoning challenges, such as the responsiveness-reasoning trade-off and social-game reasoning. We then conduct a dual-perspective review of the state of the art, analyzing both system-centric approaches to architecting intelligent agents and evaluation-centric practices for validating them. Our analysis reveals a clear trend toward holistic, interpretable "glass-box" agents. In conclusion, we identify a fundamental and unresolved tension between the high-latency, deliberative nature of LLM-based reasoning and the millisecond-scale, safety-critical demands of vehicle control. For future work, a primary objective is to bridge the symbolic-to-physical gap by developing verifiable neuro-symbolic architectures, robust reasoning under uncertainty, and scalable models of implicit social negotiation.
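To make the responsiveness-reasoning trade-off concrete, the sketch below illustrates one common way such a tension is handled: a millisecond-scale control loop that never blocks on a slow, deliberative reasoner running asynchronously. This is our illustration under stated assumptions, not an architecture from the paper; names such as `SlowReasoner`, `Plan`, and the simulated 1-second deliberation latency are hypothetical.

```python
import threading
import time
from dataclasses import dataclass

# Illustrative sketch (not from the paper): a fast reactive control loop
# must never block on a slow, deliberative LLM-style reasoner.

@dataclass
class Plan:
    maneuver: str        # high-level decision, e.g. "yield", "merge"
    target_speed: float  # m/s

class SlowReasoner(threading.Thread):
    """Stands in for an LLM/MLLM-based reasoner with ~1 s latency."""
    def __init__(self, shared):
        super().__init__(daemon=True)
        self.shared = shared

    def run(self):
        while True:
            time.sleep(1.0)  # simulated deliberation latency
            new_plan = Plan(maneuver="yield", target_speed=5.0)
            with self.shared["lock"]:
                self.shared["plan"] = new_plan  # commit plan atomically

def fast_control_loop(shared, hz=100, steps=300):
    """Millisecond-scale loop: always acts on the last committed plan."""
    period = 1.0 / hz
    for step in range(steps):
        with shared["lock"]:
            plan = shared["plan"]
        # A reactive safety layer would clamp or override the plan here.
        command = min(plan.target_speed, 8.0)  # e.g. hard speed cap
        if step % 100 == 0:
            print(f"t={step * period:.1f}s maneuver={plan.maneuver} cmd={command} m/s")
        time.sleep(period)

shared = {"lock": threading.Lock(), "plan": Plan("keep_lane", 8.0)}
SlowReasoner(shared).start()
fast_control_loop(shared)
```

The design choice being illustrated is decoupling: the controller reads whatever plan the reasoner last committed, so reasoning latency degrades plan freshness rather than control-loop timing, while the safety clamp keeps the fast path verifiable independently of the slow one.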