🤖 AI Summary
Existing roadside perception systems are limited to instance-level detection and lack both natural language interaction and context-driven reasoning about traffic behavior. To address this, we propose RoadSceneVQA, the first large-scale visual question answering (VQA) dataset specifically designed for roadside scenes, covering fine-grained cognitive tasks including object attributes, intent, legality, and interactive behaviors. Methodologically, we introduce the CogniAnchor Fusion module for multimodal feature alignment, along with the Assisted Decoupled Chain-of-Thought (AD-CoT), an auxiliary reasoning mechanism that integrates traffic-rule grounding and situation-aware commonsense inference to jointly support explicit recognition and implicit reasoning. Evaluated on RoadSceneVQA and the CODA-LM benchmark, our approach achieves significant improvements in both reasoning accuracy and computational efficiency, demonstrating strong effectiveness and generalizability on structured traffic cognition tasks.
📝 Abstract
Current roadside perception systems focus mainly on instance-level perception, falling short of enabling interaction via natural language and reasoning about traffic behaviors in context. To bridge this gap, we introduce RoadSceneVQA, a large-scale, richly annotated visual question answering (VQA) dataset specifically tailored to roadside scenarios. The dataset comprises 34,736 diverse QA pairs collected under varying weather, illumination, and traffic conditions, targeting not only object attributes but also the intent, legality, and interaction patterns of traffic participants. RoadSceneVQA challenges models to perform both explicit recognition and implicit commonsense reasoning, grounded in real-world traffic rules and contextual dependencies. To fully exploit the reasoning potential of Multi-modal Large Language Models (MLLMs), we propose CogniAnchor Fusion (CAF), a vision-language fusion module inspired by human-like scene-anchoring mechanisms. We further propose the Assisted Decoupled Chain-of-Thought (AD-CoT), which strengthens reasoning through CoT prompting and multi-task learning. Building on these components, we present the baseline model RoadMind. Experiments on RoadSceneVQA and the CODA-LM benchmark show that the pipeline consistently improves both reasoning accuracy and computational efficiency, enabling the MLLM to achieve state-of-the-art performance on structured traffic perception and reasoning tasks.