RoadSceneVQA: Benchmarking Visual Question Answering in Roadside Perception Systems for Intelligent Transportation Systems

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing roadside perception systems are limited to instance-level detection and lack natural language interaction capabilities and context-driven traffic behavior reasoning. To address this, we propose RoadSceneVQA—the first large-scale visual question answering (VQA) dataset specifically designed for roadside scenes, covering fine-grained cognitive tasks including object attributes, intent, legality, and interactive behaviors. Methodologically, we introduce the CogniAnchor Fusion module for multimodal feature alignment and a decoupled, auxiliary reasoning mechanism—AD-CoT (Attention-Driven Chain-of-Thought)—that integrates traffic-rule grounding and situation-aware commonsense inference to jointly support explicit recognition and implicit reasoning. Evaluated on RoadSceneVQA and the CODA-LM benchmark, our approach achieves significant improvements in both reasoning accuracy and computational efficiency, demonstrating strong effectiveness and generalizability for structured traffic cognition tasks.

📝 Abstract
Current roadside perception systems mainly focus on instance-level perception, which falls short in enabling interaction via natural language and reasoning about traffic behaviors in context. To bridge this gap, we introduce RoadSceneVQA, a large-scale and richly annotated visual question answering (VQA) dataset specifically tailored for roadside scenarios. The dataset comprises 34,736 diverse QA pairs collected under varying weather, illumination, and traffic conditions, targeting not only object attributes but also the intent, legality, and interaction patterns of traffic participants. RoadSceneVQA challenges models to perform both explicit recognition and implicit commonsense reasoning, grounded in real-world traffic rules and contextual dependencies. To fully exploit the reasoning potential of Multi-modal Large Language Models (MLLMs), we further propose CogniAnchor Fusion (CAF), a vision-language fusion module inspired by human-like scene anchoring mechanisms. Moreover, we propose Assisted Decoupled Chain-of-Thought (AD-CoT) to enhance reasoning via CoT prompting and multi-task learning. Based on the above, we propose the baseline model RoadMind. Experiments on RoadSceneVQA and the CODA-LM benchmark show that the pipeline consistently improves both reasoning accuracy and computational efficiency, allowing the MLLM to achieve state-of-the-art performance in structured traffic perception and reasoning tasks.
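The paper does not release implementation details here, but the core idea of a vision-language fusion module like CAF can be illustrated with generic scaled dot-product cross-attention, where question tokens query visual patch features and the attended result is added back residually. The function name `cross_attention_fusion` and all shapes below are illustrative assumptions, not the paper's actual CAF design:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(text_tokens, image_tokens):
    """Hypothetical stand-in for a CAF-style module: language tokens
    attend over visual tokens, anchoring the question in the scene."""
    d_k = image_tokens.shape[-1]
    scores = text_tokens @ image_tokens.T / np.sqrt(d_k)  # (T, V)
    weights = softmax(scores, axis=-1)                    # rows sum to 1
    attended = weights @ image_tokens                     # (T, d)
    return text_tokens + attended                         # residual fusion

rng = np.random.default_rng(0)
text = rng.normal(size=(4, 8))    # 4 question tokens, dim 8
image = rng.normal(size=(16, 8))  # 16 visual patches, dim 8
fused = cross_attention_fusion(text, image)
print(fused.shape)  # (4, 8)
```

In a real MLLM the queries, keys, and values would pass through learned projections and multiple heads; this sketch only shows the alignment step that lets language features be grounded in visual evidence.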
Problem

Research questions and friction points this paper is trying to address.

Enabling natural language interaction and reasoning in roadside perception systems
Addressing limitations of instance-level perception in traffic behavior understanding
Bridging the gap between visual recognition and contextual traffic reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed CogniAnchor Fusion for vision-language anchoring
Introduced Assisted Decoupled Chain-of-Thought reasoning
Developed RoadMind baseline model for traffic perception
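The summary describes AD-CoT as decoupling explicit recognition from implicit, rule-grounded reasoning. A minimal way to picture that decoupling is a two-stage prompt template: scene observations first, traffic-rule grounding second. The helper `build_ad_cot_prompt` and its wording are hypothetical, not the paper's actual template:

```python
def build_ad_cot_prompt(question, rules, observations):
    """Assemble a two-stage chain-of-thought prompt: explicit
    recognition (observations) is decoupled from implicit reasoning
    (traffic-rule grounding). Illustrative structure only."""
    lines = ["You are a roadside traffic analyst."]
    lines.append("Step 1 - Recognition: list the relevant scene facts.")
    lines += [f"- {o}" for o in observations]
    lines.append("Step 2 - Reasoning: apply these traffic rules.")
    lines += [f"- {r}" for r in rules]
    lines.append(f"Question: {question}")
    lines.append("Think step by step, then answer.")
    return "\n".join(lines)

prompt = build_ad_cot_prompt(
    "Is the cyclist allowed to cross now?",
    rules=["Cyclists must obey the pedestrian signal at crossings."],
    observations=["Pedestrian signal is red.", "A cyclist waits at the curb."],
)
print(prompt)
```

Keeping recognition and reasoning in separate stages also makes multi-task supervision straightforward, since each stage can carry its own training target.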
👥 Authors
Runwei Guan
Hong Kong University of Science and Technology (Guangzhou) / Founder of FertiTech AI
Multi-Modal Learning · Unmanned Surface Vessel · Radar Perception · AI Medicine
Rongsheng Hu
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
Shangshu Chen
National Research Center of Cultural Industries, Central China Normal University, Wuhan, China
Ningyuan Xiao
Information Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Xue Xia
Pinterest
Jiayang Liu
University of Science and Technology of China
Adversarial Example · AI Security
Beibei Chen
School of Marxism, Jilin University, Changchun, China
Ziren Tang
Information Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Ningwei Ouyang
School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou, China
Shaofeng Liang
Information Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Yuxuan Fan
Peking University
Natural Language Processing
Wanjie Sun
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China
Yutao Yue
Information Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China