PaveBench: A Versatile Benchmark for Pavement Distress Perception and Interactive Vision-Language Analysis

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing pavement distress assessment methods, which are predominantly confined to single-modality visual tasks and lack capabilities in multi-turn interaction, quantitative analysis, and fact-based reasoning—hindering their utility in real-world maintenance decision-making. To bridge this gap, we propose PaveBench, a unified multimodal benchmark supporting classification, detection, segmentation, and visual question answering (VQA). We introduce PaveVQA, a large-scale dataset of real-world highway images, and pioneer expert-refined multi-turn VQA for pavement distress analysis. Our framework integrates domain-specific tools via agent-augmented VQA and incorporates a hard distractor subset to enhance model robustness. Through standardized task definitions and evaluation protocols, we systematically benchmark state-of-the-art approaches and release the dataset and code to advance research in intelligent infrastructure inspection and multimodal interaction.
📝 Abstract
Pavement condition assessment is essential for road safety and maintenance. Although existing research has made significant progress, most studies focus on conventional computer vision tasks such as classification, detection, and segmentation. In real-world applications, pavement inspection requires more than visual recognition: it also demands quantitative analysis, explanation, and interactive decision support. Current datasets are limited to unimodal perception; they lack support for multi-turn interaction and fact-grounded reasoning, and they do not connect perception with vision-language analysis. To address these limitations, we introduce PaveBench, a large-scale benchmark for pavement distress perception and interactive vision-language analysis on real-world highway inspection images. PaveBench supports four core tasks (classification, object detection, semantic segmentation, and vision-language question answering) under unified task definitions and evaluation protocols. On the visual side, PaveBench provides large-scale annotations over a large collection of real-world pavement images and includes a curated hard-distractor subset for robustness evaluation. On the multimodal side, we introduce PaveVQA, a real-image question answering (QA) dataset that supports single-turn, multi-turn, and expert-corrected interactions, covering recognition, localization, quantitative estimation, and maintenance reasoning. We evaluate several state-of-the-art methods and provide a detailed analysis. We also present a simple and effective agent-augmented visual question answering framework that integrates domain-specific models as tools alongside vision-language models. The dataset is available at: https://huggingface.co/datasets/MML-Group/PaveBench.
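The abstract describes an agent-augmented VQA framework that integrates domain-specific models as tools alongside a vision-language model. A minimal sketch of that idea is shown below; the tool names, keyword routing, and stub outputs are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of agent-augmented VQA: a vision-language model (VLM)
# answers qualitative questions, while quantitative queries are routed
# to domain-specific perception tools. All names and values here are
# illustrative stubs, not the PaveBench implementation.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class VQAQuery:
    image_id: str
    question: str


def measure_crack_length(image_id: str) -> str:
    # Stand-in for a segmentation model plus pixel-to-metre calibration.
    return f"total crack length in {image_id}: 3.2 m (stub value)"


def count_potholes(image_id: str) -> str:
    # Stand-in for an object detector tallying pothole boxes.
    return f"potholes detected in {image_id}: 2 (stub value)"


# Keyword-to-tool routing table; a real agent would let the VLM itself
# decide when to call a tool rather than using keyword matching.
TOOLS: Dict[str, Callable[[str], str]] = {
    "crack length": measure_crack_length,
    "pothole": count_potholes,
}


def answer(query: VQAQuery, vlm: Callable[[VQAQuery], str]) -> str:
    """Dispatch to a domain tool when a keyword matches; else ask the VLM."""
    q = query.question.lower()
    for keyword, tool in TOOLS.items():
        if keyword in q:
            return tool(query.image_id)
    return vlm(query)


# Toy VLM that only handles qualitative, free-form questions.
toy_vlm = lambda query: "The pavement shows longitudinal cracking."

print(answer(VQAQuery("img_001", "What is the crack length here?"), toy_vlm))
print(answer(VQAQuery("img_001", "Describe the distress type."), toy_vlm))
```

The design point is separation of concerns: the VLM handles open-ended explanation and reasoning, while calibrated perception models supply the fact-grounded quantities (lengths, counts) that pure VLMs tend to hallucinate.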
Problem

Research questions and friction points this paper is trying to address.

pavement distress
vision-language analysis
interactive reasoning
multimodal benchmark
road inspection
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language reasoning
interactive VQA
pavement distress benchmark
multimodal perception
agent-augmented VQA
Dexiang Li
Harbin Institute of Technology, Shenzhen, China
Zhenning Che
Harbin Institute of Technology, Shenzhen, China
Haijun Zhang
Professor, IEEE Fellow, University of Science and Technology Beijing
6G · AI-enabled Wireless Communications · Resource Allocation · Mobility Management
Dongliang Zhou
Tianjin University, Tianjin, China
Zhao Zhang
Hefei University of Technology, Hefei, China
Yahong Han
Professor of Computer Science, Tianjin University
Multimedia