🤖 AI Summary
This work addresses the limitations of existing pavement distress assessment methods, which are predominantly confined to single-modality visual tasks and lack capabilities in multi-turn interaction, quantitative analysis, and fact-based reasoning, hindering their utility in real-world maintenance decision-making. To bridge this gap, we propose PaveBench, a unified multimodal benchmark supporting classification, detection, segmentation, and visual question answering (VQA). We introduce PaveVQA, a large-scale VQA dataset built on real-world highway inspection images, and pioneer expert-refined multi-turn VQA for pavement distress analysis. Our framework integrates domain-specific models as tools via agent-augmented VQA, and the benchmark incorporates a hard-distractor subset to assess model robustness. Through standardized task definitions and evaluation protocols, we systematically benchmark state-of-the-art approaches and release the dataset and code to advance research in intelligent infrastructure inspection and multimodal interaction.
📝 Abstract
Pavement condition assessment is essential for road safety and maintenance. Although existing research has made significant progress, most studies focus on conventional computer vision tasks such as classification, detection, and segmentation. In real-world applications, however, pavement inspection requires more than visual recognition: it also calls for quantitative analysis, explanation, and interactive decision support. Current datasets fall short of these needs. They focus on unimodal perception, lack support for multi-turn interaction and fact-grounded reasoning, and do not connect perception with vision-language analysis. To address these limitations, we introduce PaveBench, a large-scale benchmark for pavement distress perception and interactive vision-language analysis on real-world highway inspection images. PaveBench supports four core tasks, classification, object detection, semantic segmentation, and vision-language question answering, with unified task definitions and evaluation protocols. On the visual side, PaveBench provides large-scale annotations over a large collection of real-world pavement images and includes a curated hard-distractor subset for robustness evaluation. On the multimodal side, we introduce PaveVQA, a real-image question answering (QA) dataset that supports single-turn, multi-turn, and expert-corrected interactions, covering recognition, localization, quantitative estimation, and maintenance reasoning. We evaluate several state-of-the-art methods and provide a detailed analysis. We also present a simple and effective agent-augmented visual question answering framework that integrates domain-specific models as tools alongside vision-language models. The dataset is available at: https://huggingface.co/datasets/MML-Group/PaveBench.