🤖 AI Summary
Pure vision-based road distress detection methods suffer from insufficient semantic and contextual understanding due to the absence of textual cues, leading to suboptimal performance on complex defects. To address this, we introduce RoadBench—the first multimodal benchmark for road distress understanding—comprising high-resolution imagery paired with fine-grained textual descriptions. Built upon the CLIP architecture, we propose RoadCLIP: a novel model incorporating disease-aware positional encoding to enforce spatial-semantic alignment and integrating domain-specific road condition priors. Additionally, we design a GPT-driven image-text pair generation pipeline to enhance data diversity and annotation quality. Experiments demonstrate that our approach achieves a 19.2% accuracy improvement over the best pure-vision baseline, establishing new state-of-the-art performance on road distress identification. This work validates the critical role of multimodal co-modeling in intelligent infrastructure diagnostics.
📝 Abstract
Accurate road damage detection is crucial for timely infrastructure maintenance and public safety, but existing vision-only datasets and models lack the rich contextual understanding that textual information can provide. To address this limitation, we introduce RoadBench, the first multimodal benchmark for comprehensive road damage understanding. The dataset pairs high-resolution images of road damage with detailed textual descriptions, providing richer context for model training. We also present RoadCLIP, a novel vision-language model that builds upon CLIP with domain-specific enhancements: a disease-aware positional encoding that captures the spatial patterns of road defects, and a mechanism for injecting road-condition priors that refines the model's understanding of road damage. We further employ a GPT-driven data generation pipeline to expand the image-text pairs in RoadBench, greatly increasing data diversity without exhaustive manual annotation. Experiments demonstrate that RoadCLIP achieves state-of-the-art performance on road damage recognition, outperforming the best existing vision-only model by 19.2%. These results highlight the advantages of integrating visual and textual information for enhanced road condition analysis, setting a new benchmark for the field and paving the way for more effective infrastructure monitoring through multimodal learning.