🤖 AI Summary
Weak cross-domain generalization severely hinders the practical deployment of automated pavement defect detection. To address this, we propose the first self-supervised visual prompting framework tailored for road damage detection: it generates defect-aware visual prompts on unlabeled target-domain images to guide representation adaptation of a frozen Vision Transformer (ViT) backbone. We introduce two key innovations, the Self-supervised Prompt Enhancement Module (SPEM) and the Domain-Aware Prompt Alignment (DAPA) strategy, which together enable focused defect feature learning and cross-domain representation alignment. Our method requires no annotations, learning prompts solely from target-domain imagery. Evaluated on four benchmark datasets, it achieves superior zero-shot transfer performance over state-of-the-art supervised, self-supervised, and domain-adaptation methods, while significantly improving few-shot adaptation efficiency and cross-domain robustness.
📝 Abstract
The deployment of automated pavement defect detection is often hindered by poor cross-domain generalization. Supervised detectors achieve strong in-domain accuracy but require costly re-annotation for new environments, while standard self-supervised methods capture generic features and remain vulnerable to domain shift. We propose PROBE, a self-supervised framework that *visually probes* target domains without labels. PROBE introduces a Self-supervised Prompt Enhancement Module (SPEM), which derives defect-aware prompts from unlabeled target data to guide a frozen ViT backbone, and a Domain-Aware Prompt Alignment (DAPA) objective, which aligns prompt-conditioned source and target representations. Experiments on four challenging benchmarks show that PROBE consistently outperforms strong supervised, self-supervised, and adaptation baselines, achieving robust zero-shot transfer, improved resilience to domain variations, and high data efficiency in few-shot adaptation. These results highlight self-supervised prompting as a practical direction for building scalable and adaptive visual inspection systems. Source code is publicly available: https://github.com/xixiaouab/PROBE/tree/main
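The abstract does not give the concrete form of either module, but the overall mechanism (learnable prompt tokens fed alongside the embeddings of a frozen ViT, plus an objective that pulls prompt-conditioned source and target representations together) can be illustrated with a minimal NumPy sketch. All function names below, and the choice of a simple mean-feature alignment term for DAPA, are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def prompt_conditioned_features(patch_tokens: np.ndarray,
                                prompts: np.ndarray) -> np.ndarray:
    """Prepend learnable prompt tokens to frozen patch embeddings.

    Mimics visual prompting of a frozen ViT: only `prompts` would be
    trainable; `patch_tokens` come from the frozen backbone.
    """
    # Shapes: prompts (P, D), patch_tokens (N, D) -> output (P + N, D)
    return np.concatenate([prompts, patch_tokens], axis=0)

def dapa_alignment_loss(src_feats: np.ndarray,
                        tgt_feats: np.ndarray) -> float:
    """Hypothetical stand-in for the DAPA objective.

    Aligns the mean prompt-conditioned representations of the source
    and target domains via a squared-distance penalty.
    """
    gap = src_feats.mean(axis=0) - tgt_feats.mean(axis=0)
    return float(np.sum(gap ** 2))

# Toy usage: 4 prompt tokens, 16 patch tokens, embedding dim 8.
prompts = np.zeros((4, 8))
patches = np.ones((16, 8))
feats = prompt_conditioned_features(patches, prompts)  # shape (20, 8)
loss = dapa_alignment_loss(feats, feats)               # 0.0 for identical domains
```

In a real training loop, gradients from the alignment loss would update only the prompt tokens (and any lightweight prompt-enhancement head), keeping the ViT backbone frozen.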