AD-Copilot: A Vision-Language Assistant for Industrial Anomaly Detection via Visual In-context Comparison

📅 2026-03-14
🤖 AI Summary
This work addresses the limitations of general-purpose multimodal large language models in industrial anomaly detection, where domain shift and insensitivity to subtle visual discrepancies hinder performance. To overcome these challenges, the authors propose AD-Copilot, a specialized vision-language assistant built on an intra-visual contextual contrast mechanism and a Comparison Encoder architecture. They also construct Chat-AD, a large-scale, semantically rich industrial multimodal dataset, and establish MMAD-BBox, a new benchmark with bounding-box annotations. Trained with a multi-stage strategy and cross-attention between paired image features, AD-Copilot achieves 82.3% accuracy on MMAD and up to a 3.35× improvement over baseline methods on the bounding-box-based MMAD-BBox evaluation, surpassing human experts on certain tasks and exhibiting strong generalization.

📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved impressive success in natural visual understanding, yet they consistently underperform in industrial anomaly detection (IAD). This is largely because the general web data on which MLLMs are mostly trained differs significantly from industrial imagery. Moreover, MLLMs encode each image independently and can only compare images in the language space, making them insensitive to the subtle visual differences that are key to IAD. To tackle these issues, we present AD-Copilot, an interactive MLLM specialized for IAD via visual in-context comparison. We first design a novel data curation pipeline that mines inspection knowledge from sparsely labeled industrial images and generates precise samples for captioning, VQA, and defect localization, yielding Chat-AD, a large-scale multimodal dataset rich in semantic signals for IAD. On this foundation, AD-Copilot incorporates a novel Comparison Encoder that employs cross-attention between paired image features to enhance fine-grained multi-image perception, and is trained with a multi-stage strategy that injects domain knowledge and gradually strengthens IAD skills. In addition, we introduce MMAD-BBox, an extended benchmark for anomaly localization with bounding-box-based evaluation. Experiments show that AD-Copilot achieves 82.3% accuracy on the MMAD benchmark, outperforming all other models without any data leakage, and achieves up to a 3.35× improvement over the baseline on MMAD-BBox. Its performance gains also generalize well to other specialized and general-purpose benchmarks. Remarkably, AD-Copilot surpasses human expert-level performance on several IAD tasks, demonstrating its potential as a reliable assistant for real-world industrial inspection. All datasets and models will be released for the broader benefit of the community.
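The core idea of the Comparison Encoder, cross-attention between features of a query image and a paired normal reference so that reference context is mixed into the query tokens, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function name, token counts, and feature dimension are all assumptions.

```python
import numpy as np

def cross_attention(query, key, value):
    # query: (Nq, d) patch tokens from the image under inspection.
    # key/value: (Nr, d) patch tokens from a normal reference image.
    # (Illustrative single-head attention; the paper's Comparison Encoder
    # architecture is not specified at this level of detail.)
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)              # (Nq, Nr) similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over ref tokens
    return weights @ value                           # (Nq, d) reference-aware features

rng = np.random.default_rng(0)
q = rng.standard_normal((16, 8))   # hypothetical query-image tokens
r = rng.standard_normal((24, 8))   # hypothetical reference-image tokens
out = cross_attention(q, r, r)
print(out.shape)  # (16, 8)
```

Each output token is a reference-conditioned view of a query token, so a downstream head can detect where the inspected image deviates from normal appearance rather than judging each image in isolation.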
Problem

Research questions and friction points this paper is trying to address.

Industrial Anomaly Detection
Multimodal Large Language Models
Visual In-context Comparison
Fine-grained Perception
Anomaly Localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual In-context Comparison
Comparison Encoder
Industrial Anomaly Detection
Multimodal Large Language Model
Cross-attention