Visual Anomaly Detection under Complex View-Illumination Interplay: A Large-Scale Benchmark

📅 2025-05-16
🤖 AI Summary
Existing visual anomaly detection (VAD) benchmarks largely overlook the critical impact of coupled viewpoint and illumination variations on defect visibility, leading to distorted robustness evaluation. To address this, we introduce M2AD—the first large-scale multi-view, multi-illumination VAD benchmark—comprising 119,880 high-resolution images systematically covering diverse viewpoint–illumination configurations. We explicitly model and evaluate the viewpoint–illumination coupling effect, proposing two complementary evaluation protocols: Synergy (cross-configuration feature fusion) and Invariant (single-image invariance). Built upon a synchronized multi-camera array, programmable multi-source lighting, and industrial-grade sample acquisition, M2AD provides standardized annotations and a reproducible testing framework. Extensive experiments reveal substantial performance degradation of state-of-the-art VAD methods on M2AD, underscoring its importance in advancing robust, real-world VAD research.

📝 Abstract
The practical deployment of Visual Anomaly Detection (VAD) systems is hindered by their sensitivity to real-world imaging variations, particularly the complex interplay between viewpoint and illumination which drastically alters defect visibility. Current benchmarks largely overlook this critical challenge. We introduce Multi-View Multi-Illumination Anomaly Detection (M2AD), a new large-scale benchmark comprising 119,880 high-resolution images designed explicitly to probe VAD robustness under such interacting conditions. By systematically capturing 999 specimens across 10 categories using 12 synchronized views and 10 illumination settings (120 configurations total), M2AD enables rigorous evaluation. We establish two evaluation protocols: M2AD-Synergy tests the ability to fuse information across diverse configurations, and M2AD-Invariant measures single-image robustness against realistic view-illumination effects. Our extensive benchmarking shows that state-of-the-art VAD methods struggle significantly on M2AD, demonstrating the profound challenge posed by view-illumination interplay. This benchmark serves as an essential tool for developing and validating VAD methods capable of overcoming real-world complexities. Our full dataset and test suite will be released at https://hustcyq.github.io/M2AD to facilitate the field.
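The reported dataset scale follows directly from the acquisition setup described in the abstract. A minimal sanity check of that arithmetic (assuming one image per specimen per view-illumination configuration, which the stated totals imply):

```python
# Dataset scale reported in the M2AD abstract.
views = 12          # synchronized camera views
illuminations = 10  # programmable illumination settings
specimens = 999     # physical samples across 10 categories

configurations = views * illuminations  # view-illumination pairs per specimen
images = specimens * configurations     # total images in the benchmark

print(configurations)  # 120
print(images)          # 119880
```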
Problem

Research questions and friction points this paper is trying to address.

Addressing VAD sensitivity to view-illumination interplay in real-world imaging
Introducing M2AD benchmark for robust anomaly detection under varying conditions
Evaluating VAD methods' performance against complex view-illumination effects
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Multi-View Multi-Illumination Anomaly Detection (M2AD) benchmark
Systematically captures 999 specimens with 120 configurations
Establishes two evaluation protocols: M2AD-Synergy and M2AD-Invariant
Yunkang Cao
Hunan University
Visual Anomaly Detection, Industrial Foundation Model, Embodied Intelligence
Yuqi Cheng
Huazhong University of Science and Technology
3D Measurement, Point Cloud Processing, Anomaly Detection
Xiaohao Xu
Google; University of Michigan, Ann Arbor
Robust Visual Intelligence, Anomaly Detection, Video & 3D, Computer Vision, Robotics
Yiheng Zhang
State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
Yihan Sun
Assistant Professor, University of California, Riverside
Parallel Algorithms
Yuxiang Tan
State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
Yuxin Zhang
State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
Xiaonan Huang
Robotics Department, University of Michigan, Ann Arbor, MI 48109 USA
Weiming Shen
Huazhong University of Science and Technology