End-to-End Augmentation Hyperparameter Tuning for Self-Supervised Anomaly Detection

📅 2023-06-21
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing self-supervised anomaly detection (SSAD) methods rely on manually tuned data augmentation strategies and lack label-free validation criteria. To address these limitations, this paper proposes an end-to-end differentiable augmentation framework jointly optimized with an unsupervised validation loss. Our key contributions are: (1) the first differentiable augmentation module specifically designed for SSAD, enabling gradient-based automatic optimization of augmentation hyperparameters; and (2) a novel validation loss grounded in contrastive learning and unsupervised feature alignment, which evaluates augmentation quality without any ground-truth labels. Extensive experiments on semantic-level anomaly and industrial micro-defect benchmarks demonstrate that our method consistently outperforms state-of-the-art SSAD approaches, achieving significant improvements in both detection accuracy and robustness.
📝 Abstract
Self-supervised learning (SSL) has emerged as a promising paradigm that provides supervisory signals for real-world problems, bypassing the extensive cost of manual labeling. Consequently, self-supervised anomaly detection (SSAD) has seen a recent surge of interest, since SSL is especially attractive for unsupervised tasks. However, recent works have reported that the choice of a data augmentation function has a significant impact on the accuracy of SSAD, making augmentation search an essential but nontrivial problem given the lack of labeled validation data. In this paper, we introduce ST-SSAD, the first systematic approach for rigorous augmentation tuning on SSAD. To this end, our work presents two key contributions. The first is a new unsupervised validation loss that quantifies the alignment between augmented training data and unlabeled validation data. The second is a set of new differentiable augmentation functions, allowing data augmentation hyperparameters to be tuned in an end-to-end manner. Experiments on two testbeds with semantic class anomalies and subtle industrial defects show that ST-SSAD gives significant performance gains over existing works.
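The core idea of the abstract, tuning an augmentation hyperparameter by gradient descent on an unsupervised alignment loss between augmented training data and unlabeled validation data, can be illustrated with a toy sketch. This is an assumption-laden simplification, not the paper's actual formulation: the augmentation (a scalar scaling by `sigma`), the alignment loss (squared distance of means), and the analytic gradient are all illustrative stand-ins for ST-SSAD's learned components.

```python
import numpy as np

# Hypothetical toy sketch of end-to-end augmentation hyperparameter tuning:
# a single parameter sigma is optimized by gradient descent on a label-free
# "alignment" loss between augmented training data and validation data.
# All names and the loss itself are illustrative, not from the paper.

rng = np.random.default_rng(0)
x_train = rng.normal(loc=1.0, scale=0.1, size=500)  # unlabeled training data
x_val = rng.normal(loc=2.0, scale=0.1, size=500)    # unlabeled validation data

def augment(x, sigma):
    """Differentiable augmentation: simple scaling by sigma."""
    return sigma * x

def alignment_loss(sigma):
    """Squared distance between augmented-train mean and validation mean."""
    return (augment(x_train, sigma).mean() - x_val.mean()) ** 2

def grad(sigma):
    """Analytic gradient of the alignment loss w.r.t. sigma."""
    return 2.0 * (sigma * x_train.mean() - x_val.mean()) * x_train.mean()

sigma, lr = 0.5, 0.1
for _ in range(200):          # plain gradient descent on the hyperparameter
    sigma -= lr * grad(sigma)

# sigma converges near x_val.mean() / x_train.mean(), the minimizer
print(round(sigma, 2))
```

In the paper's actual setting the alignment loss is computed on learned features via contrastive learning, and the gradient flows through the augmentation module by automatic differentiation rather than a hand-derived formula; the sketch only shows why differentiability makes label-free tuning possible.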
Problem

Research questions and friction points this paper is trying to address.

Selecting data augmentation for SSAD strongly affects accuracy but is currently done by manual trial and error.
No label-free validation criterion exists to judge whether an augmentation aligns with the data.
Standard augmentation functions are non-differentiable, blocking end-to-end hyperparameter tuning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised validation loss for data alignment
Differentiable augmentation functions for tuning
End-to-end augmentation hyperparameter tuning