Towards Real Unsupervised Anomaly Detection Via Confident Meta-Learning

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing unsupervised anomaly detection methods are, in practice, semi-supervised: they rely on manually curated anomaly-free training sets, introducing human bias and limiting generalization. This work proposes Confident Meta-learning (CoMet), the first framework enabling truly unsupervised end-to-end learning directly on raw data containing unknown anomalies. CoMet integrates Soft Confident Learning with Meta-Learning: it suppresses noisy samples via confidence-weighted loss and enhances robustness through meta-validation loss covariance regularization. The framework is model-agnostic, fully differentiable, and compatible with mainstream backbone architectures. Extensive experiments on MVTec-AD, VIADUCT, and KSDD2 benchmarks demonstrate significant improvements over state-of-the-art methods—achieving new performance records across multiple metrics. Crucially, CoMet exhibits strong robustness to varying proportions of anomalies in training data, confirming its practical viability in real-world scenarios where clean normal data cannot be guaranteed.

📝 Abstract
So-called unsupervised anomaly detection is better described as semi-supervised, as it assumes all training data are nominal. This assumption simplifies training but requires manual data curation, introducing bias and limiting adaptability. We propose Confident Meta-learning (CoMet), a novel training strategy that enables deep anomaly detection models to learn from uncurated datasets where nominal and anomalous samples coexist, eliminating the need for explicit filtering. Our approach integrates Soft Confident Learning, which assigns lower weights to low-confidence samples, and Meta-Learning, which stabilizes training by regularizing updates based on the training-validation loss covariance. This prevents overfitting and enhances robustness to noisy data. CoMet is model-agnostic and can be applied to any anomaly detection method trainable via gradient descent. Experiments on MVTec-AD, VIADUCT, and KSDD2 with two state-of-the-art models demonstrate the effectiveness of our approach, consistently improving over the baseline methods, remaining insensitive to anomalies in the training set, and setting a new state-of-the-art across all datasets.
Problem

Research questions and friction points this paper is trying to address.

Enables anomaly detection with uncurated mixed nominal and anomalous data
Uses confident meta-learning to reduce bias from manual data filtering
Improves robustness and performance across diverse anomaly detection datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Confident Meta-learning for uncurated datasets
Soft Confident Learning down-weights low-confidence samples
Meta-Learning stabilizes training via training-validation loss covariance
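
The two components listed above can be sketched in a minimal, hypothetical form. The exponential confidence weighting and the plain covariance regularizer below are illustrative assumptions based on the summary's description (confidence-weighted loss plus training-validation loss covariance regularization), not the paper's exact formulation; function names and the temperature/lambda parameters are invented for illustration.

```python
import math

def soft_confidence_weights(losses, temperature=1.0):
    """Map per-sample losses to normalized confidence weights.

    Higher-loss samples (likely anomalies hiding in the 'nominal'
    training set) receive exponentially smaller weights, so they are
    softly suppressed rather than hard-filtered.
    """
    scores = [math.exp(-l / temperature) for l in losses]
    total = sum(scores)
    return [s / total for s in scores]

def weighted_loss(losses, weights):
    """Confidence-weighted training loss over a batch."""
    return sum(w * l for w, l in zip(losses, weights))

def loss_covariance(train_hist, val_hist):
    """Covariance between training- and validation-loss trajectories.

    A penalty on this term (sign and scale are assumptions here)
    discourages updates that improve the training loss while the
    validation loss drifts, which is the overfitting signature the
    meta-learning component is meant to suppress.
    """
    n = len(train_hist)
    mt = sum(train_hist) / n
    mv = sum(val_hist) / n
    return sum((t - mt) * (v - mv) for t, v in zip(train_hist, val_hist)) / n

def comet_objective(sample_losses, train_hist, val_hist, lam=0.1):
    """Illustrative combined objective: weighted loss + covariance penalty."""
    w = soft_confidence_weights(sample_losses)
    return weighted_loss(sample_losses, w) + lam * loss_covariance(train_hist, val_hist)
```

Because everything here is differentiable in the per-sample losses, the same scheme could in principle wrap any backbone trained by gradient descent, which is the model-agnostic property the paper claims.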