🤖 AI Summary
This paper addresses unsupervised anomaly detection by proposing Mass Repulsing Optimal Transport (MROT), a novel paradigm. Unlike classical optimal transport—where the self-mapping is constrained to be the identity—MROT introduces a sample-quality repulsion mechanism: it enforces high transportation costs for samples in low-density regions while minimizing the overall transport cost, thereby yielding a differentiable, geometry-driven anomaly score. The method is fully unsupervised, requiring neither labels nor generative models; it is grounded in the Wasserstein distance and supports end-to-end differentiable optimization. Evaluated on multiple standard benchmarks and industrial fault detection tasks, MROT achieves average AUC improvements of 3.2–7.8 percentage points over state-of-the-art unsupervised approaches, demonstrating significant performance gains and robustness.
📝 Abstract
Detecting anomalies in datasets is a longstanding problem in machine learning. In this context, anomalies are defined as a sample that significantly deviates from the remaining data. Meanwhile, optimal transport (OT) is a field of mathematics concerned with the transportation, between two probability measures, at least effort. In classical OT, the optimal transportation strategy of a measure to itself is the identity. In this paper, we tackle anomaly detection by forcing samples to displace its mass, while keeping the least effort objective. We call this new transportation problem Mass Repulsing Optimal Transport (MROT). Naturally, samples lying in low density regions of space will be forced to displace mass very far, incurring a higher transportation cost. We use these concepts to design a new anomaly score. Through a series of experiments in existing benchmarks, and fault detection problems, we show that our algorithm improves over existing methods.