🤖 AI Summary
This work addresses the challenge of detecting out-of-distribution anomalies, i.e., completely unseen abnormal classes, when only normal data and a very limited number of anomalous samples are available. To this end, the authors propose a multidirectional meta-learning framework that learns a manifold representation of the normal data in the inner loop and calibrates the decision boundary in the outer loop using the scarce anomalous samples, forming a bi-level optimization. The authors present this as the first approach to integrate multidirectional meta-learning with bi-level optimization, significantly enhancing generalization to unknown anomaly types. Experimental results across multiple benchmarks demonstrate that the proposed method effectively improves out-of-distribution anomaly detection performance.
📝 Abstract
In this paper, we address the problem of class-generalizable anomaly detection, where the objective is to develop a unified model, trained on the available normal data and a small amount of anomaly data, that detects completely unseen anomalies, also referred to as out-of-distribution (OOD) classes. Adding to this challenge, anomaly data are rare and costly to label. To this end, we propose a multidirectional meta-learning algorithm: at the inner level, the model learns the manifold of the normal data (representation); at the outer level, the model is meta-tuned with a few anomaly samples to maximize the softmax confidence margin between normal and anomaly samples (decision-surface calibration), treating normals as in-distribution (ID) and anomalies as out-of-distribution (OOD). By iteratively repeating this process over multiple episodes composed of predominantly normal samples and a small number of anomaly samples, we realize a multidirectional meta-learning framework. This two-level optimization, enhanced by multidirectional training, enables stronger generalization to unseen anomaly classes.
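To make the two-level structure concrete, here is a minimal NumPy sketch of the episode loop, not the authors' implementation: a linear map stands in for the deep encoder, a hypersphere-center distance stands in for the anomaly score, and the learning rates, margin, and toy data are all illustrative assumptions. The inner step fits the normal-data manifold; the outer step uses the few anomalies to push their scores past a margin, calibrating the decision surface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy episode data: "normal" samples cluster near the origin;
# a scarce set of labeled anomalies lies far away (illustrative only).
normals = rng.normal(0.0, 1.0, size=(200, 2))
anomalies = rng.normal(6.0, 1.0, size=(5, 2))

w = rng.normal(0.0, 0.1, size=(2, 2))  # linear "encoder" (stand-in for a deep network)
center = np.zeros(2)                   # hypersphere center modeling the normal manifold

def score(x, w):
    """Anomaly score: squared distance of the embedding from the normal-data center."""
    z = x @ w
    return np.sum((z - center) ** 2, axis=1)

lr_inner, lr_outer, margin = 0.01, 0.001, 25.0
for episode in range(300):
    # Inner level (representation): pull embedded normal samples toward the center.
    batch = normals[rng.choice(len(normals), 32, replace=False)]
    z = batch @ w
    w -= lr_inner * (2 * batch.T @ (z - center) / len(batch))

    # Outer level (decision-surface calibration): with a hinge on the margin,
    # push only those anomalies whose score is still below it, widening the
    # gap between normal (ID) and anomaly (OOD) scores.
    za = anomalies @ w
    inside = np.sum((za - center) ** 2, axis=1) < margin
    if inside.any():
        w += lr_outer * (2 * anomalies[inside].T @ (za[inside] - center) / inside.sum())
```

After a few hundred episodes the mean anomaly score sits near the margin while normal scores stay small, i.e., the score gap between ID and OOD samples has been opened by the outer step alone, using only the five anomalies.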