🤖 AI Summary
This work proposes a novel paradigm for anomaly detection, termed On-Model AD, which leverages the intrinsic knowledge of the primary model—specifically, the normal output ranges of its neurons—to detect anomalies without requiring a separate, dedicated detection model. Traditional approaches deploy independent anomaly detectors, overlooking the rich distributional information already embedded within the main model and thereby introducing redundancy and inefficiency. In contrast, the proposed method requires no additional training or deployment overhead. Building on this paradigm, the authors introduce RangeAD, an algorithm that achieves strong detection performance on high-dimensional tasks while substantially reducing inference costs, effectively balancing accuracy and computational efficiency.
📝 Abstract
In practice, machine learning methods commonly require anomaly detection (AD) to filter inputs or detect distributional shifts. Typically, this is implemented by running a separate AD model alongside the primary model. However, this separation ignores the fact that the primary model already encodes substantial information about the target distribution. In this paper, we introduce On-Model AD, a setting for anomaly detection that explicitly leverages access to a related machine learning model. Within this setting, we propose RangeAD, an algorithm that utilizes neuron-wise output ranges derived from the primary model. RangeAD achieves strong detection performance even on high-dimensional tasks while incurring substantially lower inference costs. Our results demonstrate the potential of the On-Model AD setting as a practical framework for efficient anomaly detection.
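The abstract does not spell out RangeAD's algorithm, but the core idea (scoring inputs by whether the primary model's neuron-wise outputs fall outside their calibrated normal ranges) can be sketched as follows. This is a minimal illustrative sketch, not the authors' method: the toy one-layer "primary model", the calibration data, and the out-of-range-fraction score are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "primary model": a fixed random one-hidden-layer network.
# In the On-Model AD setting this would be an already-deployed model.
W = rng.normal(size=(8, 16))
b = rng.normal(size=16)

def hidden_activations(x):
    """Hidden-layer (neuron) outputs of the primary model, with ReLU."""
    return np.maximum(np.atleast_2d(x) @ W + b, 0.0)

# Calibration: record each neuron's output range on in-distribution data.
# No extra model is trained; we only observe the primary model's neurons.
normal_data = rng.normal(size=(1000, 8))
acts = hidden_activations(normal_data)
lo, hi = acts.min(axis=0), acts.max(axis=0)

def anomaly_score(x):
    """Fraction of neurons whose output falls outside its normal range."""
    a = hidden_activations(x)
    out_of_range = (a < lo) | (a > hi)
    return out_of_range.mean(axis=1)

in_dist = rng.normal(size=(5, 8))           # resembles calibration data
out_dist = rng.normal(size=(5, 8)) * 10.0   # far outside the normal range
print("in-dist score: ", anomaly_score(in_dist).mean())
print("out-dist score:", anomaly_score(out_dist).mean())
```

At inference time the score costs only a few elementwise comparisons on activations the primary model computes anyway, which is consistent with the paper's claim of substantially lower inference cost than running a separate AD model.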