CEDL: Centre-Enhanced Discriminative Learning for Anomaly Detection

๐Ÿ“… 2025-11-15
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing supervised anomaly detection methods rely on known anomalous samples and thus generalise poorly to out-of-distribution anomalies; moreover, their anomaly scores lack geometric interpretability and probabilistic semantics. To address these limitations, we propose Centre-Enhanced Discriminative Learning (CEDL), a unified framework that jointly models the geometric structure of normality and the decision boundary. CEDL embeds a radial distance function into the prediction logits, so that anomaly scores directly represent the geometric distance from a sample to the normal-class centre, giving them intrinsic interpretability and calibration-free probabilistic meaning. Furthermore, we introduce a centring-based distance reparameterisation, allowing end-to-end joint optimisation of both the representation space and the discriminative objective. Experiments across tabular, time-series, and image benchmarks demonstrate that CEDL achieves consistently strong performance, improving detection accuracy for both known and unseen anomalies while enhancing generalisation.

๐Ÿ“ Abstract
Supervised anomaly detection methods perform well in identifying known anomalies that are well represented in the training set. However, they often struggle to generalise beyond the training distribution due to decision boundaries that lack a clear definition of normality. Existing approaches typically address this by regularising the representation space during training, leading to separate optimisation in latent and label spaces. The learned normality is therefore not directly utilised at inference, and their anomaly scores often fall within arbitrary ranges that require explicit mapping or calibration for probabilistic interpretation. To achieve unified learning of geometric normality and label discrimination, we propose Centre-Enhanced Discriminative Learning (CEDL), a novel supervised anomaly detection framework that embeds geometric normality directly into the discriminative objective. CEDL reparameterises the conventional sigmoid-derived prediction logit through a centre-based radial distance function, unifying geometric and discriminative learning in a single end-to-end formulation. This design enables interpretable, geometry-aware anomaly scoring without post-hoc thresholding or reference calibration. Extensive experiments on tabular, time-series, and image data demonstrate that CEDL achieves competitive and balanced performance across diverse real-world anomaly detection tasks, validating its effectiveness and broad applicability.
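As an illustration of the centre-based radial scoring the abstract describes, the sketch below assumes a Euclidean distance in the embedding space and a scalar radius; the function names and the exact logit form (`distance - radius`) are hypothetical, not the paper's formulation. The point is that the sigmoid of such a logit already behaves like a probability of abnormality, with no post-hoc thresholding or calibration step.

```python
import numpy as np

def radial_logit(z, centre, radius):
    """Centre-based radial logit (a sketch): positive when the embedding
    lies outside the ball of normality, negative inside it."""
    dist = np.linalg.norm(z - centre, axis=-1)
    return dist - radius

def anomaly_probability(z, centre, radius):
    """Sigmoid of the radial logit: the geometric distance to the normal
    centre doubles as a probability-like anomaly score."""
    return 1.0 / (1.0 + np.exp(-radial_logit(z, centre, radius)))

centre = np.zeros(2)
normal = np.array([[0.1, -0.2]])   # embedding close to the centre
anomaly = np.array([[3.0, 4.0]])   # embedding far from the centre
print(anomaly_probability(normal, centre, radius=1.0)[0])   # ≈ 0.32, inside the ball
print(anomaly_probability(anomaly, centre, radius=1.0)[0])  # ≈ 0.98, outside the ball
```

Because the score is a monotone function of distance to the centre, 0.5 falls exactly at the learned radius, which is what gives the score its geometric reading.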
Problem

Research questions and friction points this paper is trying to address.

Improves generalization beyond training distribution in anomaly detection
Unifies geometric normality learning with discriminative objective functions
Enables interpretable anomaly scoring without post-processing calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embeds geometric normality into discriminative objective
Reparameterizes prediction logit via radial distance function
Unifies geometric and discriminative learning end-to-end
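To make the "unified end-to-end" point above concrete, here is a minimal sketch of a binary cross-entropy loss computed directly on centre-based radial logits, so a single objective shapes both the decision boundary and the geometry of normality. The loss form and variable names are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def bce_on_radial_logits(Z, y, centre, radius):
    """Binary cross-entropy on centre-based radial logits (a sketch).
    Z: (n, d) embeddings; y: (n,) labels with 1 = anomalous.
    One objective trains the discriminator and pulls the centre onto
    normal data, since both enter the same logit."""
    dist = np.linalg.norm(Z - centre, axis=-1)
    logits = dist - radius
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # numerical guard for log
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

Z = np.array([[0.1, 0.0], [0.2, -0.1], [3.0, 4.0]])
y = np.array([0, 0, 1])
good = bce_on_radial_logits(Z, y, centre=np.zeros(2), radius=1.0)
bad = bce_on_radial_logits(Z, y, centre=np.array([3.0, 4.0]), radius=1.0)
print(good < bad)  # True: a centre placed on the normal cluster yields lower loss
```

Minimising this loss by gradient descent over the encoder, the centre, and the radius jointly is what "end-to-end" means here: there is no separate representation-regularisation stage whose geometry is discarded at inference.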
๐Ÿ”Ž Similar Papers
No similar papers found.
Zahra Zamanzadeh Darban
DSAI, Monash University, Melbourne, Victoria, Australia
Qizhou Wang
PhD @ HKBU
machine learning
Charu C. Aggarwal
IBM T. J. Watson Research Center, IBM, Yorktown Heights, NY, USA
Geoffrey I. Webb
DSAI, Monash University, Melbourne, Victoria, Australia
Ehsan Abbasnejad
Assoc. Prof. Monash University
Machine learning, Responsible machine learning, Vision and Language, Machine Reasoning, Bayesian
Mahsa Salehi
Senior Lecturer, Monash University
Anomaly Detection, Time Series Analysis, Machine Learning, Brain EEG Analysis