🤖 AI Summary
This paper addresses event-based anomaly detection in stationary systems: identifying potentially malicious activity that deviates from historical normal behavior. The authors propose HyGLAD, a hypergraph-guided, interpretable pattern-modeling method with three key contributions: (1) a hypergraph learning framework that automatically infers equivalence classes of behaviorally similar entities; (2) synthesis of readable, semantically grounded regular-expression patterns from these equivalence classes, enabling intuitive, root-cause-aware anomaly attribution; and (3) an unsupervised, computationally efficient design that supports real-time inference. Evaluated on five real-world system datasets, the method improves precision by 1.2× and recall by 1.3× over the second-best deep-learning baseline, while being an order of magnitude more efficient in training and inference. Crucially, it runs effectively on a single CPU core, enabling lightweight deployment in resource-constrained environments.
📝 Abstract
We propose HyGLAD, a novel algorithm that automatically builds a set of interpretable patterns that model event data. These patterns can then be used to detect event-based anomalies in a stationary system, where any deviation from past behavior may indicate malicious activity. The algorithm infers equivalence classes of entities with similar behavior observed from the events, and then builds regular expressions that capture the values of those entities. Unlike deep-learning approaches, the regular expressions are directly interpretable, which also makes the detected anomalies interpretable. We evaluate HyGLAD against all 7 unsupervised anomaly detection methods from DeepOD on five datasets from real-world systems. The experimental results show that on average HyGLAD outperforms existing deep-learning methods while being an order of magnitude more efficient in training and inference (single CPU vs. GPU): precision improved by 1.2x and recall by 1.3x compared to the second-best baseline.
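As a rough illustration of the pattern-based detection idea (this is a simplified sketch, not the paper's algorithm: the grouping key, the naive regex synthesis, and the toy event data below are all assumptions), one can group entities into equivalence classes, generalize each class's observed values into a regular expression, and flag new values that fail to match:

```python
import re
from collections import defaultdict

# Hypothetical event log: (entity, value) pairs observed during normal operation.
events = [
    ("host-a", "10.0.0.12"),
    ("host-b", "10.0.0.47"),
    ("svc-x", "GET /api/v1/users"),
    ("svc-y", "GET /api/v1/orders"),
]

def generalize(values):
    """Naive regex synthesis: longest common prefix plus a character class
    for the remainder. A stand-in for HyGLAD's pattern construction."""
    prefix = values[0]
    for v in values[1:]:
        while not v.startswith(prefix):
            prefix = prefix[:-1]
    rest = {v[len(prefix):] for v in values}
    tail = r"\d+" if all(r.isdigit() for r in rest) else r".*"
    return re.escape(prefix) + tail

# Equivalence classes here come from a made-up grouping key (entity-name prefix);
# HyGLAD instead infers them from observed behavior via hypergraph learning.
classes = defaultdict(list)
for entity, value in events:
    classes[entity.split("-")[0]].append(value)

patterns = {cls: re.compile(generalize(vals)) for cls, vals in classes.items()}

def is_anomalous(cls, value):
    """An event is anomalous if its value deviates from the class pattern."""
    pat = patterns.get(cls)
    return pat is None or pat.fullmatch(value) is None

print(patterns["host"].pattern)          # learned pattern for the "host" class
print(is_anomalous("host", "10.0.0.99")) # conforms to the learned pattern
print(is_anomalous("host", "evil.example.com"))  # deviation -> flagged
```

Even in this toy form, the detector is directly interpretable: an alert can point to the exact pattern that was violated, which is what enables the root-cause-aware attribution the paper describes.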