🤖 AI Summary
This paper addresses security threats to image models during both training (e.g., data poisoning) and inference (e.g., evasion, impersonation, and model inversion attacks) by proposing a semi-supervised anomaly detection framework based on 2D signatures. The method integrates rough path theory with 2D-signature representations to encode local image structures as high-order geometric features, enabling lightweight detection of adversarial perturbations without requiring fully labeled data. Designed for robustness and efficiency, the framework achieves superior detection performance across diverse attack scenarios compared to state-of-the-art methods while significantly reducing computational overhead, accelerating perturbation identification by 37%.
📝 Abstract
The rapid advancement of machine learning technologies raises questions about the security of machine learning models, with respect to both training-time (poisoning) and test-time (evasion, impersonation, and inversion) attacks. Models performing image-related tasks, e.g., detection and classification, are vulnerable to adversarial attacks that can degrade their performance and produce undesirable outcomes. This paper introduces a novel technique for anomaly detection in images called 2DSig-Detect, which uses a 2D-signature-embedded semi-supervised framework rooted in rough path theory. We demonstrate our method in adversarial settings for training-time and test-time attacks, and benchmark our framework against other state-of-the-art methods. Using 2DSig-Detect for anomaly detection, we show both superior performance and a reduction in the computation time needed to detect the presence of adversarial perturbations in images.
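To make the pipeline concrete, here is a minimal sketch of the general idea: extract signature-style features built from discrete 2D increments of image patches, fit a statistical profile on clean data only (the semi-supervised step), and score new patches by distance from that profile. This is an illustrative simplification under stated assumptions, not the paper's actual 2D-signature construction; the truncation to level two, the patch-increment features, and the Mahalanobis-distance scoring rule are all choices made here for brevity.

```python
import numpy as np

def patch_level2_features(patch):
    """Toy 2D-signature-style features for one image patch.

    Level-1 terms are summed discrete increments along each axis plus the
    mixed (area-like) increment; level-2 terms are their pairwise products.
    This is a simplified stand-in for a truncated 2D signature.
    """
    dx = np.diff(patch, axis=0)[:, :-1]            # vertical increments
    dy = np.diff(patch, axis=1)[:-1, :]            # horizontal increments
    dxy = np.diff(np.diff(patch, axis=0), axis=1)  # mixed second increment
    s1 = np.array([dx.sum(), dy.sum(), dxy.sum()])
    s2 = np.outer(s1, s1).ravel()                  # level-2 (products of level-1)
    return np.concatenate([s1, s2])

def fit_detector(clean_feats, eps=1e-6):
    """Fit mean and regularized inverse covariance on clean-data features only."""
    mu = clean_feats.mean(axis=0)
    cov = np.cov(clean_feats, rowvar=False) + eps * np.eye(clean_feats.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(feat, mu, inv_cov):
    """Squared Mahalanobis distance from the clean-data profile."""
    d = feat - mu
    return float(d @ inv_cov @ d)
```

A patch whose score exceeds a threshold calibrated on held-out clean patches would be flagged as potentially perturbed; the covariance regularization `eps` guards against the near-singularity that arises because level-2 features are functions of level-1 features.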