ROAST: Risk-aware Outlier-exposure for Adversarial Selective Training of Anomaly Detectors Against Evasion Attacks

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of conventional anomaly detection methods, which treat all benign data equally during training and thereby overlook noise induced by inter-patient physiological variability, leading to insufficient robustness and low recall. To overcome this, the authors propose ROAST, a novel framework that uniquely integrates risk-aware selective training with adversarial anomaly exposure. ROAST first identifies and selects low-risk patient data through a risk assessment mechanism and then enhances model robustness by injecting adversarial samples to explicitly expose the model to anomalous patterns. By focusing training on high-quality, low-noise data, ROAST substantially improves detection performance while reducing computational overhead. Experimental results demonstrate that ROAST achieves an average recall improvement of 16.2% and reduces training time by 88.3%, all while maintaining comparable precision.
📝 Abstract
Safety-critical domains like healthcare rely on deep neural networks (DNNs) for prediction, yet DNNs remain vulnerable to evasion attacks. Anomaly detectors (ADs) are widely used to protect DNNs, but conventional ADs are trained indiscriminately on benign data from all patients, overlooking physiological differences that introduce noise, degrade robustness, and reduce recall. In this paper, we propose ROAST, a novel risk-aware outlier exposure selective training framework that improves AD recall without sacrificing precision. ROAST identifies patients who are less vulnerable to attack and focuses training on these cleaner, more reliable data, thereby reducing false negatives and improving recall. To preserve precision, the framework applies outlier exposure by injecting adversarial samples into the training set of the less vulnerable patients, avoiding noisy data from others. Experiments show that ROAST increases recall by 16.2% while reducing the training time by 88.3% on average compared to indiscriminate training, with minimal impact on precision.
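The paper's exact risk metric and attack generator are not given here; a minimal sketch of the two-stage idea described in the abstract (risk-based patient selection, then outlier exposure with synthetic adversarial anomalies) might look like the following. The function names, the `risk_fn` callback, and the sign-flip perturbation are illustrative assumptions, not the authors' implementation.

```python
import random

def select_low_risk_patients(patient_data, risk_fn, keep_ratio=0.5):
    """Stage 1 (hypothetical): rank patients by a risk score and keep
    the least-vulnerable fraction as the clean training pool."""
    ranked = sorted(patient_data, key=lambda pid: risk_fn(patient_data[pid]))
    n_keep = max(1, int(len(ranked) * keep_ratio))
    return {pid: patient_data[pid] for pid in ranked[:n_keep]}

def outlier_exposure_set(benign, epsilon=0.1, seed=0):
    """Stage 2 (hypothetical): perturb each benign signal to create
    synthetic 'anomalous' samples the detector is explicitly exposed to,
    labelled 1 against the benign label 0."""
    rng = random.Random(seed)
    perturbed = [
        [v + epsilon * (1 if rng.random() < 0.5 else -1) for v in signal]
        for signal in benign
    ]
    X = benign + perturbed
    y = [0] * len(benign) + [1] * len(perturbed)
    return X, y
```

An anomaly detector would then be trained only on the selected patients' data plus their exposed outliers, which is where the claimed recall and training-time gains would come from.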
Problem

Research questions and friction points this paper is trying to address.

evasion attacks
anomaly detectors
physiological differences
recall degradation
training noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

risk-aware training
adversarial selective training
outlier exposure
anomaly detection
evasion attacks
Mohammed Elnawawy
Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
Gargi Mitra
Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
Shahrear Iqbal
Research Officer, National Research Council (NRC) Canada
Security and Privacy
Karthik Pattabiraman
Professor, Electrical and Computer Engineering, University of British Columbia
Dependability, Dependable Computing, Dependable Systems, Fault Injection, Cyber-Physical Systems Security