🤖 AI Summary
This paper addresses the problem that conformal prediction for anomaly detection becomes overly conservative, and thus loses statistical power, when the reference data contain a small number of non-adversarial, unlabeled anomalies (i.e., benign contamination). To tackle this, we propose an active data-cleaning framework that integrates active learning with conformal inference under a limited labeling budget, efficiently identifying and removing suspicious anomalies while preserving statistical validity and computational efficiency. Theoretically, we prove that the cleaned procedure maintains strict control of the type-I error rate even under contamination, and we provide the first characterization of the intrinsic conservatism of conformal methods in this setting. Empirically, data cleaning significantly improves detection power while the type-I error rate consistently stays below the nominal level. To our knowledge, this is the first systematic framework unifying active learning and conformal inference to deliver robust statistical guarantees for anomaly detection.
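For background, the conservatism mechanism can be seen directly in the standard score-based conformal p-value (a textbook formulation, with notation chosen here for illustration rather than taken from the paper): given reference scores $s_1, \dots, s_n$ and a test point with score $s$, where higher scores indicate more anomalous points,

$$
\hat{p}(s) = \frac{1 + \#\{\, i \in \{1,\dots,n\} : s_i \ge s \,\}}{n+1},
$$

and the test point is flagged as an outlier when $\hat{p}(s) \le \alpha$. If a small fraction of the reference scores $s_i$ come from true outliers with stochastically larger scores, the count in the numerator can only grow, so p-values are inflated, the test rejects less often than the nominal level $\alpha$, and type-I error control remains valid but conservative, at the cost of power.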
📝 Abstract
Conformal prediction is a flexible framework for calibrating machine learning predictions, providing distribution-free statistical guarantees. In outlier detection, this calibration relies on a reference set of labeled inlier data to control the type-I error rate. However, obtaining a perfectly labeled inlier reference set is often unrealistic; a more practical scenario involves access to a contaminated reference set containing a small fraction of outliers. This paper analyzes the impact of such contamination on the validity of conformal methods. We prove that under realistic, non-adversarial settings, calibration on contaminated data yields conservative type-I error control, shedding light on the inherent robustness of conformal methods. This conservativeness, however, typically comes at a loss of power. To alleviate this limitation, we propose a novel active data-cleaning framework that leverages a limited labeling budget and an outlier detection model to selectively annotate the data points in the contaminated reference set that are suspected of being outliers. By removing only the annotated outliers in this "suspicious" subset, we can effectively enhance power while mitigating the risk of inflating the type-I error rate, as supported by our theoretical analysis. Experiments on real datasets validate the conservative behavior of conformal methods under contamination and show that the proposed data-cleaning strategy improves power without sacrificing validity.
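To make the pipeline concrete, below is a minimal sketch in Python of a score-based split-conformal detector with active cleaning. The detector (scikit-learn's IsolationForest), the synthetic data, the labeling budget, and all variable names are illustrative assumptions, not the authors' implementation or experimental settings.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
inliers = lambda n: rng.normal(0.0, 1.0, size=(n, 2))
outliers = lambda n: rng.normal(4.0, 1.0, size=(n, 2))

# Train the score function on a separate split so the reference set stays
# exchangeable with test inliers (split-conformal convention).
train = inliers(1000)
detector = IsolationForest(random_state=0).fit(train)
score = lambda X: -detector.score_samples(X)   # higher = more anomalous

# Contaminated reference set: mostly inliers plus a few unlabeled outliers.
reference = np.vstack([inliers(950), outliers(50)])
ref_scores = score(reference)

# Active cleaning: spend a limited labeling budget on the highest-scoring
# ("suspicious") reference points and remove those confirmed as outliers.
budget = 30
suspicious = np.argsort(ref_scores)[-budget:]
is_outlier = lambda idx: idx >= 950            # stand-in for a human annotator
confirmed = [i for i in suspicious if is_outlier(i)]
cleaned_scores = np.delete(ref_scores, confirmed)

# Conformal p-value calibrated on the cleaned reference; flag the test point
# as an outlier when the p-value falls at or below the target level alpha.
def conformal_pvalue(test_score, calib_scores):
    n = len(calib_scores)
    return (1 + np.sum(calib_scores >= test_score)) / (n + 1)

alpha = 0.05
test_score = score(outliers(1))[0]
p = conformal_pvalue(test_score, cleaned_scores)
print(f"p-value = {p:.4f}, flagged = {p <= alpha}")
```

Setting budget = 0 in this sketch recovers plain calibration on the contaminated reference, which the paper shows remains valid but conservative; spending the budget on the most suspicious points removes contaminating scores and restores power.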