🤖 AI Summary
Existing adversarial missingness (AM) attacks target only full-information maximum likelihood estimation (MLE), leaving them inapplicable to widely used non-MLE missing-data handling methods such as complete-case analysis, mean imputation, and regression imputation.
Method: We propose a probabilistic missingness mechanism grounded in asymptotic statistics and develop an efficient bilevel optimization framework that avoids combinatorial search. By perturbing the missingness pattern, our attack disrupts parameter inference in generalized linear models.
Contribution/Results: This work is the first to extend AM attacks beyond MLE to mainstream imputation and deletion strategies. On real-world datasets (e.g., California Housing), injecting less than 20% adversarial missingness suffices to flip the p-values of key features from significant to insignificant, and the attack remains effective against data-valuation-based defenses. The framework establishes a scalable, general-purpose adversarial paradigm for missing-data processing pipelines, removing the dependency of AM attacks on MLE assumptions.
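As a toy illustration of the p-value effect (this is not the paper's actual attack, which learns the missingness mechanism via bilevel optimization), the sketch below masks the 20% highest-leverage entries of a feature and lets the modeler mean-impute them; the feature's p-value in a simple linear fit rises as a result. The masking heuristic, sample sizes, and seed are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)  # x is a genuinely significant feature

# Full-data fit for reference.
full = linregress(x, y)

# Hypothetical adversarial mask: hide the 20% of entries with the
# largest |x|, i.e., the most informative (highest-leverage) points.
k = int(0.2 * n)
missing = np.zeros(n, dtype=bool)
missing[np.argsort(-np.abs(x))[:k]] = True

# Modeler's remediation: mean imputation from the observed entries.
x_imp = x.copy()
x_imp[missing] = x[~missing].mean()
imputed = linregress(x_imp, y)

# The evidence for x weakens: the p-value grows under the attack.
print(full.pvalue, imputed.pvalue)
```

Even this crude mask already degrades significance; the paper's optimized mechanism goes further by choosing the missingness probabilities to directly target the modeler's downstream estimator.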
📝 Abstract
Missing data is commonly encountered in practice, and when the missingness is non-ignorable, effective remediation depends on knowledge of the missingness mechanism. Learning the underlying missingness mechanism from the data is not possible in general, so adversaries can exploit this fact by maliciously engineering non-ignorable missingness mechanisms. Such Adversarial Missingness (AM) attacks have only recently been motivated and introduced, and then successfully tailored to mislead causal structure learning algorithms into hiding specific cause-and-effect relationships. However, existing AM attacks assume the modeler (victim) uses full-information maximum likelihood methods to handle the missing data, and are of limited applicability when the modeler uses different remediation strategies. In this work we focus on associational learning in the context of AM attacks. We consider (i) complete case analysis, (ii) mean imputation, and (iii) regression-based imputation as alternative strategies used by the modeler. Instead of combinatorially searching for missing entries, we propose a novel probabilistic approximation by deriving the asymptotic forms of these methods used for handling the missing entries. We then formulate the learning of the adversarial missingness mechanism as a bi-level optimization problem. Experiments on generalized linear models show that AM attacks can be used to change the p-values of features from significant to insignificant in real datasets, such as the California Housing dataset, while using relatively moderate amounts of missingness (<20%). Additionally, we assess the robustness of our attacks against defense strategies based on data valuation.
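To see why complete-case analysis is vulnerable to an engineered non-ignorable mechanism, consider a minimal sketch (assumptions: a hand-crafted mask rather than the paper's learned mechanism, and missingness that depends on both the feature and the response). Dropping the rows that contribute most to the feature-response covariance biases the complete-case slope toward zero:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

# Hypothetical non-ignorable mechanism: hide the 20% of rows whose
# product x*y is largest, i.e., the rows carrying most of the
# positive x-y covariance.
k = int(0.2 * n)
missing = np.zeros(n, dtype=bool)
missing[np.argsort(-(x * y))[:k]] = True

full = linregress(x, y)
cc = linregress(x[~missing], y[~missing])  # complete-case analysis

# The complete-case slope is biased well below the full-data slope.
print(full.slope, cc.slope)
```

Because the deletion probability depends on the unobserved response, no reweighting the modeler performs under an ignorability assumption can undo this bias, which is the opening the AM attack exploits.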